996 results for "State constraints"


Relevance:

30.00%

Publisher:

Abstract:

We have calculated 90% confidence limits on the steady-state rate of catastrophic disruptions of main belt asteroids, in terms of the absolute magnitude at which one catastrophic disruption occurs per year, as a function of the post-disruption increase in brightness (Δm) and subsequent brightness decay rate (τ). The confidence limits were calculated using the brightest unknown main belt asteroid (V=18.5) detected with the Pan-STARRS1 (PS1) telescope. We measured PS1's catastrophic disruption detection efficiency over a 453-day interval using the Pan-STARRS moving object processing system (MOPS) and a simple model for the catastrophic disruption event's photometric behavior in a small aperture centered on the event. We then calculated confidence contours over ranges of Δm and τ encompassing measured values from known cratering and disruption events and our model's predictions. Our simplistic catastrophic disruption model suggests parameter values that would imply H0 ≳ 28, strongly inconsistent with H0,B2005 = 23.26 ± 0.02 predicted by Bottke et al. (Bottke, W.F., Durda, D.D., Nesvorný, D., Jedicke, R., Morbidelli, A., Vokrouhlický, D., Levison, H.F. [2005]. Icarus, 179, 63–94.) using purely collisional models. However, if we assume that H0 = H0,B2005, our results constrain Δm and τ to values inconsistent with our simplistic impact-generated catastrophic disruption model. We postulate that the solution to the discrepancy is that >99% of main belt catastrophic disruptions in the size range to which this study was sensitive (∼100 m) are not impact-generated, but are instead due to fainter rotational breakups, of which the recent discoveries of disrupted asteroids P/2013 P5 and P/2013 R3 are probable examples. We estimate that current and upcoming asteroid surveys may discover up to 10 catastrophic disruptions/year brighter than V=18.5.
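The confidence-limit logic above follows standard Poisson upper-limit reasoning: with zero disruptions detected during the survey window, the 90% confidence bound on the rate scales inversely with the detection efficiency and the interval length. A minimal sketch, with a made-up efficiency value (the paper's actual measured efficiency is not quoted here):

```python
import math

def rate_upper_limit(n_detected, efficiency, window_days, cl=0.90):
    """Upper confidence limit on a Poisson event rate (events/day).
    For n_detected = 0, P(0 events | lam * eff * T) = exp(-lam * eff * T)
    must exceed 1 - cl, so lam <= -ln(1 - cl) / (eff * T)."""
    if n_detected == 0:
        return -math.log(1.0 - cl) / (efficiency * window_days)
    raise NotImplementedError("sketch covers the zero-detection case only")

# Hypothetical numbers: the paper's 453-day window, assumed 50% efficiency
limit = rate_upper_limit(0, 0.5, 453.0)
```

With perfect efficiency over a single day the limit reduces to ln(10) ≈ 2.3 events/day, the familiar zero-detection bound.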

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the problem of learning Bayesian network structures from data based on score functions that are decomposable. It describes properties that strongly reduce the time and memory costs of many known methods without losing global optimality guarantees. These properties are derived for different score criteria such as the Minimum Description Length (or Bayesian Information Criterion), the Akaike Information Criterion and the Bayesian Dirichlet Criterion. A branch-and-bound algorithm is then presented that integrates structural constraints with data in a way that guarantees global optimality. As an example, structural constraints are used to map the problem of structure learning in dynamic Bayesian networks into a corresponding augmented Bayesian network. Finally, we show empirically the benefits of using the properties with state-of-the-art methods and with the new algorithm, which is able to handle larger data sets than before.
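Decomposability is what makes such reductions possible: the network score is a sum of independent per-family terms, so candidate parent sets can be scored and pruned family by family. A minimal sketch of a decomposable BIC score on toy binary data (the data and structures are illustrative, not from the paper):

```python
import math
from collections import Counter

def family_bic(data, child, parents, arity):
    """BIC contribution of one node given its parent set: maximum
    log-likelihood of the child's conditional distribution minus
    0.5 * log(N) * (number of free parameters)."""
    n = len(data)
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    ll = sum(c * math.log(c / parent_counts[pa]) for (pa, _), c in joint.items())
    n_parent_configs = 1
    for p in parents:
        n_parent_configs *= arity[p]
    free_params = n_parent_configs * (arity[child] - 1)
    return ll - 0.5 * math.log(n) * free_params

def network_bic(data, structure, arity):
    """Total score is the sum of per-family scores -- the decomposability
    property that lets branch-and-bound prune families independently."""
    return sum(family_bic(data, child, parents, arity)
               for child, parents in structure.items())

# Toy binary data in which X0 deterministically predicts X1
data = [(0, 0), (0, 0), (1, 1), (1, 1)] * 5
arity = {0: 2, 1: 2}
with_edge = network_bic(data, {0: (), 1: (0,)}, arity)  # X0 -> X1
no_edge = network_bic(data, {0: (), 1: ()}, arity)      # no edges
```

On this data the structure with the edge scores higher, since the gain in likelihood outweighs the BIC penalty for the extra parameters.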

Relevance:

30.00%

Publisher:

Abstract:

We present griz_P1 light curves of 146 spectroscopically confirmed Type Ia supernovae (SNe Ia; 0.03 < z < 0.65) discovered during the first 1.5 yr of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We find that the systematic uncertainties in the photometric system are currently 1.2% without accounting for the uncertainty in the Hubble Space Telescope Calspec definition of the AB system. A Hubble diagram is constructed with a subset of 113 out of 146 SNe Ia that pass our light curve quality cuts. The cosmological fit to 310 SNe Ia (113 PS1 SNe Ia + 222 light curves from 197 low-z SNe Ia), using only supernovae (SNe) and assuming a constant dark energy equation of state and flatness, yields w = -1.120 +0.360/-0.206 (stat) +0.269/-0.291 (sys). When combined with BAO+CMB(Planck)+H0, the analysis yields ΩM = 0.280 +0.013/-0.012 and w = -1.166 +0.072/-0.069 including all identified systematics. The value of w is inconsistent with the cosmological constant value of -1 at the 2.3σ level. Tension endures after removing either the baryon acoustic oscillation (BAO) or the H0 constraint, though it is strongest when including the H0 constraint. If we include WMAP9 cosmic microwave background (CMB) constraints instead of those from Planck, we find w = -1.124 +0.083/-0.065, which diminishes the discord to <2σ. We cannot conclude whether the tension with flat ΛCDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 SN sample, with roughly three times as many SNe, should provide more conclusive results.
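The sense in which the fit pulls w below -1 can be illustrated with a flat wCDM luminosity distance: a more negative constant equation of state dilutes dark energy faster into the past, giving larger distances at fixed ΩM. A sketch (the H0 value and the simple trapezoidal integration are illustrative choices, not the paper's analysis pipeline):

```python
import math

def luminosity_distance(z, omega_m, w, h0=70.0, n=2000):
    """Luminosity distance (Mpc) in a flat wCDM cosmology with constant w,
    via trapezoidal integration of 1/E(z') where
    E(z) = sqrt(Om (1+z)^3 + (1 - Om) (1+z)^{3(1+w)})."""
    c = 299792.458  # speed of light, km/s
    def inv_e(zz):
        return 1.0 / math.sqrt(omega_m * (1 + zz) ** 3
                               + (1 - omega_m) * (1 + zz) ** (3 * (1 + w)))
    dz = z / n
    integral = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, n):
        integral += inv_e(i * dz)
    integral *= dz
    return (1 + z) * (c / h0) * integral

# Comparing the cosmological constant with the best-fit phantom value
d_lcdm = luminosity_distance(0.5, 0.280, -1.0)
d_fit = luminosity_distance(0.5, 0.280, -1.166)
```

Because d_fit > d_lcdm at every z > 0, supernovae that appear slightly fainter than ΛCDM predicts drive the fitted w below -1.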

Relevance:

30.00%

Publisher:

Abstract:

In this brief, a hybrid filter algorithm is developed to deal with the state estimation (SE) problem for power systems by taking into account the impact of phasor measurement units (PMUs). Our aim is to include PMU measurements when designing dynamic state estimators for power systems with traditional measurements. Since data dropouts inevitably occur in the transmission channels of traditional measurements from the meters to the control center, the missing-measurement phenomenon is also tackled in the state estimator design. In the framework of the extended Kalman filter (EKF) algorithm, the PMU measurements are treated as inequality constraints on the states with the aid of a statistical criterion, and the addressed SE problem then becomes a constrained optimization problem based on the probability-maximization method. The resulting constrained optimization problem is solved using the particle swarm optimization algorithm together with the penalty function approach. The proposed algorithm is applied to estimate the states of power systems with both traditional and PMU measurements in the presence of the probabilistic data-missing phenomenon. Extensive simulations are carried out on the IEEE 14-bus test system, and it is shown that the proposed algorithm gives much improved estimation performance over the traditional EKF method.
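The penalty-function treatment of inequality constraints can be sketched as follows: the constrained estimate minimizes a prior-weighted least-squares cost plus a quadratic penalty for constraint violation, with the minimization done by a small particle swarm. All numbers and the constraint below are hypothetical stand-ins, not the brief's power-system model:

```python
import random
random.seed(0)

def penalized_cost(x, prior, weight, g, rho):
    """Prior-weighted least squares plus a quadratic penalty for
    violating the inequality constraint g(x) <= 0 (the role PMU
    measurements play as state constraints)."""
    base = sum(w * (xi - pi) ** 2 for xi, pi, w in zip(x, prior, weight))
    violation = max(0.0, g(x))
    return base + rho * violation ** 2

def pso_minimize(cost, dim, n_particles=30, iters=200, span=2.0):
    """Minimal global-best particle swarm optimiser."""
    pos = [[random.uniform(-span, span) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g_idx = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g_idx][:], pbest_cost[g_idx]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest

# Hypothetical 2-state example: the filter prior says x = (1.2, 0.8),
# but a PMU-style constraint requires x[0] + x[1] <= 1.5.
prior, weight = (1.2, 0.8), (1.0, 1.0)
g = lambda x: x[0] + x[1] - 1.5
est = pso_minimize(lambda x: penalized_cost(x, prior, weight, g, rho=1e4), dim=2)
```

The swarm lands near the projection of the prior onto the constraint boundary, which is the behaviour the penalty approach is designed to produce.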

Relevance:

30.00%

Publisher:

Abstract:

‘Empowerment’ is a term much used by policy-makers with an interest in improving service delivery and promoting different forms of neighbourhood governance. But the term is ambiguous and has no generally accepted definition. Indeed, there is a growing paradox between the rhetoric of community empowerment and an apparent shift towards increased centralisation of power away from the neighbourhood in developed economies. This article explores the literature relating to empowerment and identifies two broad conceptions which reflect different emphases on neo-liberalism. It goes on to discuss two models illustrating different levels of state intervention at the neighbourhood level and sets out evidence from two neighbourhood councils in Milton Keynes in central England. In conclusion, it is argued that those initiatives which are top-down, state-led policy initiatives tend to result in the least empowerment (as defined by government), whereas the bottom-up, self-help projects, which may be partly state-enabled, at least provide an opportunity to create the spaces where there is some potential for varying degrees of transformation. Further empirical research is needed to test how far localist responses can challenge constraints on empowerment imposed by neo-liberalism.

Relevance:

30.00%

Publisher:

Abstract:

This book explains why it was possible for the Workers' Party (PT) in Brazil and the African National Congress (ANC) in South Africa to pursue a developmental state trade policy in spite of neoliberal constraints. The theoretical lenses are three-fold: it applies state theory (macro-level), policy network analysis (meso-level) and theories on political parties with an emphasis on factional politics (micro-level). The book highlights the socio-political relevance of comparatively progressive policy frameworks and expands the debate on how to regain national policy space for progressive reform policies even under neoliberal constraints.

Relevance:

30.00%

Publisher:

Abstract:

The formulation of four-dimensional variational data assimilation allows the incorporation of constraints into the cost function which need only be weakly satisfied. In this paper we investigate the value of imposing conservation properties as weak constraints. Using the example of the two-body problem of celestial mechanics, we compare weak constraints based on conservation laws with a constraint on the background state. We show how the imposition of conservation-based weak constraints changes the nature of the gradient equation. Assimilation experiments demonstrate how this can add extra information to the assimilation process, even when the underlying numerical model is conserving.
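The weak-constraint idea can be sketched with a toy conserving model: the cost function gains an extra term penalising drift of an invariant along the trajectory, alongside the usual background and observation terms. A minimal illustration with a harmonic oscillator standing in for the two-body problem (all weights and values are made up):

```python
import math

def model_step(x, dt=0.05):
    """Symplectic-Euler step of a unit harmonic oscillator (position,
    velocity). This integrator conserves energy well but not exactly."""
    q, p = x
    p = p - dt * q
    q = q + dt * p
    return (q, p)

def energy(x):
    q, p = x
    return 0.5 * (x[0] ** 2 + x[1] ** 2)

def cost(x0, xb, obs, sigma_b, sigma_o, lam):
    """Weak-constraint 4D-Var-style cost: background term + observation
    misfit along the trajectory + a weakly enforced conservation term
    penalising drift of the invariant (here, energy) from its initial
    value. lam sets how strongly conservation is imposed."""
    j = sum((a - b) ** 2 for a, b in zip(x0, xb)) / sigma_b ** 2
    x, e0 = x0, energy(x0)
    for y in obs:
        x = model_step(x)
        j += (x[0] - y) ** 2 / sigma_o ** 2   # observe position only
        j += lam * (energy(x) - e0) ** 2      # conservation weak constraint
    return j

# Synthetic truth and position observations generated by the same model
truth = (1.0, 0.0)
x, obs = truth, []
for _ in range(20):
    x = model_step(x)
    obs.append(x[0])
j_truth = cost(truth, xb=(0.9, 0.1), obs=obs, sigma_b=1.0, sigma_o=0.1, lam=10.0)
j_wrong = cost((0.5, 0.5), xb=(0.9, 0.1), obs=obs, sigma_b=1.0, sigma_o=0.1, lam=10.0)
```

Even though the model here is (nearly) conserving, the extra term contributes to the cost and hence to its gradient, which is the effect the paper investigates.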

Relevance:

30.00%

Publisher:

Abstract:

The combination of model predictive control based on linear models (MPC) with feedback linearization (FL) has attracted interest for a number of years, giving rise to MPC+FL control schemes. An important advantage of such schemes is that feedback linearizable plants can be controlled with a linear predictive controller with a fixed model. Handling input constraints within such schemes is difficult, since simple bound constraints on the input become state dependent because of the nonlinear transformation introduced by feedback linearization. This paper introduces a technique for handling input constraints within a real-time MPC+FL scheme, where the plant model employed is a class of dynamic neural networks. The technique is based on a simple affine transformation of the feasible area. A simulated case study is presented to illustrate the use and benefits of the technique.
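The state dependence of the transformed constraints is easy to see in the scalar case: fixed bounds on the physical input u map, through the linearizing transformation, to bounds on the virtual input v that move and stretch with the state. A sketch with an invented nonlinearity (not the paper's neural-network plant model or its affine-transformation technique):

```python
def fl_input_bounds(x, u_min, u_max):
    """For a feedback-linearizable scalar system x' = f(x) + g(x) u with
    linearizing input u = (v - f(x)) / g(x), fixed bounds on the physical
    input u map to state-dependent bounds on the virtual input v:
        v in [f(x) + g(x) u_min, f(x) + g(x) u_max]   (for g(x) > 0).
    The drift and gain below are hypothetical, purely for illustration."""
    f = -x ** 3          # assumed drift term
    g = 1.0 + x ** 2     # assumed input gain, positive for all x
    return f + g * u_min, f + g * u_max

def clip_virtual_input(v, x, u_min=-1.0, u_max=1.0):
    """Saturate the virtual input to the state-dependent feasible set."""
    lo, hi = fl_input_bounds(x, u_min, u_max)
    return min(max(v, lo), hi)

# At different states the feasible interval for v shifts, which is exactly
# why fixed input bounds no longer suffice after feedback linearization.
x = 2.0
v = clip_virtual_input(10.0, x)
u = (v - (-x ** 3)) / (1.0 + x ** 2)   # recover the physical input
```

After clipping v to its state-dependent interval, the recovered physical input lands exactly on its fixed bound.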

Relevance:

30.00%

Publisher:

Abstract:

A novel optimising controller is designed that leads a slow process from a sub-optimal operational condition to the steady-state optimum in a continuous way based on dynamic information. Using standard results from optimisation theory and discrete optimal control, the solution of a steady-state optimisation problem is achieved by solving a receding-horizon optimal control problem which uses derivative and state information from the plant via a shadow model and a state-space identifier. The paper analyses the steady-state optimality of the procedure, develops algorithms with and without control rate constraints, and applies the procedure to a high-fidelity simulation study of a distillation column optimisation.
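The gradient-driven, rate-constrained movement toward the steady-state optimum can be caricatured in a few lines; the plant gradient here is a hypothetical stand-in for the derivative information the paper obtains via the shadow model and state-space identifier:

```python
def optimising_controller(plant_gradient, u0, step=0.2, rate_limit=0.1, iters=50):
    """Drives the input toward the steady-state optimum using gradient
    information from the plant, with a control-rate constraint limiting
    how far the input may move per step (as in the constrained variant)."""
    u = u0
    for _ in range(iters):
        du = step * plant_gradient(u)
        du = max(-rate_limit, min(rate_limit, du))  # control rate constraint
        u += du
    return u

# Hypothetical plant: steady-state objective J(u) = -(u - 3)^2, optimum u = 3
grad = lambda u: -2.0 * (u - 3.0)
u_opt = optimising_controller(grad, u0=0.0)
```

Far from the optimum the rate limit is active and the input ramps linearly; near the optimum the gradient shrinks and the update becomes a plain gradient step, which is the continuous transition the abstract describes.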

Relevance:

30.00%

Publisher:

Abstract:

The problem of state estimation occurs in many applications of fluid flow. For example, to produce a reliable weather forecast it is essential to find the best possible estimate of the true state of the atmosphere. To find this best estimate a nonlinear least squares problem has to be solved subject to dynamical system constraints. Usually this is solved iteratively by an approximate Gauss–Newton method where the underlying discrete linear system is in general unstable. In this paper we propose a new method for deriving low order approximations to the problem based on a recently developed model reduction method for unstable systems. To illustrate the theoretical results, numerical experiments are performed using a two-dimensional Eady model – a simple model of baroclinic instability, which is the dominant mechanism for the growth of storms at mid-latitudes. It is a suitable test model to show the benefit that may be obtained by using model reduction techniques to approximate unstable systems within the state estimation problem.
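The inner workhorse, an approximate Gauss-Newton iteration for the nonlinear least squares problem, can be sketched on a scalar curve-fitting example. The model and data below are illustrative; in the paper's setting the exact solve of the linearized system is what the reduced-order model approximates:

```python
import math

def gauss_newton(residual, jacobian, x0, iters=20):
    """Plain Gauss-Newton for a scalar-parameter nonlinear least squares
    problem: x_{k+1} = x_k - (J^T J)^{-1} J^T r."""
    x = x0
    for _ in range(iters):
        r = residual(x)
        j = jacobian(x)
        jtj = sum(ji * ji for ji in j)
        jtr = sum(ji * ri for ji, ri in zip(j, r))
        x = x - jtr / jtj
    return x

# Fit y = exp(a t) to synthetic data generated with a = 0.5
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * t) for t in ts]
residual = lambda a: [math.exp(a * t) - y for t, y in zip(ts, ys)]
jacobian = lambda a: [t * math.exp(a * t) for t in ts]
a_hat = gauss_newton(residual, jacobian, x0=0.0)
```

On this zero-residual problem the iteration converges to the generating parameter; in state estimation the same outer loop runs with the dynamical model supplying the residuals and Jacobians.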

Relevance:

30.00%

Publisher:

Abstract:

Parameterization schemes for the drag due to atmospheric gravity waves are discussed and compared in the context of a simple one-dimensional model of the quasi-biennial oscillation (QBO). A number of fundamental issues are examined in detail, with the goal of providing a better understanding of the mechanism by which gravity wave drag can produce an equatorial zonal wind oscillation. The gravity wave–driven QBOs are compared with those obtained from a parameterization of equatorial planetary waves. In all gravity wave cases, it is seen that the inclusion of vertical diffusion is crucial for the descent of the shear zones and the development of the QBO. An important difference between the schemes for the two types of waves is that in the case of equatorial planetary waves, vertical diffusion is needed only at the lowest levels, while for the gravity wave drag schemes it must be included at all levels. The question of whether there is downward propagation of influence in the simulated QBOs is addressed. In the gravity wave drag schemes, the evolution of the wind at a given level depends on the wind above, as well as on the wind below. This is in contrast to the parameterization for the equatorial planetary waves in which there is downward propagation of phase only. The stability of a zero-wind initial state is examined, and it is determined that a small perturbation to such a state will amplify with time to the extent that a zonal wind oscillation is permitted.

Relevance:

30.00%

Publisher:

Abstract:

Cross-layer techniques represent efficient means to enhance throughput and increase the transmission reliability of wireless communication systems. In this paper, a cross-layer design of aggressive adaptive modulation and coding (A-AMC), truncated automatic repeat request (T-ARQ), and user scheduling is proposed for multiuser multiple-input multiple-output (MIMO) maximal-ratio-combining (MRC) systems, where the impacts of feedback delay (FD) and limited feedback (LF) on channel state information (CSI) are also considered. The A-AMC and T-ARQ mechanism selects the appropriate modulation and coding schemes (MCSs) to achieve higher spectral efficiency while satisfying the service requirement on the packet loss rate (PLR), profiting from the feasibility of using different MCSs to retransmit a packet. Each packet is destined to a scheduled user, selected to exploit multiuser diversity and enhance the system's performance in terms of both transmission efficiency and fairness. The system's performance is evaluated in terms of the average PLR, average spectral efficiency (ASE), outage probability, and average packet delay, which are derived in closed form, considering transmissions over Rayleigh-fading channels. Numerical results and comparisons are provided and show that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional scheme based on adaptive modulation and coding (AMC), while keeping the achieved PLR closer to the system's requirement and reducing delay. Furthermore, the effects of the number of ARQ retransmissions, the numbers of transmit and receive antennas, the normalized FD, and the cardinality of the beamforming weight-vector codebook are studied and discussed.
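The AMC layer's core decision, picking the highest-rate MCS whose predicted PLR still meets the target at the current SNR, can be sketched as a table lookup. The exponential PLR fits below are invented illustrative numbers, not the paper's A-AMC parameters:

```python
import math

def packet_loss_rate(snr_db, a, g):
    """Approximate PLR model PLR = min(1, a * exp(-g * snr)), the kind of
    curve commonly fitted to coded-modulation performance; a and g here
    are made-up illustrative fits."""
    snr = 10 ** (snr_db / 10.0)
    return min(1.0, a * math.exp(-g * snr))

# (spectral efficiency in bit/s/Hz, fit parameters a, g) per MCS -- hypothetical
MCS_TABLE = [
    (1.0, 67.76, 0.9819),
    (2.0, 73.83, 0.4945),
    (3.0, 58.07, 0.1641),
    (4.0, 55.91, 0.0989),
]

def select_mcs(snr_db, plr_target=1e-2):
    """Adaptive modulation and coding: return the highest-rate MCS whose
    predicted PLR meets the target at the current SNR, or None if even
    the most robust MCS fails the requirement."""
    best = None
    for rate, a, g in MCS_TABLE:
        if packet_loss_rate(snr_db, a, g) <= plr_target:
            best = rate
    return best

rate = select_mcs(15.0)
```

The aggressive variant in the paper goes further by allowing a different, higher-rate MCS on retransmissions, relying on T-ARQ to recover the occasional extra losses.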

Relevance:

30.00%

Publisher:

Abstract:

Earth system models are increasing in complexity and incorporating more processes than their predecessors, making them important tools for studying the global carbon cycle. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes, with coupled climate-carbon cycle models that represent land-use change simulating total land carbon stores by 2100 that vary by as much as 600 Pg C given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous model evaluation methodologies. Here we assess the state-of-the-art with respect to evaluation of Earth system models, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeo data and (ii) metrics for evaluation, and discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute towards the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but it is also a challenge, as more knowledge about data uncertainties is required in order to determine robust evaluation methodologies that move the field of ESM evaluation from a "beauty contest" toward the development of useful constraints on model behaviour.

Relevance:

30.00%

Publisher:

Abstract:

The self-assembly of proteins and peptides into β-sheet-rich amyloid fibers is a process that has gained notoriety because of its association with human diseases and disorders. Spontaneous self-assembly of peptides into nonfibrillar supramolecular structures can also provide a versatile and convenient mechanism for the bottom-up design of biocompatible materials with functional properties favoring a wide range of practical applications.[1] One subset of these fascinating and potentially useful nanoscale constructions are the peptide nanotubes, elongated cylindrical structures with a hollow center bounded by a thin wall of peptide molecules.[2] A formidable challenge in optimizing and harnessing the properties of nanotube assemblies is to gain atomistic insight into their architecture, and to elucidate precisely how the tubular morphology is constructed from the peptide building blocks. Some of these fine details have been elucidated recently with the use of magic-angle-spinning (MAS) solid-state NMR (SSNMR) spectroscopy.[3] MAS SSNMR measurements of chemical shifts and through-space interatomic distances provide constraints on peptide conformation (e.g., β-strands and turns) and quaternary packing. We describe here a new application of a straightforward SSNMR technique which, when combined with FTIR spectroscopy, reports quantitatively on the orientation of the peptide molecules within the nanotube structure, thereby providing an additional structural constraint not accessible to MAS SSNMR.

Relevance:

30.00%

Publisher:

Abstract:

Earth system models (ESMs) are increasing in complexity by incorporating more processes than their predecessors, making them potentially important tools for studying the evolution of climate and associated biogeochemical cycles. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes. For example, coupled climate–carbon cycle models that represent land-use change simulate total land carbon stores at 2100 that vary by as much as 600 Pg C, given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous methods of model evaluation. Here we assess the state-of-the-art in evaluation of ESMs, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeodata and (ii) metrics for evaluation. We note that the practice of averaging results from many models is unreliable and no substitute for proper evaluation of individual models. We discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute to the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but also presents a challenge. Improved knowledge of data uncertainties is still necessary to move the field of ESM evaluation away from a "beauty contest" towards the development of useful constraints on model outcomes.