Abstract:
We consider the general response theory recently proposed by Ruelle for describing the impact of small perturbations to the non-equilibrium steady states resulting from Axiom A dynamical systems. We show that the causality of the response functions entails the possibility of writing a set of Kramers-Kronig (K-K) relations for the corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only a special class of directly observable susceptibilities obey K-K relations. Specific results are provided for the case of arbitrary-order harmonic response, which allows for a very comprehensive K-K analysis and the establishment of sum rules connecting the asymptotic behavior of the harmonic generation susceptibility to the short-time response of the perturbed system. These results place previous findings obtained for optical systems and simple mechanical models in a more general theoretical framework, and shed light on the very general impact of considering the principle of causality for testing self-consistency: the described dispersion relations constitute unavoidable benchmarks that any experimental and model-generated dataset must obey. The theory presented in this paper is dual to the time-dependent theory of perturbations to equilibrium states and to non-equilibrium steady states, and has in principle a similar range of applicability and limitations. In order to connect the equilibrium and the non-equilibrium steady-state case, we show how to rewrite the classical response theory by Kubo so that response functions formally identical to those proposed by Ruelle, apart from the measure involved in the phase space integration, are obtained. These results, taking into account the chaotic hypothesis by Gallavotti and Cohen, might be relevant in several fields, including climate research. In particular, whereas the fluctuation-dissipation theorem does not work for non-equilibrium systems, because of the non-equivalence between internal and external fluctuations, K-K relations might be robust tools for the definition of a self-consistent theory of climate change.
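For orientation, the familiar first-order Kramers-Kronig pair for a generic causal susceptibility χ(ω) is sketched below (assuming sufficient decay at large frequency); the paper extends relations of this type to the harmonic susceptibilities at arbitrary order of nonlinearity.

```latex
% Schematic first-order Kramers-Kronig pair for a causal susceptibility \chi(\omega)
\operatorname{Re}\chi(\omega) = \frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty}
    \frac{\operatorname{Im}\chi(\omega')}{\omega'-\omega}\,d\omega' ,
\qquad
\operatorname{Im}\chi(\omega) = -\frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty}
    \frac{\operatorname{Re}\chi(\omega')}{\omega'-\omega}\,d\omega' .
```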
Abstract:
In this paper we perform an analytical and numerical study of Extreme Value distributions in discrete dynamical systems. In this setting, recent works have shown how to obtain statistics of extremes in agreement with classical Extreme Value Theory. We pursue these investigations by giving analytical expressions of Extreme Value distribution parameters for maps that have an absolutely continuous invariant measure. We compare these analytical results with numerical experiments in which we study the convergence to limiting distributions using the so-called block-maxima approach, pointing out in which cases we obtain robust estimation of parameters. In regular maps, for which mixing properties do not hold, we show that the fitting procedure to the classical Extreme Value Distribution fails, as expected. However, we obtain an empirical distribution that can be explained starting from a different observable function for which Nicolis et al. (Phys. Rev. Lett. 97(21): 210602, 2006) have found analytical results.
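As a rough illustration of the block-maxima procedure, the sketch below iterates the fully chaotic logistic map (chosen here as an example of a map with an absolutely continuous invariant measure), records block maxima of a distance-type observable, and fits a GEV distribution; the map, the observable, and all block sizes are illustrative choices, not the paper's setup.

```python
import numpy as np
from scipy.stats import genextreme

# Illustrative sketch (not the paper's experiments): block-maxima fit of a GEV
# distribution to an observable evaluated along an orbit of the fully chaotic
# logistic map, which has an absolutely continuous invariant measure.

def logistic(x):
    """Chaotic logistic map x -> 4 x (1 - x)."""
    return 4.0 * x * (1.0 - x)

rng = np.random.default_rng(0)
x = rng.uniform(0.01, 0.99)
n_blocks, block_len = 500, 1000      # assumed block sizes
x0 = 0.3                             # assumed reference point for the observable

block_maxima = []
for _ in range(n_blocks):
    m = -np.inf
    for _ in range(block_len):
        x = logistic(x)
        m = max(m, -np.log(abs(x - x0)))   # observable g(x) = -log|x - x0|
    block_maxima.append(m)

# Note: SciPy's genextreme shape parameter c corresponds to -xi in the usual
# GEV convention.
c, loc, scale = genextreme.fit(block_maxima)
print(f"fitted GEV: xi = {-c:.3f}, mu = {loc:.3f}, sigma = {scale:.3f}")
```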
Abstract:
The concept of “distance to instability” of a system matrix is generalized to system pencils which arise in descriptor (semistate) systems. Difficulties arise in the case of singular systems, because the pencil can be made unstable by an infinitesimal perturbation. It is necessary to measure the distance subject to restricted, or structured, perturbations. In this paper a suitable measure for the stability radius of a generalized state-space system is defined, and a computable expression for the distance to instability is derived for regular pencils of index less than or equal to one. For systems which are strongly controllable it is shown that this measure is related to the sensitivity of the poles of the system over all feedback matrices assigning the poles.
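For the standard (non-descriptor) state-space case, the unstructured complex distance to instability of a stable matrix A is the minimum over real frequencies of the smallest singular value of A − iωI; the sketch below illustrates that computation. It is only an orientation for the pencil setting treated in the paper, which additionally requires restricting the perturbation structure; the example matrix, frequency range and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative sketch for the standard state-space case (not the descriptor
# pencil of the paper): for a Hurwitz matrix A, the unstructured complex
# distance to instability is  min over real omega of  sigma_min(A - i*omega*I).

def distance_to_instability(A, omega_max=100.0):
    n = A.shape[0]
    I = np.eye(n)

    def smallest_sv(omega):
        return np.linalg.svd(A - 1j * omega * I, compute_uv=False)[-1]

    # Coarse grid search, then local refinement (assumes the minimizer lies in
    # [-omega_max, omega_max]).
    grid = np.linspace(-omega_max, omega_max, 2001)
    w0 = grid[np.argmin([smallest_sv(w) for w in grid])]
    res = minimize_scalar(smallest_sv, bounds=(w0 - 1.0, w0 + 1.0), method="bounded")
    return res.fun

A = np.array([[-0.5, 10.0], [0.0, -0.6]])   # stable but highly non-normal example
print(f"distance to instability ~ {distance_to_instability(A):.4f}")
```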
Abstract:
Robustness in multi-variable control system design requires that the solution to the design problem be insensitive to perturbations in the system data. In this paper we discuss measures of robustness for generalized state-space, or descriptor, systems and describe algorithmic techniques for optimizing robustness for various applications.
Abstract:
Purpose – UK Government policy to address perceived market failure in commercial property leasing has largely been pursued through industry self-regulation. Yet, it is proving difficult to assess whether self-regulation on leasing has been a “success”, or even to determine how to evaluate this. The purpose of this paper is to provide a framework for this and a clearer understanding of self-regulation in commercial leasing. Design/methodology/approach – A literature review suggests key criteria to explain the (in)effectiveness of self-regulation. UK lease codes are analysed in the light of this literature, drawing on previous research carried out by the authors on the operation of these codes. Findings – Lease codes appear to be failing as an effective system of self-regulation. While there are influential market actors championing them, the fragmentation of the leasing process lessens this influence. The structures are not there to ensure implementation, monitor compliance and record the views of affected stakeholders. Research limitations/implications – This work adds to the literature on self-regulation in general, and provides an insight into its operation in a previously unexplored industry. Research is needed into the experience of other countries in regulating the property industry by voluntary means. Social implications – There are institutional limitations to self-regulation within the property industry. This has implications for policy makers in considering the advantages and limitations of using a voluntary solution to achieve policy aims within the commercial leasing market. Originality/value – This paper provides a first step in considering the lease codes in the wider context of industry self-regulation and is relevant to policy makers and industry bodies.
Abstract:
There has been considerable interest in the climate impact of trends in stratospheric water vapor (SWV). However, the representation of the radiative properties of water vapor under stratospheric conditions remains poorly constrained across different radiation codes. This study examines the sensitivity of a detailed line-by-line (LBL) code, a Malkmus narrow-band model and two broadband GCM radiation codes to a uniform perturbation in SWV in the longwave spectral region. The choice of sampling rate in wave number space (Δν) in the LBL code is shown to be important for calculations of the instantaneous change in heating rate (ΔQ) and the instantaneous longwave radiative forcing (ΔFtrop). ΔQ varies by up to 50% for values of Δν spanning 5 orders of magnitude, and ΔFtrop varies by up to 10%. In the three less detailed codes, ΔQ differs by up to 45% at 100 hPa and 50% at 1 hPa compared to a LBL calculation. This causes differences of up to 70% in the equilibrium fixed dynamical heating temperature change due to the SWV perturbation. The stratosphere-adjusted radiative forcing differs by up to 96% across the less detailed codes. The results highlight an important source of uncertainty in quantifying and modeling the links between SWV trends and climate.
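A toy numerical illustration of the sampling-rate issue (not a radiation code): integrating a synthetic line spectrum at progressively coarser wavenumber spacings shows how the choice of Δν alone can change band-integrated quantities. The Lorentzian line shapes and all parameters are arbitrary.

```python
import numpy as np

# Toy illustration (not a radiation code): the integral of a synthetic
# Lorentzian line spectrum computed at different wavenumber sampling rates,
# showing why the choice of Delta-nu affects band-integrated quantities such
# as heating-rate and forcing changes.  All line parameters are arbitrary.

dnu_ref = 1e-4                                         # reference spacing (cm^-1)
nu = np.arange(0.0, 100.0, dnu_ref)
centers = np.arange(2.5, 100.0, 5.0)
spectrum = sum(0.1 / ((nu - c) ** 2 + 0.1 ** 2) for c in centers)
reference = np.trapz(spectrum, nu)

for dnu in (1e-3, 1e-2, 1e-1, 1.0):
    step = int(round(dnu / dnu_ref))
    coarse = np.trapz(spectrum[::step], nu[::step])
    err = 100.0 * (coarse - reference) / reference
    print(f"Delta-nu = {dnu:6.3f} cm^-1: integration error = {err:+6.2f}%")
```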
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. This can damage some of the key properties of space-time codes and lead to substantial performance degradation. In this paper, we study the design of linear dispersion codes (LDCs) for such asynchronous cooperative communication networks. First, the concept of conventional LDCs is extended to a delay-tolerant version and new design criteria are discussed. We then propose a new design method that yields delay-tolerant LDCs reaching the optimal Jensen's upper bound on ergodic capacity as well as the minimum average pairwise error probability. The proposed design employs a stochastic gradient algorithm to approach a local optimum, and is further improved by simulated-annealing-type optimization to increase the likelihood of finding the global optimum. The proposed method allows for a flexible number of nodes, receive antennas and modulated symbols, as well as a flexible codeword length. Simulation results confirm the performance of the newly proposed delay-tolerant LDCs.
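A generic simulated-annealing skeleton of the kind described above is sketched below, optimizing a set of dispersion matrices against a placeholder capacity-like score; the paper's actual Jensen's-bound objective, pairwise-error-probability term and delay-tolerance constraints are not reproduced, and all dimensions are assumed.

```python
import numpy as np

# Hedged sketch: simulated-annealing search over dispersion matrices with a
# placeholder log-det "capacity-like" score.  The true design criteria
# (Jensen's bound on ergodic capacity, average PEP, delay tolerance) from the
# paper are not implemented here.

rng = np.random.default_rng(1)
T, M, Q = 4, 2, 4                       # codeword length, antennas, symbols (assumed)

def score(A):
    """Monte-Carlo estimate of a placeholder capacity-like objective."""
    total = 0.0
    for _ in range(20):
        h = rng.normal(size=(M, 1)) / np.sqrt(M)          # random channel
        s = rng.choice([-1.0, 1.0], size=Q)               # random BPSK symbols
        X = sum(s[q] * A[q] for q in range(Q))            # T x M codeword
        cov = X @ (h @ h.T) @ X.T + np.eye(T)
        total += np.log(np.linalg.det(cov))
    return total / 20

A_cur = rng.normal(size=(Q, T, M)) / np.sqrt(T)
f_cur = score(A_cur)
A_best, f_best = A_cur, f_cur
temp = 1.0
for _ in range(2000):
    A_new = A_cur + 0.05 * rng.normal(size=A_cur.shape)   # random move
    f_new = score(A_new)
    if f_new > f_cur or rng.random() < np.exp((f_new - f_cur) / temp):
        A_cur, f_cur = A_new, f_new                       # accept move
        if f_cur > f_best:
            A_best, f_best = A_cur, f_cur
    temp *= 0.999                                         # geometric cooling

print(f"best placeholder score after annealing: {f_best:.3f}")
```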
Abstract:
Sufficient conditions are derived for the linear stability with respect to zonally symmetric perturbations of a steady zonal solution to the nonhydrostatic compressible Euler equations on an equatorial β plane, including a leading order representation of the Coriolis force terms due to the poleward component of the planetary rotation vector. A version of the energy–Casimir method of stability proof is applied: an invariant functional of the Euler equations linearized about the equilibrium zonal flow is found, and positive definiteness of the functional is shown to imply linear stability of the equilibrium. It is shown that an equilibrium is stable if the potential vorticity has the same sign as latitude and the Rayleigh centrifugal stability condition that absolute angular momentum increase toward the equator on surfaces of constant pressure is satisfied. The result generalizes earlier results for hydrostatic and incompressible systems and for systems that do not account for the nontraditional Coriolis force terms. The stability of particular equilibrium zonal velocity, entropy, and density fields is assessed. A notable case in which the effect of the nontraditional Coriolis force is decisive is the instability of an angular momentum profile that decreases away from the equator but is flatter than quadratic in latitude, despite its satisfying both the centrifugal and convective stability conditions.
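The two sufficient conditions stated above can be rendered schematically as below, with q the potential vorticity, φ latitude, M the absolute angular momentum and p pressure; the notation is an illustrative restatement of the abstract, not the paper's precise formulation.

```latex
% Schematic restatement of the sufficient stability conditions
\varphi\, q > 0
\qquad\text{and}\qquad
\left.\frac{\partial M}{\partial |\varphi|}\right|_{p} < 0
\quad\text{(absolute angular momentum increasing toward the equator).}
```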
Abstract:
In this paper a generalization of collectively compact operator theory in Banach spaces is developed. A feature of the new theory is that the operators involved are no longer required to be compact in the norm topology. Instead it is required that the image of a bounded set under the operator family is sequentially compact in a weaker topology. As an application, the theory developed is used to establish solvability results for a class of systems of second-kind integral equations on unbounded domains, this class including in particular systems of Wiener-Hopf integral equations with L1 convolution kernels.
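For orientation, a system of second-kind Wiener-Hopf integral equations with L1 convolution kernels, the class covered by the solvability results, has the schematic form below (notation illustrative).

```latex
% Schematic form of a system of second-kind Wiener-Hopf integral equations
x_i(s) \;-\; \sum_{j=1}^{n} \int_{0}^{\infty} k_{ij}(s - t)\, x_j(t)\, dt \;=\; f_i(s),
\qquad s \ge 0, \qquad k_{ij} \in L^{1}(\mathbb{R}).
```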
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, this is done over both land and ocean using night-time (infrared) imagery. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 87% and 48% for ocean and land, respectively, using the Bayesian technique, compared to 74% and 39%, respectively, for the threshold-based techniques associated with the validation dataset.
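A minimal sketch of the per-pixel Bayesian step is given below, assuming Gaussian likelihoods around an NWP-simulated clear-sky brightness temperature; the prior, likelihood widths and the crude cloudy-sky model are illustrative assumptions, not the operational scheme.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of the Bayesian step (not the operational scheme): for each
# pixel, combine an NWP-simulated clear-sky brightness temperature with simple
# Gaussian likelihoods to obtain P(clear | observation).  The prior, likelihood
# widths and cloudy-sky model below are illustrative assumptions.

def p_clear(bt_obs, bt_clear_sim, prior_clear=0.7,
            sigma_clear=1.0, cloud_mean_offset=-15.0, cloud_sigma=10.0):
    """Posterior probability that a pixel is clear, via Bayes' theorem."""
    like_clear = norm.pdf(bt_obs, loc=bt_clear_sim, scale=sigma_clear)
    like_cloud = norm.pdf(bt_obs, loc=bt_clear_sim + cloud_mean_offset,
                          scale=cloud_sigma)
    num = like_clear * prior_clear
    return num / (num + like_cloud * (1.0 - prior_clear))

# Example: observed infrared brightness temperatures vs NWP clear-sky simulation
bt_obs = np.array([287.9, 272.4, 288.5])      # K, observed
bt_sim = np.array([288.2, 288.0, 288.3])      # K, simulated from NWP fields
prob = p_clear(bt_obs, bt_sim)
cloud_mask = prob < 0.5                        # threshold chosen per application
print(prob.round(3), cloud_mask)
```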
Abstract:
Purpose – This paper describes visitors' reactions to using an Apple iPad or smartphone to follow trails in a museum by scanning QR codes, and draws conclusions on the potential for this technology to help improve accessibility at low cost. Design/methodology/approach – Activities were devised which involved visitors following trails around museum objects, each labelled with a QR code and symbolised text. Visitors scanned the QR codes using a mobile device, which then showed more information about an object. Project-team members acted as participant-observers, engaging with visitors and noting how they used the system. Experiences from each activity fed into the design of the next. Findings – Some physical and technical problems with using QR codes can be overcome with the introduction of simple aids, particularly using movable object labels. A layered approach to information access is possible, with the first layer comprising a label, the second a mobile-web enabled screen and the third choices of text, pictures, video and audio. Video was especially appealing to young people. The ability to repeatedly watch video or listen to audio seemed to be appreciated by visitors with learning disabilities. This approach can have a low equipment cost. However, maintaining the information behind labels and keeping up with technological changes are ongoing processes. Originality/value – Using QR codes on movable, symbolised object labels as part of a layered information system might help modestly funded museums enhance their accessibility, particularly as visitors increasingly arrive with their own smartphones or tablets.
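As a small illustration of producing such a label, the sketch below encodes the URL of an object's mobile-web page as a QR code image using the third-party qrcode package (assumed installed); the URL and filename are placeholders.

```python
# Minimal sketch of producing a printable object label: encode the URL of an
# object's mobile-web page as a QR code image.  Uses the third-party `qrcode`
# package (assumed installed); the URL and filename are placeholders.
import qrcode

object_url = "https://example-museum.org/objects/42"   # hypothetical object page
img = qrcode.make(object_url)                          # returns a PIL image
img.save("object_42_label.png")                        # print and attach to a movable label
```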
Abstract:
In a series of papers, Killworth and Blundell have proposed to study the effects of a background mean flow and topography on Rossby wave propagation by means of a generalized eigenvalue problem formulated in terms of the vertical velocity, obtained from a linearization of the primitive equations of motion. However, it has been known for a number of years that this eigenvalue problem contains an error, which Killworth was prevented from correcting himself by his unfortunate passing and whose correction is therefore taken up in this note. Here, the author shows in the context of quasigeostrophic (QG) theory that the error can ultimately be traced to the fact that the eigenvalue problem for the vertical velocity is fundamentally a nonlinear one (the eigenvalue appears both in the numerator and denominator), unlike that for the pressure. The reason that this nonlinear term is lacking in the Killworth and Blundell theory comes from neglecting the depth dependence of a depth-dependent term. This nonlinear term is shown on idealized examples to alter significantly the Rossby wave dispersion relation in the high-wavenumber regime but is otherwise irrelevant in the long-wave limit, in which case the eigenvalue problems for the vertical velocity and pressure are both linear. In the general dispersive case, however, one should first solve the generalized eigenvalue problem for the pressure vertical structure and, if needed, diagnose the vertical velocity vertical structure from the latter.
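As a simplified orientation (resting ocean, flat bottom, no background flow or topography, unlike the problem treated in the note), the sketch below discretizes the classical pressure vertical-structure equation and solves it as a generalized matrix eigenvalue problem, then converts the resulting gravity-wave speeds into long-wave Rossby phase speeds; grid, stratification and parameters are assumed.

```python
import numpy as np
from scipy.linalg import eig

# Orientation sketch (resting ocean, flat bottom; NOT the mean-flow/topography
# problem of the note): the pressure vertical-structure equation
#     d/dz[(1/N^2) dphi/dz] + (1/c^2) phi = 0,   dphi/dz = 0 at z = 0, -H,
# is discretized and solved as a generalized matrix eigenvalue problem
# A phi = lambda B phi with lambda = 1/c^2.  Grid size, N and H are assumed.

H, nz = 4000.0, 200                 # depth (m), number of grid intervals
N2 = np.full(nz + 1, (2e-3) ** 2)   # buoyancy frequency squared (s^-2), constant here
dz = H / nz

# Interior second-order finite differences for -d/dz[(1/N^2) d/dz]
A = np.zeros((nz + 1, nz + 1))
for k in range(1, nz):
    A[k, k - 1] = -1.0 / (N2[k] * dz ** 2)
    A[k, k + 1] = -1.0 / (N2[k] * dz ** 2)
    A[k, k]     =  2.0 / (N2[k] * dz ** 2)
# Neumann boundary conditions dphi/dz = 0 via one-sided differences
A[0, 0], A[0, 1] = 1.0, -1.0
A[-1, -1], A[-1, -2] = 1.0, -1.0

B = np.eye(nz + 1)
B[0, 0] = B[-1, -1] = 0.0           # boundary rows carry no eigenvalue term

vals, _ = eig(A, B)
lam = np.sort(vals.real[np.isfinite(vals.real) & (vals.real > 1e-12)])
c = 1.0 / np.sqrt(lam[:3])          # gravity-wave speeds of the first baroclinic modes
f0, beta = 1e-4, 2e-11              # assumed midlatitude parameters
print("long-wave Rossby phase speeds (m/s):", -beta * (c / f0) ** 2)
```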
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that the by-participant analysis, regardless of the accuracy measurement used, would produce a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of the mixed-effects model analysis as compared to the conventional by-participant analysis. We also present real-data applications to illustrate further strengths of the mixed-effects model analysis. Our findings imply that caution is needed when using the by-participant analysis, and we recommend the mixed-effects model analysis instead.
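A minimal sketch of the recommended kind of analysis is given below: simulated judgment-memory data with crossed participant and item random effects, fit with statsmodels' MixedLM by treating the whole dataset as a single group and declaring both factors as variance components. A linear (Gaussian) mixed model is used purely for simplicity; all variable names and effect sizes are made up, and binary accuracy data would ordinarily call for a logistic link.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch (not the paper's simulations): a linear mixed-effects
# model with crossed participant and item random effects.  Treating the whole
# dataset as one group and declaring both factors as variance components is a
# standard statsmodels device for crossed random effects.

rng = np.random.default_rng(7)
n_subj, n_item = 30, 40
subj = np.repeat(np.arange(n_subj), n_item)
item = np.tile(np.arange(n_item), n_subj)
judgment = rng.normal(size=subj.size)                   # metacognitive judgment
subj_eff = rng.normal(0, 0.5, n_subj)[subj]             # random participant effect
item_eff = rng.normal(0, 0.5, n_item)[item]             # random item effect
memory = 0.4 * judgment + subj_eff + item_eff + rng.normal(0, 1.0, subj.size)

df = pd.DataFrame({"memory": memory, "judgment": judgment,
                   "subj": subj, "item": item, "one": 1})

model = smf.mixedlm(
    "memory ~ judgment", df, groups="one",
    vc_formula={"subj": "0 + C(subj)", "item": "0 + C(item)"},
)
fit = model.fit()
print(fit.summary())   # fixed-effect slope of judgment = group-level accuracy estimate
```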