923 results for Symmetric cipher
Abstract:
In the forecasting of binary events, verification measures that are “equitable” were defined by Gandin and Murphy to satisfy two requirements: 1) they award all random forecasting systems, including those that always issue the same forecast, the same expected score (typically zero), and 2) they are expressible as the linear weighted sum of the elements of the contingency table, where the weights are independent of the entries in the table, apart from the base rate. The authors demonstrate that the widely used “equitable threat score” (ETS), as well as numerous others, satisfies neither of these requirements and only satisfies the first requirement in the limit of an infinite sample size. Such measures are referred to as “asymptotically equitable.” In the case of ETS, the expected score of a random forecasting system is always positive and only falls below 0.01 when the number of samples is greater than around 30. Two other asymptotically equitable measures are the odds ratio skill score and the symmetric extreme dependency score, which are more strongly inequitable than ETS, particularly for rare events; for example, when the base rate is 2% and the sample size is 1000, random but unbiased forecasting systems yield an expected score of around −0.5, reducing in magnitude to −0.01 or smaller only for sample sizes exceeding 25 000. This presents a problem since these nonlinear measures have other desirable properties, in particular being reliable indicators of skill for rare events (provided that the sample size is large enough). A potential way to reconcile these properties with equitability is to recognize that Gandin and Murphy’s two requirements are independent, and the second can be safely discarded without losing the key advantages of equitability that are embodied in the first. 
This enables inequitable and asymptotically equitable measures to be scaled to make them equitable, while retaining their nonlinearity and other properties such as being reliable indicators of skill for rare events. It also opens up the possibility of designing new equitable verification measures.
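The first claim above, that the expected ETS of a random forecasting system is positive for finite samples, is easy to check numerically. A minimal Monte Carlo sketch, assuming the standard 2×2 contingency-table definition of ETS; the sample size, base rate, and trial count are illustrative choices, not values from the paper:

```python
import random

def ets(a, b, c, d):
    """Equitable threat score from a 2x2 contingency table:
    a = hits, b = false alarms, c = misses, d = correct negatives."""
    n = a + b + c + d
    a_r = (a + b) * (a + c) / n          # hits expected by chance
    denom = a + b + c - a_r
    return (a - a_r) / denom if denom else 0.0

random.seed(1)
n, base_rate, trials = 30, 0.5, 20_000
total = 0.0
for _ in range(trials):
    obs = [random.random() < base_rate for _ in range(n)]
    fcst = [random.random() < base_rate for _ in range(n)]  # random, unbiased forecast
    a = sum(o and f for o, f in zip(obs, fcst))
    b = sum(f and not o for o, f in zip(obs, fcst))
    c = sum(o and not f for o, f in zip(obs, fcst))
    total += ets(a, b, c, n - a - b - c)

mean_ets = total / trials
print(mean_ets)  # small but positive for finite n
```

Increasing `n` drives the mean toward zero, illustrating why such measures are "asymptotically equitable" rather than equitable.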
Abstract:
Cloud radar and lidar can be used to evaluate the skill of numerical weather prediction models in forecasting the timing and placement of clouds, but care must be taken in choosing the appropriate metric of skill to use due to the non-Gaussian nature of cloud-fraction distributions. We compare the properties of a number of different verification measures and conclude that of existing measures the Log of Odds Ratio is the most suitable for cloud fraction. We also propose a new measure, the Symmetric Extreme Dependency Score, which has very attractive properties, being equitable (for large samples), difficult to hedge and independent of the frequency of occurrence of the quantity being verified. We then use data from five European ground-based sites and seven forecast models, processed using the ‘Cloudnet’ analysis system, to investigate the dependence of forecast skill on cloud fraction threshold (for binary skill scores), height, horizontal scale and (for the Met Office and German Weather Service models) forecast lead time. The models are found to be least skilful at predicting the timing and placement of boundary-layer clouds and most skilful at predicting mid-level clouds, although in the latter case they tend to underestimate mean cloud fraction when cloud is present. It is found that skill decreases approximately inverse-exponentially with forecast lead time, enabling a forecast ‘half-life’ to be estimated. When considering the skill of instantaneous model snapshots, we find typical values ranging between 2.5 and 4.5 days. Copyright © 2009 Royal Meteorological Society
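The Log of Odds Ratio favoured above, and the related odds ratio skill score (Yule's Q), have simple closed forms in the standard 2×2 contingency-table notation. A minimal sketch with illustrative counts (not data from the paper):

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log of odds ratio for a 2x2 contingency table
    (a = hits, b = false alarms, c = misses, d = correct negatives).
    Undefined when any off-diagonal cell is zero; a common remedy is
    adding 0.5 to every cell, which is not done here."""
    return math.log((a * d) / (b * c))

def orss(a, b, c, d):
    """Odds ratio skill score (Yule's Q): (theta - 1) / (theta + 1)."""
    theta = (a * d) / (b * c)
    return (theta - 1) / (theta + 1)

# A forecast that is right more often than wrong scores positively:
print(log_odds_ratio(30, 10, 10, 30))  # ln(9) ~ 2.197
print(orss(30, 10, 10, 30))            # 0.8
```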
Abstract:
An aquaplanet model is used to study the nature of the highly persistent low-frequency waves that have been observed in models forced by zonally symmetric boundary conditions. Using the Hayashi spectral analysis of the extratropical waves, the authors find that a quasi-stationary wave 5 belongs to a wave packet obeying a well-defined dispersion relation with eastward group velocity. The components of the dispersion relation with k ≥ 5 baroclinically convert eddy available potential energy into eddy kinetic energy, whereas those with k < 5 are baroclinically neutral. In agreement with Green’s model of baroclinic instability, wave 5 is weakly unstable, and the inverse energy cascade, which had been previously proposed as a main forcing for this type of wave, only acts as a positive feedback on its predominantly baroclinic energetics. The quasi-stationary wave is reinforced by a phase lock to an analogous pattern in the tropical convection, which provides further amplification to the wave. It is also found that the Pedlosky bounds on the phase speed of unstable waves provide guidance in explaining the latitudinal structure of the energy conversion, which is shown to be more enhanced where the zonal westerly surface wind is weaker. The wave’s energy is then trapped in the waveguide created by the upper tropospheric jet stream. In agreement with Green’s theory, as the equator-to-pole SST difference is reduced, the stationary marginally stable component shifts toward higher wavenumbers, while wave 5 becomes neutral and westward propagating. Some properties of the aquaplanet quasi-stationary waves are found to be in interesting agreement with a low frequency wave observed by Salby during December–February in the Southern Hemisphere so that this perspective on low frequency variability, apart from its value in terms of basic geophysical fluid dynamics, might be of specific interest for studying the earth’s atmosphere.
Abstract:
We consider problems of splitting and connectivity augmentation in hypergraphs. In a hypergraph G = (V + s, E), to split two edges su, sv is to replace them with a single edge uv. We are interested in doing this in such a way as to preserve a defined level of connectivity in V. The splitting technique is often used as a way of adding new edges into a graph or hypergraph, so as to augment the connectivity to some prescribed level. We begin by providing a short history of work done in this area. Then several preliminary results are given in a general form so that they may be used to tackle several problems. We then analyse the hypergraphs G = (V + s, E) for which there is no split preserving the local-edge-connectivity present in V. We provide two structural theorems, one of which implies a slight extension to Mader’s classical splitting theorem. We also provide a characterisation of the hypergraphs for which there is no such “good” split and a splitting result concerned with a specialisation of the local-connectivity function. We then use our splitting results to provide an upper bound on the smallest number of size-two edges we must add to any given hypergraph to ensure that in the resulting hypergraph we have λ(x, y) ≥ r(x, y) for all x, y in V, where r is an integer-valued, symmetric requirement function on V × V. This is the so-called “local-edge-connectivity augmentation problem” for hypergraphs. We also provide an extension to a theorem of Szigeti, about augmenting to satisfy a requirement r, but using hyperedges. Next, in a result born of collaborative work with Zoltán Király from Budapest, we show that the local-connectivity augmentation problem is NP-complete for hypergraphs. Lastly we concern ourselves with an augmentation problem that includes a locational constraint.
The premise is that we are given a hypergraph H = (V, E) with a bipartition P = {P1, P2} of V and asked to augment it with size-two edges, so that the result is k-edge-connected and has no new edge contained within either P1 or P2. We consider the splitting technique and describe the obstacles that prevent us forming “good” splits. From this we deduce results about which hypergraphs have a complete Pk-split. This leads to a minimax result on the optimal number of edges required and a polynomial algorithm to provide an optimal augmentation.
Abstract:
This is a study of singular solutions of the problem of traveling gravity water waves on flows with vorticity. We show that, for a certain class of vorticity functions, a sequence of regular waves converges to an extreme wave with stagnation points at its crests. We also show that, for any vorticity function, the profile of an extreme wave must have either a corner of 120° or a horizontal tangent at any stagnation point about which it is supposed symmetric. Moreover, the profile necessarily has a corner of 120° if the vorticity is nonnegative near the free surface.
Abstract:
Previous studies have made use of simplified general circulation models (sGCMs) to investigate the atmospheric response to various forcings. In particular, several studies have investigated the tropospheric response to changes in stratospheric temperature. This is potentially relevant for many climate forcings. Here the impact of changing the tropospheric climatology on the modeled response to perturbations in stratospheric temperature is investigated by the introduction of topography into the model and altering the tropospheric jet structure. The results highlight the need for very long integrations so as to determine accurately the magnitude of response. It is found that introducing topography into the model and thus removing the zonally symmetric nature of the model’s boundary conditions reduces the magnitude of response to stratospheric heating. However, this reduction is of comparable size to the variability in the magnitude of response between different ensemble members of the same 5000-day experiment. Investigations into the impact of varying tropospheric jet structure reveal a trend with lower-latitude/narrower jets having a much larger magnitude response to stratospheric heating than higher-latitude/wider jets. The jet structures that respond more strongly to stratospheric heating also exhibit longer time scale variability in their control run simulations, consistent with the idea that a feedback between the eddies and the mean flow is both responsible for the persistence of the control run variability and important in producing the tropospheric response to stratospheric temperature perturbations.
Abstract:
The existence of sting jets as a potential source of damaging surface winds during the passage of extratropical cyclones has recently been recognized. However, there are still very few published studies on the subject. Furthermore, although it is known that other models are capable of reproducing sting jets, in the published literature only one numerical model [the Met Office Unified Model (MetUM)] has been used to numerically analyze these phenomena. This article aims to improve our understanding of the processes that contribute to the development of sting jets and to show that model differences affect the evolution of modeled sting jets. A sting jet event during the passage of a cyclone over the United Kingdom on 26 February 2002 has been simulated using two mesoscale models, namely the MetUM and the Consortium for Small-Scale Modeling (COSMO) model, to compare their performance. Given the known critical importance of vertical resolution in the simulation of sting jets, the vertical resolution of both models has been enhanced with respect to their operational versions. Both simulations have been verified against surface measurements of maximum gusts, satellite imagery, and Met Office operational synoptic analyses, as well as operational analyses from the ECMWF. It is shown that both models are capable of reproducing sting jets with similar, though not identical, features. Through the comparison of the results from these two models, the relevance of physical mechanisms, such as evaporative cooling and the release of conditional symmetric instability, in the generation and evolution of sting jets is also discussed.
Abstract:
The extraction of design data for the lowpass dielectric multilayer according to Tschebysheff performance is described. The extraction proceeds initially by analogy with electric-circuit design, and can then be given numerical refinement which is also described. Agreement with the Tschebysheff desideratum is satisfactory. The multilayers extracted by this procedure are of fractional thickness, symmetric with regard to their central layers.
Abstract:
The preparation and comprehensive characterization of a series of homoleptic sandwich complexes containing diphosphacyclobutadiene ligands are reported. Compounds [K([18]crown-6)(thf)2][Fe(η4-P2C2tBu2)2] (K1), [K([18]crown-6)(thf)2][Co(η4-P2C2tBu2)2] (K2), and [K([18]crown-6)(thf)2][Co(η4-P2C2Ad2)2] (K3, Ad = adamantyl) were obtained from reactions of [K([18]crown-6)(thf)2][M(η4-C14H10)2] (M = Fe, Co) with tBuCP (1, 2), or with AdCP (3). Neutral sandwiches [M(η4-P2C2tBu2)2] (4: M = Fe; 5: M = Co) were obtained by oxidizing 1 and 2 with [Cp2Fe]PF6. Cyclic voltammetry and spectro-electrochemistry indicate that the two [M(η4-P2C2tBu2)2]−/[M(η4-P2C2tBu2)2] moieties can be reversibly interconverted by one-electron oxidation and reduction, respectively. Complexes 1–5 were characterized by multinuclear NMR, EPR (1 and 5), UV/Vis, and Mössbauer spectroscopies (1 and 4), mass spectrometry (4 and 5), and microanalysis (1–3). The molecular structures of 1–5 were determined by using X-ray crystallography. Essentially D2d-symmetric structures were found for all five complexes, which show the two 1,3-diphosphacyclobutadiene rings in a staggered orientation. Density functional theory calculations revealed the importance of covalent metal–ligand π bonding in 1–5. Possible oxidation state assignments for the metal ions are discussed.
Abstract:
Adaptive filters used in code division multiple access (CDMA) receivers to counter interference have been formulated both with and without the assumption of training symbols being transmitted. They are known as training-based and blind detectors respectively. We show that the convergence behaviour of the blind minimum-output-energy (MOE) detector can be quite easily derived, unlike what was implied by the procedure outlined in a previous paper. The simplification results from the observation that the correlation matrix determining convergence performance can be made symmetric, after which many standard results from the literature on least mean square (LMS) filters apply immediately.
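The practical payoff of a symmetric correlation matrix is that standard LMS results apply directly: the mean weight vector converges when the step size satisfies 0 < μ < 2/λmax(R), with real eigenvalues guaranteed by symmetry. A sketch with an arbitrary illustrative matrix (not one from the paper); `eigvalsh` exploits the symmetry:

```python
import numpy as np

# Illustrative symmetric correlation matrix (for demonstration only)
R = np.array([[1.0, 0.4, 0.1],
              [0.4, 1.0, 0.4],
              [0.1, 0.4, 1.0]])

# Symmetric matrices have real eigenvalues; eigvalsh is the right tool
eigvals = np.linalg.eigvalsh(R)

# Classical LMS mean-convergence bound on the step size
mu_max = 2.0 / eigvals.max()
print(eigvals)
print(mu_max)
```

Any step size chosen below `mu_max` keeps the mean weight recursion stable, which is the "standard result" the abstract alludes to.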
Abstract:
Earlier estimates of the City of London office market are extended by considering a longer time series of data, covering two cycles, and by explicitly modeling asymmetric space market responses to employment and supply shocks. A long-run structural model linking real rental levels, office-based employment and the supply of office space is estimated, and then rental adjustment processes are modeled using an error correction model framework. Rental adjustment is seen to be asymmetric, depending both on the direction of the supply and demand shocks and on the state of the space market at the time of the shock. Vacancy adjustment does not display asymmetries. There is also a supply adjustment equation. Two three-equation systems, one with symmetric rental adjustment and the other with asymmetric adjustment, are subjected to positive and negative shocks to employment. These illustrate differences in the two systems.
Abstract:
A fundamental principle in data modelling is to incorporate available a priori information regarding the underlying data generating mechanism into the modelling process. We adopt this principle and consider grey-box radial basis function (RBF) modelling capable of incorporating prior knowledge. Specifically, we show how to explicitly incorporate two types of prior knowledge: (i) the underlying data generating mechanism exhibits a known symmetry property, and (ii) the underlying process obeys a set of given boundary value constraints. The class of efficient orthogonal least squares regression algorithms can readily be applied without any modification to construct parsimonious grey-box RBF models with enhanced generalisation capability.
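One common way to build a known even symmetry into an RBF model, sketched here as an illustration rather than as the authors' exact construction, is to symmetrize each basis function by pairing every centre with its mirror image; any weighted sum of such bases is then an even function by construction:

```python
import math

def gaussian(x, c, width=1.0):
    """Standard Gaussian RBF response at input x for centre c."""
    return math.exp(-((x - c) ** 2) / (2 * width ** 2))

def symmetric_basis(x, c, width=1.0):
    """Symmetrized basis: summing the responses of a centre and its
    mirror image guarantees f(-x) == f(x) for any weight vector."""
    return gaussian(x, c, width) + gaussian(-x, c, width)

# Illustrative centres and weights (not fitted to any data)
centres = [0.5, 1.5, 2.5]
weights = [0.3, -0.2, 0.7]

def f(x):
    return sum(w * symmetric_basis(x, c) for w, c in zip(weights, centres))

print(f(1.2), f(-1.2))  # identical by construction
```

Because the symmetry lives in the basis rather than the weights, an unmodified least squares (or orthogonal least squares) fit over these bases automatically respects the prior.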
Abstract:
The effect of polydispersity on an AB diblock copolymer melt is investigated using lattice-based Monte Carlo simulations. We consider melts of symmetric composition, where the B blocks are monodisperse and the A blocks are polydisperse with a Schultz-Zimm distribution. In agreement with experiment and self-consistent field theory (SCFT), we find that polydispersity causes a significant increase in domain size. It also induces a transition from flat to curved interfaces, with the polydisperse blocks residing on the inside of the interfacial curvature. Most importantly, the simulations show a relatively small shift in the order-disorder transition (ODT) in agreement with experiment, whereas SCFT incorrectly predicts a sizable shift towards higher temperatures.
Abstract:
In the ordered state, symmetric diblock copolymers self-assemble into an anisotropic lamellar morphology. The equilibrium thickness of the lamellae is the result of a delicate balance between enthalpic and entropic energies, which can be tuned by controlling the temperature. Here we devise a simple yet powerful method of detecting tiny changes in the lamellar thickness using optical microscopy. From such measurements we characterize the enthalpic interaction as well as the kinetics of molecules as they hop from one layer to the next in order to adjust the lamellar thickness in response to a temperature jump. The resolution of the measurements facilitates a direct comparison to predictions from self-consistent field theory.
Abstract:
It is widely accepted that equity return volatility increases more following negative shocks than following positive shocks. However, much of value-at-risk (VaR) analysis relies on the assumption that returns are normally distributed (a symmetric distribution). This article considers the effect of asymmetries on the evaluation and accuracy of VaR by comparing estimates based on various models.
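Under the normality assumption mentioned above, one-period VaR is just a quantile of the fitted distribution. A minimal sketch with synthetic, illustrative returns; the asymmetric volatility models the article compares (e.g. GARCH-family estimators) are deliberately not implemented here:

```python
import statistics

# Standard normal 99th-percentile quantile (one-sided)
Z_99 = 2.3263478740408408

def normal_var(returns, z=Z_99):
    """Parametric VaR assuming normally distributed returns:
    VaR = -(mu - z * sigma), reported as a positive loss fraction."""
    mu = statistics.fmean(returns)
    sigma = statistics.stdev(returns)
    return -(mu - z * sigma)

# Synthetic daily returns (illustrative only, not market data)
returns = [0.001, -0.004, 0.002, -0.012, 0.006, -0.003, 0.004, -0.007]
print(normal_var(returns))  # positive loss threshold at 99% confidence
```

Because this estimator is symmetric in the returns, it cannot register the leverage effect the abstract describes, which is precisely why comparing it against asymmetric models is informative.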