39 results for unity
Abstract:
Using the classical Parzen window (PW) estimate as the desired response, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density (SKD) estimates. The proposed algorithm incrementally minimises a leave-one-out test score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights of the selected sparse model are finally updated using the multiplicative nonnegative quadratic programming algorithm, which ensures the nonnegative and unity constraints for the kernel weights and has the desired ability to reduce the model size further. Except for the kernel width, the proposed method has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Several examples demonstrate the ability of this simple regression-based approach to effectively construct an SKD estimate with comparable accuracy to that of the full-sample optimised PW density estimate. (c) 2007 Elsevier B.V. All rights reserved.
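The multiplicative nonnegative quadratic programming (MNQP) weight update described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the Gaussian Gram matrix, response vector and iteration count are all hypothetical, and what is shown is the standard multiplicative update with a Lagrange multiplier enforcing the unity constraint.

```python
import math

def gaussian_gram(centers, width):
    """Gram matrix of Gaussian kernels; its entries are nonnegative,
    which is what makes the multiplicative update safe."""
    n = len(centers)
    return [[math.exp(-((centers[i] - centers[j]) ** 2) / (2.0 * width ** 2))
             for j in range(n)] for i in range(n)]

def mnqp(C, r, iters=200):
    """Minimise 0.5*w'Cw - r'w subject to w >= 0 and sum(w) = 1
    via a multiplicative nonnegative quadratic programming update."""
    n = len(r)
    w = [1.0 / n] * n
    for _ in range(iters):
        Cw = [sum(C[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = [w[i] / Cw[i] for i in range(n)]
        # Lagrange multiplier enforcing the unity (sum-to-one) constraint
        lam = (1.0 - sum(s[i] * r[i] for i in range(n))) / sum(s)
        w = [max(s[i] * (r[i] + lam), 0.0) for i in range(n)]
        total = sum(w)              # renormalise after clipping round-off
        w = [wi / total for wi in w]
    return w

# Hypothetical three-kernel problem
C = gaussian_gram([0.0, 1.0, 2.0], width=1.0)
r = [0.9, 0.2, 0.8]
w = mnqp(C, r)
```

After the iterations the weights remain nonnegative and sum to unity, and any weights driven to zero can simply be pruned, which is the mechanism by which this step reduces the model size further.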
Abstract:
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed novel tunable-kernel model to effectively construct a very compact and accurate density estimate.
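A tunable-kernel density estimate of the kind described above can be evaluated as a weighted mixture of Gaussians, each with its own centre vector and diagonal covariance matrix. The sketch below only illustrates the model form; the weights, centres and variances are hypothetical placeholders, not values fitted by the proposed procedure.

```python
import math

def diag_gauss(x, mu, var):
    """Gaussian kernel with its own centre vector and diagonal covariance."""
    expo = sum((xk - mk) ** 2 / (2.0 * vk) for xk, mk, vk in zip(x, mu, var))
    norm = math.prod(math.sqrt(2.0 * math.pi * vk) for vk in var)
    return math.exp(-expo) / norm

def density(x, weights, centres, variances):
    """Sparse tunable-kernel estimate: a mixture whose mixing weights
    are nonnegative and sum to unity."""
    return sum(wk * diag_gauss(x, mu, var)
               for wk, mu, var in zip(weights, centres, variances))

# Hypothetical two-kernel model in two dimensions
weights = [0.6, 0.4]                      # nonnegative, summing to unity
centres = [[0.0, 0.0], [3.0, 1.0]]
variances = [[1.0, 0.5], [0.8, 2.0]]      # one diagonal covariance per kernel
p = density([0.5, 0.2], weights, centres, variances)
```

The contrast with the fixed-kernel model is visible in the data structures: each kernel carries its own centre and variance vector rather than sharing one common width.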
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via a concept of fuzzification by using a fuzzy membership function usually based on B-splines and algebraic operators for inference, etc. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions, and unity of support but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as the Bezier-Bernstein polynomial function of barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates that form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed followed by numerical examples to demonstrate the effectiveness of this new data based modelling approach.
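The mapping into barycentric co-ordinates and the resulting Bezier-Bernstein basis can be sketched for a single triangle. This is an illustrative fragment only (one 2-D triangle, degree 3, closed-form barycentric co-ordinates from signed areas); the paper's inverse de Casteljau procedure over a Delaunay tiling is not reproduced here.

```python
from math import factorial

def barycentric(p, a, b, c):
    """Barycentric co-ordinates of point p with respect to triangle (a, b, c),
    computed from ratios of signed areas."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    u = ((b[0] - p[0]) * (c[1] - p[1]) - (c[0] - p[0]) * (b[1] - p[1])) / det
    v = ((c[0] - p[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (c[1] - p[1])) / det
    return u, v, 1.0 - u - v

def bernstein_triangle(n, u, v, w):
    """Degree-n Bezier-Bernstein basis on barycentric co-ordinates:
    nonnegative inside the triangle and summing to unity."""
    basis = {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            coef = factorial(n) // (factorial(i) * factorial(j) * factorial(k))
            basis[(i, j, k)] = coef * u ** i * v ** j * w ** k
    return basis

# A point inside the unit triangle and its cubic basis values
u, v, w = barycentric((0.2, 0.3), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
basis = bernstein_triangle(3, u, v, w)
```

The basis values are nonnegative and sum to one for any point inside the triangle, which is what allows them to be read as fuzzy membership functions.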
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The approach is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like the B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, moreover with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for both univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.
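The univariate basis functions and the additive decomposition can be sketched as follows. The weights here are hypothetical placeholders; in the paper they would be learnt by conventional least squares.

```python
from math import comb

def bernstein_basis(n, x):
    """Univariate degree-n Bernstein basis on [0, 1]: nonnegative
    and summing to unity, so interpretable as fuzzy memberships."""
    return [comb(n, i) * x ** i * (1.0 - x) ** (n - i) for i in range(n + 1)]

def additive_model(xs, weight_sets):
    """Additive decomposition: the n-dimensional model is a sum of
    univariate Bernstein expansions, one per input co-ordinate."""
    total = 0.0
    for x, weights in zip(xs, weight_sets):
        basis = bernstein_basis(len(weights) - 1, x)
        total += sum(wk * bk for wk, bk in zip(weights, basis))
    return total

# Hypothetical 2-D model built from two cubic univariate expansions
weight_sets = [[0.0, 0.5, 0.5, 1.0], [1.0, 0.2, 0.2, 0.0]]
y = additive_model([0.3, 0.7], weight_sets)
basis_check = bernstein_basis(3, 0.3)
```

Because each input dimension contributes its own univariate expansion, the number of basis functions grows linearly in n rather than exponentially, which is the point of the additive decomposition.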
Abstract:
Bringing together a range of little-considered materials, this article assesses the portrayal of Persia in seventeenth-century travel literature and drama. In particular it argues that such texts use their awareness of Islamic sectarian division to portray Persia as a good potential trading partner in preference to the Ottoman Empire. A close reading of John Day, William Rowley and George Wilkins’ The Travailes of the Three English Brothers (1607) demonstrates how the play develops a fantasy model of how relations between Persia and England might function. The potential unity between England and Persia, imagined in terms of both religion and trade, demonstrates how Persia figured as a model ‘other England’ in early modern literature.
Abstract:
This paper evaluates the implications of Osama bin Ladin’s death for the future of al-Qaeda’s global jihad. It critically examines the debate as to the make-up of the group and identifies bin Ladin’s primary role as chief ideologue advocating a defensive jihad to liberate the umma. The rationale and appeal of bin Ladin’s message and Muslims’ reaction to both his statements and al-Qaeda’s increasing use of sectarian violence are assessed in the context of Pan-Islam as political ideology. The paper concludes that while the ideal of Islamic unity and the sentiment of Muslim solidarity are unlikely to vanish, al-Qaeda’s violent jihad has not only failed to achieve these goals but has worked against them, thereby confining al-Qaeda to the political margins.
Abstract:
We present a novel kinetic multi-layer model for gas-particle interactions in aerosols and clouds (KM-GAP) that treats explicitly all steps of mass transport and chemical reaction of semi-volatile species partitioning between gas phase, particle surface and particle bulk. KM-GAP is based on the PRA model framework (Pöschl-Rudich-Ammann, 2007), and it includes gas phase diffusion, reversible adsorption, surface reactions, bulk diffusion and reaction, as well as condensation, evaporation and heat transfer. The size change of atmospheric particles and the temporal evolution and spatial profile of the concentration of individual chemical species can be modelled along with gas uptake and accommodation coefficients. Depending on the complexity of the investigated system, unlimited numbers of semi-volatile species, chemical reactions, and physical processes can be treated, and the model shall help to bridge gaps in the understanding and quantification of multiphase chemistry and microphysics in atmospheric aerosols and clouds. In this study we demonstrate how KM-GAP can be used to analyze, interpret and design experimental investigations of changes in particle size and chemical composition in response to condensation, evaporation, and chemical reaction. For the condensational growth of water droplets, our kinetic model results provide a direct link between laboratory observations and molecular dynamic simulations, confirming that the accommodation coefficient of water at 270 K is close to unity. Literature data on the evaporation of dioctyl phthalate as a function of particle size and time can be reproduced, and the model results suggest that changes in the experimental conditions like aerosol particle concentration and chamber geometry may influence the evaporation kinetics and can be optimized for efficient probing of specific physical effects and parameters.
With regard to oxidative aging of organic aerosol particles, we illustrate how the formation and evaporation of volatile reaction products like nonanal can cause a decrease in the size of oleic acid particles exposed to ozone.
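KM-GAP itself resolves many coupled layers and processes, but the role of an accommodation coefficient close to unity in condensational growth can be illustrated with a deliberately minimal free-molecular sketch. This is not the KM-GAP model: the excess vapour concentration, time step and initial radius below are illustrative values only.

```python
import math

# Toy kinetic condensation sketch (not KM-GAP): in the free-molecular limit
# the net mass flux to the droplet surface is alpha * (vbar/4) * excess
# vapour mass concentration, where alpha is the mass accommodation coefficient.
R = 8.314          # gas constant, J mol^-1 K^-1
M = 0.018          # molar mass of water, kg mol^-1
T = 270.0          # temperature, K
RHO_W = 1000.0     # liquid water density, kg m^-3
ALPHA = 1.0        # mass accommodation coefficient close to unity
EXCESS = 1.0e-5    # excess vapour mass concentration, kg m^-3 (illustrative)

def grow(r0, dt, steps):
    """Explicit Euler integration of dr/dt = alpha*(vbar/4)*excess/rho_w."""
    vbar = math.sqrt(8.0 * R * T / (math.pi * M))  # mean molecular speed, m/s
    r = r0
    for _ in range(steps):
        r += ALPHA * (vbar / 4.0) * EXCESS / RHO_W * dt
    return r

# A 100 nm droplet growing over 10 ms of supersaturated conditions
r_final = grow(r0=1.0e-7, dt=1.0e-4, steps=100)
```

In this free-molecular limit the growth rate scales linearly with alpha, which is why size-resolved growth measurements constrain the accommodation coefficient.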
Abstract:
Near-perfect vector phase conjugation was achieved at 488 nm in a methyl red dye impregnated polymethylmethacrylate film by employing a temperature tuning technique. Using a degenerate four-wave mixing geometry with vertically polarized counterpropagating pump beams, intensity and polarization gratings were written in the dye/polymer system using a vertically or horizontally polarized weak probe beam. Over a limited temperature range, as the sample was heated, the probe reflectivity from the polarization grating dropped but the reflectivity from the intensity grating rose sharply. At a sample temperature of approximately 50°C, the reflectivities of the gratings were measured to be equal and we confirmed that, at this temperature, the measured vector phase conjugate fidelity was very close to unity. We discuss a possible explanation of this effect.
Abstract:
We present a novel kinetic multi-layer model for gas-particle interactions in aerosols and clouds (KM-GAP) that treats explicitly all steps of mass transport and chemical reaction of semi-volatile species partitioning between gas phase, particle surface and particle bulk. KM-GAP is based on the PRA model framework (Pöschl-Rudich-Ammann, 2007), and it includes gas phase diffusion, reversible adsorption, surface reactions, bulk diffusion and reaction, as well as condensation, evaporation and heat transfer. The size change of atmospheric particles and the temporal evolution and spatial profile of the concentration of individual chemical species can be modeled along with gas uptake and accommodation coefficients. Depending on the complexity of the investigated system and the computational constraints, unlimited numbers of semi-volatile species, chemical reactions, and physical processes can be treated, and the model shall help to bridge gaps in the understanding and quantification of multiphase chemistry and microphysics in atmospheric aerosols and clouds. In this study we demonstrate how KM-GAP can be used to analyze, interpret and design experimental investigations of changes in particle size and chemical composition in response to condensation, evaporation, and chemical reaction. For the condensational growth of water droplets, our kinetic model results provide a direct link between laboratory observations and molecular dynamic simulations, confirming that the accommodation coefficient of water at 270 K is close to unity (Winkler et al., 2006). Literature data on the evaporation of dioctyl phthalate as a function of particle size and time can be reproduced, and the model results suggest that changes in the experimental conditions like aerosol particle concentration and chamber geometry may influence the evaporation kinetics and can be optimized for efficient probing of specific physical effects and parameters.
With regard to oxidative aging of organic aerosol particles, we illustrate how the formation and evaporation of volatile reaction products like nonanal can cause a decrease in the size of oleic acid particles exposed to ozone.
Abstract:
The single scattering albedo ω₀λ in atmospheric radiative transfer is the ratio of the scattering coefficient to the extinction coefficient. For cloud water droplets both the scattering and absorption coefficients, and thus the single scattering albedo, are functions of wavelength λ and droplet size r. This note shows that for water droplets at weakly absorbing wavelengths, the ratio ω₀λ(r)/ω₀λ(r₀) of two single scattering albedo spectra is a linear function of ω₀λ(r). The slope and intercept of the linear function are wavelength independent and sum to unity. This relationship allows for a representation of any single scattering albedo spectrum ω₀λ(r) via one known spectrum ω₀λ(r₀). We provide a simple physical explanation of the discovered relationship. Similar linear relationships were found for the single scattering albedo spectra of non-spherical ice crystals.
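The stated relationship can be checked algebraically: if the ratio of the two spectra satisfies w(r)/w(r0) = a*w(r) + b with a + b = 1, then solving for w(r) gives w(r) = b*w(r0)/(1 - a*w(r0)), so any spectrum follows from one known spectrum. The sketch below uses hypothetical values of a, b and a hypothetical known spectrum; it verifies the algebra, not real droplet optics.

```python
# Hypothetical wavelength-independent slope a and intercept b, a + b = 1
a, b = 0.3, 0.7
# Hypothetical known spectrum w(r0) at weakly absorbing wavelengths
w0 = [0.999, 0.99, 0.95, 0.90]

# Spectrum at another droplet size implied by the linear-ratio relation
w_r = [b * w / (1.0 - a * w) for w in w0]

# The ratio of the two spectra should be the linear function a*w(r) + b
ratios = [wr / w for wr, w in zip(w_r, w0)]
check = [a * wr + b for wr in w_r]
```

Since a and b do not depend on wavelength, one known spectrum plus the two constants reproduces the full family of spectra, which is the practical content of the relationship.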
Abstract:
The sonnet in English is usually located as a sixteenth-century innovation, firmly linked to Italian influences, and frequently associated with a distinctively modern consciousness. Yet the speed and comfort with which the form settled into English reflects the fact that the sonnet per se was preceded by a longstanding tradition of 14-line poems in English written in forms derived from French. Indeed, in terms of formal features, the earliest sonnets in English frequently fray into earlier forms, sharing more with the roundel than with later sonnets. This article considers a number of features of style and content that various writers on the sonnet have argued to be characteristic, sometimes definitive, of the sonnet. These features include repetition, formal unity/division of octave and sestet, use of the volta, asymmetry, argument and development, and a preoccupation with contradictions and the self. The article shows that, while it is true that these features are characteristic of many sonnets, they are not peculiarly characteristic of sonnets, and they can all be found in earlier 14-line poems. Furthermore, a number of the earliest sonnets in English do not themselves possess these ‘sonnet-like’ characteristics. The otherness and the modernity of the sonnet have thus been overstated.
Abstract:
We study a two-way relay network (TWRN), where distributed space-time codes are constructed across multiple relay terminals in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, in which case a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using the Dinkelbach-type procedure. We also prove that, if the sum-power of the relay terminals is constrained, then the OPA will activate at most two relays.
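A Dinkelbach-type procedure alternates between solving a parametric subproblem and updating the fractional objective value. The toy sketch below applies the idea to a scalar linear-fractional program over a finite grid; the paper's actual OPA subproblem is not reproduced here, and the objective is purely illustrative.

```python
def dinkelbach(num, den, candidates, tol=1e-12, max_iter=100):
    """Dinkelbach-type iteration for max num(x)/den(x) with den(x) > 0:
    repeatedly solve the parametric problem max num(x) - lam*den(x)
    and update lam until the parametric optimum reaches zero."""
    lam = 0.0
    x_best = candidates[0]
    for _ in range(max_iter):
        x_best = max(candidates, key=lambda x: num(x) - lam * den(x))
        if abs(num(x_best) - lam * den(x_best)) < tol:
            break                       # lam equals the optimal ratio
        lam = num(x_best) / den(x_best)
    return x_best, lam

# Toy linear-fractional program: maximise (2x + 1)/(x + 2) on a grid over [0, 1]
xs = [i / 100.0 for i in range(101)]
x_star, lam_star = dinkelbach(lambda x: 2.0 * x + 1.0, lambda x: x + 2.0, xs)
```

Each iteration only requires maximising a linear function, which is why casting the OPA as a generalized linear fractional program makes it efficiently solvable.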
Abstract:
The spatial structure and phase velocity of tropopause disturbances localized around the subpolar jet in the Southern Hemisphere are investigated using 6-hourly European Centre for Medium-Range Weather Forecasts reanalysis data covering 15 yr (1979–93). The phase velocity and phase structure of the tropopause disturbances are in good agreement with those of an edge wave vertically trapped at the tropopause. However, the vertical distribution of the ratio of potential to kinetic energy exhibits maxima above and below the tropopause and a minimum around the tropopause, in contradiction to edge wave theory for which the ratio is unity throughout the troposphere and stratosphere. This difference in vertical structure between the observed tropopause disturbances and edge wave theory is attributed to the effects of a finite-depth tropopause together with the next-order corrections in Rossby number to quasigeostrophic dynamics.
Abstract:
We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the nonnegative and summing-to-unity constraints of the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm to select significant kernels one at a time based on the minimum integrated square error (MISE) criterion for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Specifically, the complexity of our algorithm is on the order of the number of training data N, which is much lower than the order of N² offered by the best existing sparse kernel density estimators. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with comparable accuracy to those of the classical Parzen window estimate and other existing sparse kernel density estimators.
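For reference, the classical Parzen window estimate that serves as the accuracy baseline above can be written in a few lines. This is a minimal 1-D Gaussian sketch with hypothetical data and kernel width:

```python
import math

def parzen_window(data, h):
    """Classical Parzen window estimate with Gaussian kernels: every
    training point carries the same weight 1/N, so the mixing weights
    are trivially nonnegative and sum to unity."""
    n = len(data)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return lambda x: c * sum(math.exp(-((x - xi) ** 2) / (2.0 * h * h))
                             for xi in data)

# Hypothetical 1-D sample and kernel width
data = [-1.2, -0.4, 0.1, 0.3, 1.5]
p = parzen_window(data, h=0.5)

# Numerical check that the estimate is a proper density (integrates to one)
grid = [-6.0 + 0.01 * i for i in range(1201)]
integral = sum(p(x) for x in grid) * 0.01
```

The full-sample estimate places one kernel per data point; the sparse estimators above instead concentrate the weight on a few kernels while keeping the same nonnegativity and unity constraints.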
Abstract:
The problem of scattering of time-harmonic acoustic waves by an inhomogeneous fluid layer on a rigid plate in ℝ² is considered. The density is assumed to be unity in the media; within the layer the sound speed is assumed to be an arbitrary bounded measurable function. The problem is modelled by the reduced wave equation with variable wavenumber in the layer and a Neumann condition on the plate. To formulate the problem and prove uniqueness of solution a radiation condition appropriate for scattering by infinite rough surfaces is introduced, a generalization of the Rayleigh expansion condition for diffraction gratings. With the help of the radiation condition the problem is reformulated as a system of two second kind integral equations over the layer and the plate. Under additional assumptions on the wavenumber in the layer, uniqueness of solution is proved and the nonexistence of guided wave solutions of the homogeneous problem established. General results on the solvability of systems of integral equations on unbounded domains are used to establish existence and continuous dependence in a weighted norm of the solution on the given data.