62 results for recombinatory generalization
Abstract:
Consider the Dirichlet boundary value problem for the Helmholtz equation in a non-locally perturbed half-plane with an unbounded, piecewise Lyapunov boundary. This problem models time-harmonic electromagnetic scattering in transverse magnetic polarization by one-dimensional rough, perfectly conducting surfaces. A radiation condition is introduced for the problem, which is a generalization of the usual one used in the study of diffraction by gratings when the solution is quasi-periodic, and allows a variety of incident fields, including an incident plane wave, to be covered by the results obtained. We show in this paper that the boundary value problem for the scattered field has at most one solution. For the case when the whole boundary is Lyapunov and is a small perturbation of a flat boundary, we also prove existence of solution and show a limiting absorption principle.
Abstract:
The problem of scattering of time-harmonic acoustic waves by an inhomogeneous fluid layer on a rigid plate in R2 is considered. The density is assumed to be unity throughout the media; within the layer the sound speed is assumed to be an arbitrary bounded measurable function. The problem is modelled by the reduced wave equation with variable wavenumber in the layer and a Neumann condition on the plate. To formulate the problem and prove uniqueness of solution, a radiation condition appropriate for scattering by infinite rough surfaces is introduced, a generalization of the Rayleigh expansion condition for diffraction gratings. With the help of the radiation condition the problem is reformulated as a system of two second kind integral equations over the layer and the plate. Under additional assumptions on the wavenumber in the layer, uniqueness of solution is proved and the nonexistence of guided wave solutions of the homogeneous problem is established. General results on the solvability of systems of integral equations on unbounded domains are used to establish existence and continuous dependence in a weighted norm of the solution on the given data.
Abstract:
We prove unique existence of solution for the impedance (or third) boundary value problem for the Helmholtz equation in a half-plane with arbitrary L∞ boundary data. This problem is of interest as a model of outdoor sound propagation over inhomogeneous flat terrain and as a model of rough surface scattering. To formulate the problem and prove uniqueness of solution we introduce a novel radiation condition, a generalization of that used in plane wave scattering by one-dimensional diffraction gratings. To prove existence of solution and a limiting absorption principle we first reformulate the problem as an equivalent second kind boundary integral equation to which we apply a form of Fredholm alternative, utilizing recent results on the solvability of integral equations on the real line in [5].
Abstract:
Rigorous upper bounds are derived that limit the finite-amplitude growth of arbitrary nonzonal disturbances to an unstable baroclinic zonal flow in a continuously stratified, quasi-geostrophic, semi-infinite fluid. Bounds are obtained both on the depth-integrated eddy potential enstrophy and on the eddy available potential energy (APE) at the ground. The method used to derive the bounds is essentially analogous to that used in Part I of this study for the two-layer model: it relies on the existence of a nonlinear Liapunov (normed) stability theorem, which is a finite-amplitude generalization of the Charney–Stern theorem. As in Part I, the bounds are valid both for conservative (unforced, inviscid) flow and for forced-dissipative flow when the dissipation is proportional to the potential vorticity in the interior and to the potential temperature at the ground. The character of the results depends on the dimensionless external parameter γ = f₀²ξ/(β₀N²H), where ξ is the maximum vertical shear of the zonal wind, H is the density scale height, and the other symbols have their usual meaning. When γ ≫ 1, corresponding to “deep” unstable modes (vertical scale ≈ H), the bound on the eddy potential enstrophy is just the total potential enstrophy in the system; but when γ ≪ 1, corresponding to “shallow” unstable modes (vertical scale ≈ γH), the eddy potential enstrophy can be bounded well below the total amount available in the system. In neither case can the bound on the eddy APE prevent a complete neutralization of the surface temperature gradient, which is in accord with numerical experience. For the special case of the Charney model of baroclinic instability, and in the limit of infinitesimal initial eddy disturbance amplitude, the bound states that the dimensionless eddy potential enstrophy cannot exceed (γ + 1)²/(24γ²h) when γ ≥ 1, or 1/(6γh) when γ ≤ 1; here h = HN/(f₀L) is the dimensionless scale height and L is the width of the channel.
These bounds are very similar to (though of course generally larger than) ad hoc estimates based on baroclinic-adjustment arguments. The possibility of using these kinds of bounds for eddy-amplitude closure in a transient-eddy parameterization scheme is also discussed.
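For reference, the Charney-model bound quoted in the abstract can be written compactly (this is a LaTeX restatement of the abstract's own expressions, with P denoting the dimensionless eddy potential enstrophy):

```latex
\gamma = \frac{f_0^2\,\xi}{\beta_0 N^2 H}, \qquad h = \frac{H N}{f_0 L}, \qquad
P \le
\begin{cases}
\dfrac{(\gamma+1)^2}{24\,\gamma^2 h}, & \gamma \ge 1,\\[2ex]
\dfrac{1}{6\,\gamma h}, & \gamma \le 1.
\end{cases}
```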
Abstract:
A rigorous bound is derived which limits the finite-amplitude growth of arbitrary nonzonal disturbances to an unstable baroclinic zonal flow within the context of the two-layer model. The bound is valid for conservative (unforced) flow, as well as for forced-dissipative flow when the dissipation is proportional to the potential vorticity. The method used to derive the bound relies on the existence of a nonlinear Liapunov (normed) stability theorem for subcritical flows, which is a finite-amplitude generalization of the Charney–Stern theorem. For the special case of the Phillips model of baroclinic instability, and in the limit of infinitesimal initial nonzonal disturbance amplitude, an improved form of the bound is possible, which states that the potential enstrophy of the nonzonal flow cannot exceed ϵβ², where ϵ = (U − Ucrit)/Ucrit is the (relative) supercriticality. This upper bound turns out to be extremely similar to the maximum predicted by the weakly nonlinear theory. For unforced flow with ϵ < 1, the bound demonstrates that the nonzonal flow cannot contain all of the potential enstrophy in the system; hence in this range of initial supercriticality the total flow must remain, in a certain sense, “close” to a zonal state.
Abstract:
Background: Jargon aphasia is one of the most intractable forms of aphasia, with limited recommendation on amelioration of associated naming difficulties and neologisms. The few naming therapy studies that exist in jargon aphasia have utilized either semantic or phonological approaches, but the results have been equivocal. Moreover, the effect of therapy on characteristics of neologisms is less explored. Aims: This study investigates the effectiveness of a phonological naming therapy (i.e., phonological component analysis, PCA) on picture naming abilities and on quantitative and qualitative changes in neologisms for an individual with jargon aphasia (FF). Methods: FF showed evidence of jargon aphasia with severe naming difficulties and produced a very high proportion of neologisms. A single-subject multiple probe design across behaviors was employed to evaluate the effects of PCA therapy on the accuracy for three sets of words. In therapy, a phonological components analysis chart was used to identify five phonological components (i.e., rhymes, first sound, first sound associate, final sound, number of syllables) for each target word. Generalization effects—change in percent accuracy and error pattern—were examined by comparing pre- and post-therapy responses on the Philadelphia Naming Test, and these responses were analyzed to explore the characteristics of the neologisms. The quantitative change in neologisms was measured by the change in the proportion of neologisms from pre- to post-therapy, and the qualitative change was indexed by the phonological overlap between target and neologism. Results: As a consequence of PCA therapy, FF showed a significant improvement in his ability to name the treated items. His performance in maintenance and follow-up phases remained comparable to his performance during the therapy phases.
Generalization to other naming tasks did not show a change in accuracy, but distinct differences in error pattern (an increase in the proportion of real word responses and a decrease in the proportion of neologisms) were observed. Notably, the decrease in neologisms occurred with a corresponding trend for an increase in the phonological similarity between the neologisms and the targets. Conclusions: This study demonstrated the effectiveness of a phonological therapy for improving naming abilities and reducing the number of neologisms in an individual with severe jargon aphasia. The positive outcome of this research is encouraging, as it provides evidence for effective therapies for jargon aphasia and also emphasizes that the quality and quantity of errors may provide a sensitive outcome measure to determine therapy effectiveness, in particular for client groups who are difficult to treat.
Abstract:
This paper presents a model of spatial equilibrium using a nonlinear generalization of a Markov-chain type model, and shows the dynamic stability of a unique equilibrium. Even at equilibrium, people continue to migrate among regions as well as among agent types, and yet their overall distribution remains unchanged. The model is also adapted to suggest a theory of traffic distribution in a city.
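The equilibrium idea in the abstract above (migration continues while the overall distribution stays fixed) can be illustrated with a minimal linear Markov-chain sketch. The paper's model is a nonlinear generalization; the transition matrix and numbers below are invented purely for illustration:

```python
import numpy as np

# Illustrative migration matrix (not from the paper): P[i, j] is the
# probability that an agent currently in region i moves to region j.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])

x = np.array([0.5, 0.3, 0.2])   # initial population shares across regions
for _ in range(500):            # iterate the migration dynamics
    x = x @ P

# At equilibrium the distribution is (numerically) a fixed point of P:
# agents keep moving between regions, but the shares no longer change.
assert np.allclose(x, x @ P)
```

The nonlinear case replaces the fixed matrix P with transition probabilities that depend on the current distribution, which is why stability requires the argument developed in the paper rather than standard Markov-chain theory.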
Abstract:
In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising a considerably smaller number of parameters than those generated by the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparison with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
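The core OLS step described above, picking the regressor term that most reduces the output error variance, can be sketched with a simple greedy forward-selection routine. This is a generic illustration of the principle, not the paper's exact PD/PNN implementation:

```python
import numpy as np

def ols_select(X, y, n_terms):
    """Greedy forward selection of regressor columns: at each step pick
    the candidate term whose projection explains the most residual
    (output error) variance, then deflate the residual by that term."""
    selected = []
    remaining = list(range(X.shape[1]))
    residual = y.astype(float).copy()
    for _ in range(n_terms):
        # Variance reduction offered by each remaining candidate term.
        reductions = {j: (X[:, j] @ residual) ** 2 / (X[:, j] @ X[:, j])
                      for j in remaining}
        best = max(reductions, key=reductions.get)
        selected.append(best)
        remaining.remove(best)
        xb = X[:, best]
        residual = residual - (xb @ residual / (xb @ xb)) * xb
    return selected

# Toy usage: y depends only on columns 1 and 3, which OLS should find.
np.random.seed(0)
X = np.random.randn(100, 5)
y = 3.0 * X[:, 1] + 0.5 * X[:, 3]
terms = ols_select(X, y, 2)
```

The classical OLS algorithm orthogonalizes the candidate terms explicitly; the residual-deflation form above is an equivalent way to convey the same selection criterion for this small sketch.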
Abstract:
Tests of the new Rossby wave theories that have been developed over the past decade to account for discrepancies between theoretical wave speeds and those observed by satellite altimeters have focused primarily on the surface signature of such waves. It appears, however, that the surface signature of the waves acts only as a rather weak constraint, and that information on the vertical structure of the waves is required to better discriminate between competing theories. Due to the lack of 3-D observations, this paper uses high-resolution model data to construct realistic vertical structures of Rossby waves and compares these to structures predicted by theory. The meridional velocity of a section at 24° S in the Atlantic Ocean is pre-processed using the Radon transform to select the dominant westward signal. Normalized profiles are then constructed using three complementary methods based respectively on: (1) averaging vertical profiles of velocity, (2) diagnosing the amplitude of the Radon transform of the westward propagating signal at different depths, and (3) EOF analysis. These profiles are compared to profiles calculated using four different Rossby wave theories: standard linear theory (SLT), SLT plus mean flow, SLT plus topographic effects, and theory including mean flow and topographic effects. Our results support the classical theoretical assumption that westward propagating signals have a well-defined vertical modal structure associated with a phase speed independent of depth, in contrast with the conclusions of a recent study using the same model but for different locations in the North Atlantic. The model structures are in general surface intensified, with a sign reversal at depth in some regions, notably occurring at shallower depths in the East Atlantic. SLT provides a good fit to the model structures in the top 300 m, but grossly overestimates the sign reversal at depth. 
The addition of mean flow slightly improves the latter issue, but is too surface intensified. SLT plus topography rectifies the overestimation of the sign reversal, but overestimates the amplitude of the structure for much of the layer above the sign reversal. Combining the effects of mean flow and topography provided the best fit for the mean model profiles, although small errors at the surface and mid-depths are carried over from the individual effects of mean flow and topography respectively. Across the section the best fitting theory varies between SLT plus topography and topography with mean flow, with, in general, SLT plus topography performing better in the east where the sign reversal is less pronounced. None of the theories could accurately reproduce the deeper sign reversals in the west. All theories performed badly at the boundaries. The generalization of this method to other latitudes, oceans, models and baroclinic modes would provide greater insight into the variability in the ocean, while better observational data would allow verification of the model findings.
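Of the three profile-construction methods listed above, the EOF analysis is the most compact to demonstrate: the leading EOF of a depth-longitude velocity section gives a dominant vertical structure and its explained variance. The sketch below uses synthetic, surface-intensified data (the paper uses high-resolution model output):

```python
import numpy as np

# Synthetic depth-longitude section of meridional velocity: a
# surface-intensified vertical mode modulated along the section,
# plus noise. All numbers here are illustrative.
rng = np.random.default_rng(0)
depth, lon = 40, 120
mode = np.exp(-np.linspace(0.0, 3.0, depth))       # decays with depth
signal = np.outer(mode, np.sin(np.linspace(0, 8 * np.pi, lon)))
v = signal + 0.05 * rng.standard_normal((depth, lon))

# EOF analysis: remove the along-section mean at each depth, then take
# the SVD; the leading left singular vector is the dominant vertical
# structure, and its squared singular value gives the variance fraction.
v_anom = v - v.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(v_anom, full_matrices=False)
eof1 = U[:, 0]
explained = s[0] ** 2 / np.sum(s ** 2)
```

In the paper's setting, `eof1` would be normalized and compared against the theoretical vertical structures (SLT, SLT plus mean flow, etc.) over the section.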
Abstract:
Semi-analytical expressions for the momentum flux associated with orographic internal gravity waves, and closed analytical expressions for its divergence, are derived for inviscid, stationary, hydrostatic, directionally-sheared flow over mountains with an elliptical horizontal cross-section. These calculations, obtained using linear theory conjugated with a third-order WKB approximation, are valid for relatively slowly-varying, but otherwise generic wind profiles, and given in a form that is straightforward to implement in drag parametrization schemes. When normalized by the surface drag in the absence of shear, a quantity that is calculated routinely in existing drag parametrizations, the momentum flux becomes independent of the detailed shape of the orography. Unlike linear theory in the Ri → ∞ limit, the present calculations account for shear-induced amplification or reduction of the surface drag, and partial absorption of the wave momentum flux at critical levels. Profiles of the normalized momentum fluxes obtained using this model and a linear numerical model without the WKB approximation are evaluated and compared for two idealized wind profiles with directional shear, for different Richardson numbers (Ri). Agreement is found to be excellent for the first wind profile (where one of the wind components varies linearly) down to Ri = 0.5, while not so satisfactory, but still showing a large improvement relative to the Ri → ∞ limit, for the second wind profile (where the wind turns with height at a constant rate keeping a constant magnitude). These results are complementary, in the Ri > O(1) parameter range, to Broad’s generalization of the Eliassen–Palm theorem to 3D flow. They should contribute to improve drag parametrizations used in global weather and climate prediction models.
Abstract:
We propose a new class of neurofuzzy construction algorithms with the aim of maximizing generalization capability, specifically for imbalanced data classification problems, based on leave-one-out (LOO) cross validation. The algorithms operate in two stages: first, an initial rule base is constructed by estimating a Gaussian mixture model with analysis-of-variance decomposition from the input data; the second stage carries out joint weighted least squares parameter estimation and rule selection using an orthogonal forward subspace selection (OFSS) procedure. We show how different LOO-based rule selection criteria can be incorporated with OFSS, and advocate either maximizing the leave-one-out area under the receiver operating characteristic curve, or maximizing the leave-one-out F-measure if the data sets exhibit an imbalanced class distribution. Extensive comparative simulations illustrate the effectiveness of the proposed algorithms.
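The area under the ROC curve advocated above as a selection criterion has a convenient closed form via the rank-sum (Mann-Whitney) identity, which is how it is typically computed inside such selection loops. A minimal sketch (ties between scores are ignored for brevity):

```python
import numpy as np

def auc(scores, labels):
    """AUC of the ROC curve via the rank-sum identity: the probability
    that a randomly chosen positive example outscores a randomly chosen
    negative one. Assumes distinct scores (no tie handling)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Sum of positive ranks, minus the minimum possible sum, normalized.
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

In an LOO setting the scores would be the leave-one-out predictions, so each candidate rule's contribution is judged on held-out behavior rather than training fit.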
Abstract:
The Lincoln–Petersen estimator is one of the most popular estimators used in capture–recapture studies. It was developed for a sampling situation in which two sources independently identify members of a target population. For each of the two sources, it is determined if a unit of the target population is identified or not. This leads to a 2 × 2 table with frequencies f11, f10, f01, f00 indicating the number of units identified by both sources, by the first but not the second source, by the second but not the first source and not identified by any of the two sources, respectively. However, f00 is unobserved so that the 2 × 2 table is incomplete and the Lincoln–Petersen estimator provides an estimate for f00. In this paper, we consider a generalization of this situation for which one source provides not only a binary identification outcome but also a count outcome of how many times a unit has been identified. Using a truncated Poisson count model, truncating multiple identifications larger than two, we propose a maximum likelihood estimator of the Poisson parameter and, ultimately, of the population size. This estimator shows benefits, in comparison with Lincoln–Petersen’s, in terms of bias and efficiency. It is possible to test the homogeneity assumption that is not testable in the Lincoln–Petersen framework. The approach is applied to surveillance data on syphilis from Izmir, Turkey.
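The two estimators contrasted above can be sketched numerically. The Lincoln–Petersen step is exactly as described (estimate f00 from the observed cells under independence); for the count-based generalization, the code below uses the simplification that, with identifications truncated above two, the MLE of the Poisson rate from the one- and two-count frequencies alone is lam = 2*f2/f1. This is an illustrative reduction, not the paper's full two-source likelihood:

```python
import math

def lincoln_petersen(f11, f10, f01):
    """Lincoln–Petersen population size: f00 is estimated as
    f10 * f01 / f11 under independence of the two sources."""
    return f11 + f10 + f01 + f10 * f01 / f11

def truncated_poisson_n(f1, f2):
    """Count-based sketch: f1 and f2 are the numbers of units identified
    exactly once and exactly twice. Conditional on a count of 1 or 2,
    the MLE of the Poisson rate is lam = 2 * f2 / f1; the unobserved
    zero class is then recovered from p0 = exp(-lam)."""
    lam = 2 * f2 / f1
    p0 = math.exp(-lam)
    n_observed = f1 + f2
    return n_observed / (1 - p0)
```

For example, `lincoln_petersen(10, 20, 30)` adds an estimated f00 of 20*30/10 = 60 to the 60 observed units, giving 120.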
Abstract:
This paper discusses ECG classification after parametrizing the ECG waveforms in the wavelet domain. The aim of the work is to develop an accurate classification algorithm that can be used to diagnose cardiac beat abnormalities detected using a mobile platform such as a smart-phone. Continuous-time recurrent neural network classifiers are considered for this task. Records from the European ST-T Database are decomposed in the wavelet domain using discrete wavelet transform (DWT) filter banks, and the resulting DWT coefficients are filtered and used as inputs for training the neural network classifier. Advantages of the proposed methodology are the reduced memory requirement for the signals, which is of relevance to mobile applications, as well as an improvement in the generalization ability of the neural network due to the more parsimonious representation of the signal at its inputs.
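The filter-bank decomposition above can be illustrated with the simplest DWT, the Haar wavelet: each level splits the signal into low-pass (approximation) and high-pass (detail) coefficients, and keeping only the significant coefficients is what yields the parsimonious representation. This is a minimal stand-in, not the wavelet family used in the paper:

```python
import numpy as np

def haar_dwt(x, levels):
    """Multi-level Haar DWT of a 1-D signal whose length is divisible by
    2**levels. Returns the final approximation coefficients and the list
    of detail coefficients, coarsest level first."""
    x = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = x[0::2], x[1::2]
        approx = (even + odd) / np.sqrt(2.0)   # low-pass branch
        detail = (even - odd) / np.sqrt(2.0)   # high-pass branch
        details.append(detail)
        x = approx                              # recurse on the low-pass output
    return x, details[::-1]

# Toy usage on an 8-sample "beat": 2 levels leave 2 approximation and
# 2 + 4 detail coefficients; the transform is orthonormal, so energy
# is preserved exactly across the coefficient sets.
beat = np.arange(1.0, 9.0)
approx, dets = haar_dwt(beat, 2)
```

Discarding the small detail coefficients before feeding the classifier is the filtering step the abstract refers to, trading a little reconstruction fidelity for a much shorter input vector.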
Abstract:
Dynamic electricity pricing can produce efficiency gains in the electricity sector and help achieve energy policy goals such as increasing electric system reliability and supporting renewable energy deployment. Retail electric companies can offer dynamic pricing to residential electricity customers via smart meter-enabled tariffs that proxy the cost to procure electricity on the wholesale market. Current investments in the smart metering necessary to implement dynamic tariffs show policy makers’ resolve for enabling responsive demand and realizing its benefits. However, despite these benefits and the potential bill savings these tariffs can offer, adoption among residential customers remains at low levels. Using a choice experiment approach, this paper seeks to determine whether disclosing the environmental and system benefits of dynamic tariffs to residential customers can increase adoption. Although sampling and design issues preclude wide generalization, we found that our environmentally conscious respondents reduced the discount they required to switch to dynamic tariffs by around 10% in response to higher awareness of environmental and system benefits. The perception that shifting usage is easy to do also had a significant impact, indicating the potential importance of enabling technology. The targeted communication strategy employed by this study may thus be one way to increase adoption and achieve policy goals.
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection, as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel widths, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within a single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as the LROLS algorithm.
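What makes LOOMSE-based selection cheap is that, for regularized least squares, the leave-one-out residuals can be computed in closed form from the hat matrix, without refitting the model n times. The sketch below shows that identity for a plain ridge-regularized design matrix; the paper's algorithm evaluates the same quantity recursively inside the OFR loop:

```python
import numpy as np

def loomse(X, y, reg):
    """Exact leave-one-out MSE of ridge-regularized least squares,
    computed from the hat matrix H = X (X'X + reg*I)^{-1} X':
    the i-th LOO residual equals e_i / (1 - H_ii), so no refits
    are needed. A sketch of the identity, not the paper's exact
    per-term OFR recursion."""
    n, m = X.shape
    H = X @ np.linalg.solve(X.T @ X + reg * np.eye(m), X.T)
    residuals = y - H @ y                      # in-sample residuals
    loo_residuals = residuals / (1.0 - np.diag(H))
    return np.mean(loo_residuals ** 2)
```

Because a candidate kernel's LOOMSE is this cheap to evaluate, it can serve as the objective both for term selection and for tuning each kernel's width and regularization parameter.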