35 results for interlinguistic terminological equivalence
Abstract:
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it can readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
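As a toy illustration of the Horvitz-Thompson idea in this zero-truncated setting, the sketch below fits a single (unmixed) zero-truncated Poisson to observed positive capture counts and scales the number of observed individuals by the estimated probability of being observed at least once. It is a minimal special case, not the NPMLE of the mixing distribution discussed above, and the function names and data are hypothetical.

```python
import math

def fit_truncated_poisson(counts, iters=200):
    """MLE of lambda for a zero-truncated Poisson.

    The MLE solves lambda / (1 - exp(-lambda)) = mean(counts); here this is
    done by the fixed-point iteration lambda <- mean * (1 - exp(-lambda)).
    """
    mean = sum(counts) / len(counts)
    lam = mean  # starting value
    for _ in range(iters):
        lam = mean * (1.0 - math.exp(-lam))
    return lam

def horvitz_thompson_size(counts):
    """Horvitz-Thompson population size estimate n / (1 - p0), where p0 is the
    fitted probability of a zero count (an unobserved individual)."""
    n = len(counts)
    lam = fit_truncated_poisson(counts)
    p0 = math.exp(-lam)
    return n / (1.0 - p0)

# Illustrative data: how often each observed individual was captured (all >= 1).
observed = [1, 1, 1, 2, 1, 3, 1, 2, 1, 1, 4, 2]
print(horvitz_thompson_size(observed))
```

Replacing the single Poisson by a mixture of truncated Poisson densities, as in the article, changes the fitting step but leaves the final scaling by the estimated non-zero probability in place.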
Abstract:
The need for consistent assimilation of satellite measurements for numerical weather prediction led operational meteorological centers to assimilate satellite radiances directly using variational data assimilation systems. More recently there has been a renewed interest in assimilating satellite retrievals (e.g., to avoid the use of relatively complicated radiative transfer models as observation operators for data assimilation). The aim of this paper is to provide a rigorous and comprehensive discussion of the conditions for the equivalence between radiance and retrieval assimilation. It is shown that two requirements need to be satisfied for the equivalence: (i) the radiance observation operator needs to be approximately linear in a region of the state space centered at the retrieval and with a radius of the order of the retrieval error; and (ii) any prior information used to constrain the retrieval should not underrepresent the variability of the state, so as to retain the information content of the measurements. Both these requirements can be tested in practice. When these requirements are met, retrievals can be transformed so as to represent only the portion of the state that is well constrained by the original radiance measurements and can be assimilated in a consistent and optimal way, by means of an appropriate observation operator and a unit matrix as error covariance. Finally, specific cases when retrieval assimilation can be more advantageous (e.g., when the estimate sought by the operational assimilation system depends on the first guess) are discussed.
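Requirement (i) can in principle be probed numerically. The sketch below is only a rough illustration under assumed interfaces (the callable H, the retrieval x_ret and its error covariance are hypothetical placeholders, not part of any operational system): it compares the radiance operator against a finite-difference tangent-linear approximation over perturbations whose size is set by the retrieval error.

```python
import numpy as np

def linearity_check(H, x_ret, ret_err_cov, n_samples=50, seed=0):
    """Crude test of requirement (i): H should be close to linear within
    perturbations of x_ret drawn from the retrieval error covariance.

    H           : callable mapping a state vector (1-D ndarray) to radiances
    x_ret       : retrieval, as a 1-D ndarray in state space
    ret_err_cov : retrieval error covariance matrix (positive definite)
    Returns the worst relative departure from linearity over the sampled
    perturbations; values much smaller than 1 support treating H as linear.
    """
    rng = np.random.default_rng(seed)
    n = x_ret.size
    y0 = H(x_ret)
    # Finite-difference Jacobian of H at the retrieval.
    eps = 1e-4
    J = np.empty((y0.size, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (H(x_ret + dx) - H(x_ret - dx)) / (2.0 * eps)
    L = np.linalg.cholesky(ret_err_cov)
    worst = 0.0
    for _ in range(n_samples):
        d = L @ rng.standard_normal(n)           # perturbation ~ retrieval error
        nonlin = H(x_ret + d) - (y0 + J @ d)     # residual of the linear model
        worst = max(worst, np.linalg.norm(nonlin) / max(np.linalg.norm(J @ d), 1e-12))
    return worst
```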
Abstract:
We prove the equivalence of three weak formulations of the steady water waves equations, namely: the velocity formulation, the stream function formulation and the Dubreil-Jacotin formulation, under weak Hölder regularity assumptions on their solutions.
Abstract:
Two sources of bias arise in conventional loss predictions in the wake of natural disasters. One source of bias stems from the neglect of animal genetic resource loss. A second stems from the failure to identify, in addition to the direct effects of such loss, the indirect effects arising from impacts on animal-human interactions. We argue that, in some contexts, the magnitude of the bias introduced by neglecting animal genetic resource stocks is substantial. We show, in addition, and contrary to popular belief, that the biases attributable to losses in distinct genetic resource stocks are very likely to be the same. We derive the formal equivalence across the distinct resource stocks via an envelope result in a model that forms the mainstay of enquiry in subsistence farming, and we validate the theory empirically in an application for the World Society for the Protection of Animals.
Abstract:
In this brief note we prove orbifold equivalence between two potentials described by strangely dual exceptional unimodular singularities of types K14 and Q10 in two different ways. The matrix factorizations proving the orbifold equivalence give rise to equations whose solutions are permuted by Galois groups which differ for different expressions of the same singularity.
Abstract:
In this paper we consider the 2D Dirichlet boundary value problem for Laplace's equation in a non-locally perturbed half-plane, with data in the space of bounded and continuous functions. We show uniqueness of solution, using standard Phragmén-Lindelöf arguments. The main result is to propose a boundary integral equation formulation, to prove equivalence with the boundary value problem, and to show that the integral equation is well posed by applying a recent partial generalisation of the Fredholm alternative in Arens et al. [J. Int. Equ. Appl. 15 (2003) pp. 1-35]. This then leads to an existence proof for the boundary value problem. Keywords: boundary integral equation method, water waves, Laplace's equation.
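For orientation, the problem treated here can be stated schematically as follows (the notation D for the perturbed half-plane and Γ for its boundary is assumed for this note, not quoted from the paper):

```latex
% D = the non-locally perturbed half-plane, Gamma = its boundary,
% BC(Gamma) = bounded continuous functions on Gamma.
\[
  \Delta u = 0 \ \text{in } D, \qquad
  u = f \ \text{on } \Gamma, \qquad
  f \in BC(\Gamma),
\]
% with u required to be bounded and continuous up to the boundary.
```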
Abstract:
Most parameterizations for precipitating convection in use today are bulk schemes, in which an ensemble of cumulus elements with different properties is modelled as a single, representative entraining-detraining plume. We review the underpinning mathematical model for such parameterizations, in particular by comparing it with spectral models in which elements are not combined into the representative plume. The chief merit of a bulk model is that the representative plume can be described by an equation set with the same structure as that which describes each element in a spectral model. The equivalence relies on an ansatz for detrained condensate introduced by Yanai et al. (1973) and on a simplified microphysics. There are also conceptual differences in the closure of bulk and spectral parameterizations. In particular, we show that the convective quasi-equilibrium closure of Arakawa and Schubert (1974) for spectral parameterizations cannot be carried over to a bulk parameterization in a straightforward way. Quasi-equilibrium of the cloud work function assumes a timescale separation between a slow forcing process and a rapid convective response. But, for the natural bulk analogue to the cloud work function (the dilute CAPE), the relevant forcing is characterised by a different timescale, and so its quasi-equilibrium entails a different physical constraint. Closures of bulk parameterizations that use the non-entraining parcel value of CAPE do not suffer from this timescale issue. However, the Yanai et al. (1973) ansatz must be invoked as a necessary ingredient of those closures.
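For concreteness, the sketch below discretises the single entraining-plume budget that underlies the bulk model, dh_c/dz = -eps (h_c - h_env), for a generic conserved plume property such as moist static energy. Variable names and values are purely illustrative, and detrainment is omitted because, in this idealisation, it does not alter in-plume properties.

```python
def plume_profile(h_env, eps, dz, h0):
    """Forward-Euler integration of the bulk entraining-plume equation
    d h_c / dz = -eps * (h_c - h_env) on a uniform grid of spacing dz."""
    h_c = [h0]
    for h_e in h_env[1:]:
        h_prev = h_c[-1]
        h_c.append(h_prev - eps * dz * (h_prev - h_e))
    return h_c

# Illustrative environment: moist static energy (J/kg) decreasing with height.
h_env = [345e3 - 2e3 * k for k in range(20)]
profile = plume_profile(h_env, eps=1e-3, dz=250.0, h0=345e3)
print(profile[-1])
```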
Abstract:
The double triangular test was introduced twenty years ago, and the purpose of this paper is to review applications that have been made since then. In fact, take-up of the method was rather slow until the late 1990s, but in recent years several clinical trial reports have been published describing its use in a wide range of therapeutic areas. The core of this paper is a detailed account of five trials that have been published since 2000 in which the method was applied to studies of pancreatic cancer, breast cancer, myocardial infarction, epilepsy and bedsores. Before those accounts are given, the method is described and the history behind its evolution is presented. The future potential of the method for sequential case-control and equivalence trials is also discussed.
Abstract:
During fatigue tests of cortical bone specimens, non-zero strains occur at the unload portion of the cycle (zero stress) and progressively accumulate as the test progresses. This non-zero strain is hypothesised to be mostly, if not entirely, describable as creep. This work examines the rate of accumulation of this strain and quantifies its stress dependency. A published relationship determined from creep tests of cortical bone (Journal of Biomechanics 21 (1988) 623) is combined with knowledge of the stress history during fatigue testing to derive an expression for the amount of creep strain in fatigue tests. Fatigue tests on 31 bone samples from four individuals showed strong correlations between creep strain rate and both stress and “normalised stress” (σ/E) during tensile fatigue testing (0–T). Combined results were good (r² = 0.78), and differences between the various individuals, in particular, vanished when effects were examined against normalised stress values. Constants of the regression showed equivalence to constants derived in creep tests. The universality of the results, with respect to four different individuals of both sexes, shows great promise for use in computational models of fatigue in bone structures.
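The stress dependency described here is commonly expressed as a power law in the normalised stress, dε/dt = A (σ/E)^n, which can be fitted by linear regression in log-log space. The numbers in the sketch below are invented purely to show the fitting step; they are not data from the study.

```python
import numpy as np

# Hypothetical measurements: steady creep strain rate (1/s) at several values
# of the normalised stress sigma/E (dimensionless); illustrative values only.
sigma_over_E = np.array([2.0e-3, 3.0e-3, 4.0e-3, 5.0e-3, 6.0e-3])
creep_rate   = np.array([1.1e-8, 9.0e-8, 4.5e-7, 1.6e-6, 4.0e-6])

# Power-law model  d(eps)/dt = A * (sigma/E)**n  is linear in log space.
n_exp, logA = np.polyfit(np.log(sigma_over_E), np.log(creep_rate), 1)
A = np.exp(logA)
print(f"exponent n ~ {n_exp:.2f}, coefficient A ~ {A:.3g} 1/s")
```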
Abstract:
The research uses a sociological perspective to build an improved, context-specific understanding of innovation diffusion within the UK construction industry. It is argued that there is an iterative interplay between actors and the social system they occupy that directly influences the diffusion process as well as the methodology adopted. The research builds upon previous findings that argued a level of best fit for the three innovation diffusion concepts of cohesion, structural equivalence and thresholds. That level of best fit is analysed here using empirical data from the UK construction industry. This analysis allows an understanding of how the relative importance of these concepts actually varies within the stages of the innovation diffusion process. The conclusion that the level of relevance fluctuates in relation to the stages of the diffusion process is a new development in the field.
Abstract:
The UK Construction Industry has been criticized for being slow to change and adopt innovations. The idiosyncrasies of participants, their roles in a social system and the contextual differences between sections of the UK Construction Industry are viewed as being paramount to explaining innovation diffusion within this context. Three innovation diffusion theories from outside construction management literature are introduced, Cohesion, Structural Equivalence and Thresholds. The relevance of each theory, in relation to the UK Construction Industry, is critically reviewed using literature and empirical data. Analysis of the data results in an explanatory framework being proposed. The framework introduces a Personal Awareness Threshold concept, highlights the dominant role of Cohesion through the main stages of diffusion, together with the use of Structural Equivalence during the later stages of diffusion and the importance of Adoption Threshold levels.
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, for which it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
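As a rough sketch of how candidate rules might be ranked by such an identifiability measure, the snippet below computes an A-optimality-style score: the trace of the inverse information matrix of a rule's membership-weighted regression block. The exact criterion, weighting and notation used in the paper may differ; all names and data here are illustrative.

```python
import numpy as np

def a_optimality(phi, memberships):
    """A-optimality-style score for one fuzzy rule.

    phi         : (N, p) regression matrix over the training data
    memberships : (N,) fuzzy membership of each sample in this rule
    Returns trace((M^T M)^{-1}) for the rule-weighted regressors M; smaller
    values indicate a better conditioned, more identifiable rule.
    """
    M = memberships[:, None] * phi            # rule-weighted regressors
    info = M.T @ M                            # p x p information matrix
    return np.trace(np.linalg.inv(info))

# Illustrative use: score one candidate rule on random data.
rng = np.random.default_rng(1)
phi = rng.standard_normal((100, 3))
memberships = rng.uniform(size=100)
print(a_optimality(phi, memberships))
```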
Abstract:
External interferences can severely degrade the performance of an over-the-horizon radar (OTHR), so suppression of external interferences in a strong clutter environment is a prerequisite for target detection. Traditional suppression solutions usually begin with clutter suppression in either the time or the frequency domain, followed by interference detection and suppression. Building on this traditional solution, this paper proposes a method characterized by joint clutter suppression and interference detection: eigenvalues are analyzed in a short-time moving window centered at different time positions; clutter is suppressed by discarding the three largest eigenvalues at every time position, while detection is achieved by analyzing the remaining eigenvalues at each position. Restoration is then achieved by forward-backward linear prediction using interference-free data surrounding the interference position. In the numerical computation, the eigenvalue decomposition (EVD) is replaced by the singular value decomposition (SVD), based on the equivalence of the two procedures. Data processing and experimental results show the method's effectiveness, lowering the noise floor by about 10-20 dB.
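A minimal sketch of the windowed SVD step is given below: within each short slow-time window, the three strongest singular components are discarded as clutter and the residual singular values are returned for inspection in the interference-detection step. The Hankel construction and all names are assumptions made for illustration; they are not the paper's exact processing chain.

```python
import numpy as np

def suppress_clutter(window, n_clutter=3, rows=None):
    """Discard the n_clutter strongest SVD components of one slow-time window.

    A Hankel matrix is built from the window, its largest singular components
    (taken to be clutter) are zeroed, and the window is reconstructed by
    anti-diagonal averaging.  Returns the clutter-suppressed window and the
    residual singular values, which can be screened for interference.
    """
    N = len(window)
    L = rows or N // 2
    K = N - L + 1
    H = np.array([window[i:i + K] for i in range(L)])   # L x K Hankel matrix
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    s_res = s.copy()
    s_res[:n_clutter] = 0.0                             # drop the clutter subspace
    H_res = (U * s_res) @ Vh
    # Anti-diagonal averaging back to a 1-D sequence.
    out = np.zeros(N, dtype=H_res.dtype)
    cnt = np.zeros(N)
    for i in range(L):
        out[i:i + K] += H_res[i]
        cnt[i:i + K] += 1
    return out / cnt, s[n_clutter:]
```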