48 results for asymptotic suboptimality
Abstract:
The relative potency of common toughening mechanisms is explored for layered solids and particulate solids, with an emphasis on crack multiplication and plasticity. First, the enhancement in toughness due to a parallel array of cracks in an elastic solid is explored, and the stability of co-operative cracking is quantified. Second, the degree of synergistic toughening is determined for combined crack penetration and crack kinking at the tip of a macroscopic, mode I crack; specifically, the asymptotic problem of self-similar crack advance (penetration mode) versus 90° symmetric kinking is considered for an isotropic, homogeneous solid with weak interfaces. Each interface is treated as a cohesive zone of finite strength and toughness. Third, the degree of toughening associated with crack multiplication is assessed for a particulate solid comprising isotropic elastic grains of hexagonal shape, bonded by cohesive zones of finite strength and toughness. The study concludes with the prediction of R-curves for a mode I crack in a multi-layer stack of elastic and elastic-plastic solids. A detailed comparison of the potency of the above mechanisms and their practical applications is given. In broad terms, crack tip kinking can be highly potent, whereas multiple cracking is difficult to activate under quasi-static conditions. Plastic dissipation can give significant toughening in multi-layers, especially at the nanoscale. © 2013 Springer Science+Business Media Dordrecht.
Abstract:
This paper considers channel coding for the memoryless multiple-access channel with a given (possibly suboptimal) decoding rule. Non-asymptotic bounds on the error probability are given, and a cost-constrained random-coding ensemble is used to obtain an achievable error exponent. The achievable rate region recovered by the error exponent coincides with that of Lapidoth in the discrete memoryless case, and remains valid for more general alphabets. © 2013 IEEE.
Abstract:
Linear dimensionality reduction methods are a cornerstone of analyzing high-dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight into some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a black-box, objective-agnostic numerical technology. © 2015 John P. Cunningham and Zoubin Ghahramani.
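As a minimal illustration of the framework this abstract describes (not the authors' solver), principal component analysis can be written as an optimization program over matrices with orthonormal columns, i.e. a Stiefel manifold. For this particular trace objective the manifold optimum is known in closed form via eigendecomposition, which the sketch below uses; all variable names are illustrative.

```python
import numpy as np

# Sketch: PCA as an optimization over the Stiefel manifold
#   maximize  tr(M^T C M)  subject to  M^T M = I,
# where C is the sample covariance and M is a d x k projection basis.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated data
Xc = X - X.mean(axis=0)                                  # center the data
C = Xc.T @ Xc / len(Xc)                                  # covariance matrix

# For this trace objective, the optimum over the manifold is spanned by
# the top-k eigenvectors of C (here k = 2).
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
M = eigvecs[:, -2:]                    # optimal 5x2 orthonormal basis

# The constraint holds, and the objective value equals the sum of the
# top-2 eigenvalues, i.e. the maximum variance captured by a 2-D projection.
assert np.allclose(M.T @ M, np.eye(2))
assert np.isclose(np.trace(M.T @ C @ M), eigvals[-2:].sum())
```

Many of the other methods the survey lists differ only in the objective plugged into this template, which is what makes a generic, objective-agnostic manifold solver possible.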