26 results for Dirichlet-multinomial
Abstract:
A Cauchy problem for general elliptic second-order linear partial differential equations is investigated using a conjugate gradient method; the Dirichlet data, in H^{1/2}(Γ₁ ∪ Γ₃), is assumed available on a larger part of the boundary Γ of the bounded domain Ω than the boundary portion Γ₁ on which the Neumann data is prescribed. We obtain an approximation to the solution of the Cauchy problem by minimizing a certain discrete functional and interpolating using the finite difference or boundary element method. The minimization involves solving equations obtained by discretising mixed boundary value problems for the same operator and its adjoint. It is proved that the solution of the discretised optimization problem converges to the continuous one as the mesh size tends to zero. Numerical results are presented and discussed.
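The minimization at the heart of such schemes is typically carried out by conjugate gradients on a discretised symmetric positive-definite system. The sketch below is purely illustrative, not the paper's formulation: a 1-D finite-difference Laplacian stands in for the discretised operator, and SciPy's CG routine performs the minimization.

```python
# Illustrative sketch: CG minimises J(u) = 0.5*u^T A u - b^T u, whose
# minimiser solves A u = b; A is a toy SPD finite-difference Laplacian.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 50
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

u, info = cg(A, b)                      # info == 0 signals convergence
residual = np.linalg.norm(A @ u - b)
```

In the paper's setting the functional is built from mixed boundary value problems for the operator and its adjoint; the toy system above only shows the CG machinery itself.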
Abstract:
We propose two algorithms that relax either the given Dirichlet data (boundary displacements) or the prescribed Neumann data (boundary tractions) on the over-specified boundary in the alternating iterative algorithm of Kozlov et al. [16] applied to Cauchy problems in linear elasticity. A convergence proof for these relaxation methods is given, along with a stopping criterion. The numerical results obtained using these procedures, in conjunction with the boundary element method (BEM), show the numerical stability, convergence, consistency and computational efficiency of the proposed methods.
Abstract:
In this paper, we consider analytical and numerical solutions to the Dirichlet boundary-value problem for the biharmonic partial differential equation on a disc of finite radius in the plane. The physical interpretation of these solutions is that of the harmonic oscillations of a thin, clamped plate. For the linear, fourth-order, biharmonic partial differential equation in the plane, it is well known that the method of separation of variables in polar coordinates is not possible, in general. However, in this paper, for circular domains in the plane, it is shown that a method, here called quasi-separation of variables, does lead to solutions of the partial differential equation. These solutions are products of solutions of two ordinary linear differential equations: a fourth-order radial equation and a second-order angular differential equation. As is to be expected, without complete separation of the polar variables, there is some restriction on the range of these solutions in comparison with the corresponding separated solutions of the second-order harmonic differential equation in the plane. Notwithstanding these restrictions, the quasi-separation method leads to solutions of the Dirichlet boundary-value problem on a disc with centre at the origin, with boundary conditions determined by the solution and its inward-drawn normal derivative taking the value 0 on the edge of the disc. One significant feature of these biharmonic boundary-value problems, in general, follows from the form of the biharmonic differential expression when represented in polar coordinates. In this form, the differential expression has a singularity at the origin in the radial variable. This singularity translates to a singularity at the origin of the fourth-order radial separated equation; this singularity necessitates the application of a third boundary condition in order to determine a self-adjoint solution to the Dirichlet boundary-value problem.
The penultimate section of the paper reports on numerical solutions to the Dirichlet boundary-value problem; these results are also presented graphically. Two specific cases are studied in detail and numerical values of the eigenvalues are compared with the results obtained in earlier studies.
Abstract:
We investigate a mixed problem with variable lateral conditions for the heat equation that arises in modelling exocytosis, i.e. the opening of a cell boundary in certain biological species for the release of molecules to the exterior of the cell. A Dirichlet condition is imposed on a surface patch of the boundary, and this patch occupies a larger part of the boundary as time increases, modelling where the cell opens (the fusion pore); on the remaining part, a zero Neumann condition is imposed (no molecules can cross this boundary). Uniform concentration is assumed at the initial time. We introduce a weak formulation of this problem and show that it has a unique weak solution. Moreover, we give an asymptotic expansion for the behaviour of the solution near the opening point and for small times. We also give an integral equation for the numerical construction of the leading term in this expansion.
Abstract:
We consider a Cauchy problem for the Laplace equation in a bounded region containing a cut, where the region is formed by removing a sufficiently smooth arc (the cut) from a bounded simply connected domain D. The aim is to reconstruct the solution on the cut from the values of the solution and its normal derivative on the boundary of the domain D. We propose an alternating iterative method which involves solving direct mixed problems for the Laplace operator in the same region. These mixed problems have either a Dirichlet or a Neumann boundary condition imposed on the cut and are solved by a potential approach. Each of these mixed problems is reduced to a system of integral equations of the first kind with logarithmic and hypersingular kernels and at most a square root singularity in the densities at the endpoints of the cut. The full discretization of the direct problems is realized by a trigonometric quadrature method which has super-algebraic convergence. The numerical examples presented illustrate the feasibility of the proposed method.
Abstract:
Latent topics derived by topic models such as Latent Dirichlet Allocation (LDA) are the result of hidden thematic structures which provide further insights into the data. The automatic labelling of such topics derived from social media, however, poses new challenges, since topics may characterise novel events happening in the real world. Existing automatic topic labelling approaches which depend on external knowledge sources become less applicable here, since relevant articles/concepts of the extracted topics may not exist in external sources. In this paper we propose to address the problem of automatic labelling of latent topics learned from Twitter as a summarisation problem. We introduce a framework which applies summarisation algorithms to generate topic labels. These algorithms are independent of external sources and rely only on the identification of dominant terms in documents related to the latent topic. We compare the effectiveness of existing state-of-the-art summarisation algorithms. Our results suggest that summarisation algorithms generate better topic labels, capturing event-related context, compared to the top-n terms returned by LDA. © 2014 Association for Computational Linguistics.
Abstract:
Improving the performance of private-sector small and medium-sized enterprises (SMEs) in a cost-effective manner is a major concern for government. Governments have saved costs by moving information online rather than delivering it through more expensive face-to-face exchanges between advisers and clients. Building on previous work that distinguished between types of advice, this article evaluates whether these changes to delivery mechanisms affect the type of advice received. Using a multinomial logit model of 1334 cases of business advice to small firms collected in England, the study found that advice to improve capabilities was taken by smaller firms that were less likely to have limited liability or undertake business planning. SMEs sought word-of-mouth referrals before taking internal, capability-enhancing advice. This was also the case when that advice was part of a wider package of assistance involving both internal and external aspects. Only when firms took advice that used extant capabilities did they rely on the Internet. Therefore, when the Internet is privileged over face-to-face advice, the changes made by each recipient of advice are likely to diminish, reducing the impact of advice within the economy. It implies that fewer firms will adopt the sorts of management practices that would improve their productivity. © 2014 Taylor & Francis.
Abstract:
The Resource Space Model is a data model that can effectively and flexibly manage the digital resources in a cyber-physical system from multidimensional and hierarchical perspectives. This paper focuses on constructing resource spaces automatically. We propose a framework that organizes a set of digital resources according to different semantic dimensions by combining human background knowledge from WordNet and Wikipedia. The construction process includes four steps: extracting candidate keywords, building semantic graphs, detecting semantic communities and generating the resource space. An unsupervised statistical language topic model (i.e., Latent Dirichlet Allocation) is applied to extract candidate keywords of the facets. To better interpret the meanings of the facets found by LDA, we map the keywords to Wikipedia concepts, calculate word relatedness using WordNet's noun synsets and construct the corresponding semantic graphs. Moreover, semantic communities are identified by the GN (Girvan-Newman) algorithm. After extracting candidate axes based on the Wikipedia concept hierarchy, the final axes of the resource space are sorted and picked out through three different ranking strategies. The experimental results demonstrate that the proposed framework can organize resources automatically and effectively. © 2013 Published by Elsevier Ltd. All rights reserved.
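The community-detection step can be sketched with NetworkX's implementation of the Girvan-Newman algorithm, which splits a graph by repeatedly removing the highest-betweenness edges. The toy keyword graph below is invented for illustration; it is not the paper's semantic graph construction.

```python
# Hedged sketch of the GN community-detection step on an invented term graph:
# two keyword clusters joined by a single bridge edge.
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.Graph()
G.add_edges_from([
    ("cell", "membrane"), ("membrane", "protein"), ("cell", "protein"),
    ("stock", "market"), ("market", "trade"), ("stock", "trade"),
    ("protein", "market"),  # a single bridge between the two clusters
])

communities = next(girvan_newman(G))  # first split of the graph
parts = [set(c) for c in communities]
```

The first split removes the bridge edge (it carries the highest betweenness), separating the two keyword clusters; the paper then maps such communities to candidate facets.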
Abstract:
The contribution of this thesis is in understanding the origins in developing countries of differences in labour wage and household consumption vis-à-vis educational abilities (and by extension employment statuses). This thesis adds to the labour market literature in developing countries by investigating the nature of employment and its consequences for labour wage and household consumption in a developing country. It utilizes multinomial probit, Blinder-Oaxaca, Heckman and quantile regressions to examine one human capital indicator: educational attainment; and two welfare proxies: labour wage and household consumption, in a developing country, Nigeria. It finds that, empirically, the self-employed are a heterogeneous group of individuals made up of a few highly educated individuals and a significant majority of 'not so educated' individuals who mostly earn less than paid workers. It also finds that a significant number of employers enjoy labour wage premiums, and that having a higher proportion of employers in the household has a positive relationship with household consumption. The thesis furthermore discovers an upper educational threshold for women employers not found for men. Interestingly, the thesis also finds that there is indeed an ordering of labour wages into low-income self-employment (which seems to be found mainly in "own account" self-employment), medium-income paid employment, and high-income self-employment (which seems to be found mainly among employers), and that this corresponds to a similar ordering of low, medium and high human capital among labour market participants, as expressed through educational attainments. These show that, as a whole, employers can largely be classed as experiencing pulled self-employment, as they appear to be advantaged in all three criteria (educational attainment, labour wage and household consumption).
A minority of self-employed “own account” workers (specifically those at the upper end of the income distribution who are well educated), can also be classed as experiencing pulled self-employment. The rest of the significant majority of self-employed “own account” workers in this study can be classed as experiencing pushed self-employment in terms of the indicators used.
Abstract:
In this paper, the start-up process is split conceptually into four stages: considering entrepreneurship, intending to start a new business in the next 3 years, nascent entrepreneurship and owning and managing a newly established business. We investigate the determinants of all of these jointly, using a multinomial logit model; this allows the effects of resources and capabilities to vary across the stages. We employ the Global Entrepreneurship Monitor database for the years 2006-2009, containing 8269 usable observations from respondents drawn from the Lower Layer Super Output Areas in the East Midlands (UK), so that individual observations are linked to space. Our results show that the role of education, experience, and availability of 'entrepreneurial capital' in the local neighbourhood varies along the different stages of the entrepreneurial process. In the early stages, the negative (opportunity cost) effect of resource endowment dominates, yet it tends to reverse in the advanced stages, where the positive effect of resources becomes stronger.
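A multinomial logit over such a categorical outcome can be sketched in a few lines. The covariates, coding and data below are invented for illustration and are not the GEM variables used in the paper.

```python
# Hedged sketch: a multinomial (softmax) logit over four start-up stages
# with made-up covariates standing in for the paper's regressors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([rng.normal(size=n),   # e.g. standardised education
                     rng.normal(size=n)])  # e.g. local 'entrepreneurial capital'
stage = rng.integers(0, 4, size=n)         # 0..3: the four conceptual stages

# the lbfgs solver fits a multinomial model when there are >2 classes
model = LogisticRegression(max_iter=500).fit(X, stage)
probs = model.predict_proba(X[:1])         # stage probabilities for one case
```

The key property the paper exploits is that each covariate gets a separate coefficient per stage, so its effect can differ, and even change sign, across the stages.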
Abstract:
The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
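The central idea, inferring the number of clusters from the data via a Dirichlet process prior rather than fixing K, can be sketched with a readily available relative of MAP-DP. The code below is not the authors' MAP-DP implementation; it uses scikit-learn's truncated Dirichlet-process Gaussian mixture on synthetic data purely to illustrate the behaviour.

```python
# Not MAP-DP itself: a truncated Dirichlet-process Gaussian mixture that,
# like MAP-DP, infers an effective number of clusters instead of fixing K.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# two well-separated synthetic clusters; K is NOT told to the model
X = np.vstack([rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2)),
               rng.normal(loc=[4.0, 4.0], scale=0.3, size=(100, 2))])

dpgmm = BayesianGaussianMixture(
    n_components=10,  # truncation level, an upper bound rather than a fixed K
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

# components with non-trivial posterior mass approximate the inferred K
effective_k = int(np.sum(dpgmm.weights_ > 0.05))
```

The Dirichlet process prior shrinks the weights of unneeded components towards zero, so the model concentrates its mass on roughly as many components as the data support.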