389 results for Cambridge University


Relevance: 60.00%

Abstract:

Asylum is being gradually denuded of the national institutional mechanisms (judicial, legislative and administrative) that provide the framework for a fair and effective asylum hearing. In this sense, there is an ongoing ‘denationalization’ or ‘deformalization’ of the asylum process. This chapter critically examines one of the linchpins of this trend: the erection of pre-entry measures at ports of embarkation in order to prevent asylum seekers from physically accessing the territory of the state. Pre-entry measures comprise the core requirement that foreigners possess an entry visa granting permission to enter the state of destination. Visa requirements are increasingly implemented by immigration officials posted abroad or by officials of transit countries pursuant to bilateral agreements (so-called ‘juxtaposed’ immigration controls). Private carriers, which are subject to sanctions if they bring persons to a country who do not have permission to enter, also engage in a form of de facto immigration control on behalf of states. These measures constitute a type of ‘externalized’ or ‘exported’ border that pushes the immigration boundaries of the state as far from its physical boundaries as possible. Pre-entry measures have a crippling impact on the ability of asylum seekers to access the territory of states to claim asylum. In effect, states have ‘externalized’ asylum by replacing the legal obligation on states to protect refugees arriving at ports of entry with what are perceived to be no more than moral obligations towards asylum seekers arriving at the external border of the state.

Relevance: 60.00%

Abstract:

We treat two related moving boundary problems. The first is the ill-posed Stefan problem for melting a superheated solid in one Cartesian coordinate. Mathematically, this is the same problem as that for freezing a supercooled liquid, with applications to crystal growth. By applying a front-fixing technique with finite differences, we reproduce existing numerical results in the literature, concentrating on solutions that break down in finite time. This sort of finite-time blow-up is characterised by the speed of the moving boundary becoming unbounded in the blow-up limit. The second problem, which is an extension of the first, is proposed to simulate aspects of a particular two-phase Stefan problem with surface tension. We study this novel moving boundary problem numerically, and provide results supporting the hypothesis that it exhibits a type of finite-time blow-up similar to that of the more complicated two-phase problem. The results are unusual in the sense that the addition of surface tension appears to transform a well-posed problem into an ill-posed one.
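
For orientation, here is a minimal sketch of the kind of front-fixing finite-difference scheme the abstract refers to, applied to the classical one-phase supercooled Stefan problem. The governing equations are standard, but the discretisation, grid size, and initial data below are illustrative assumptions, not the authors' actual scheme:

```python
import numpy as np

# Minimal explicit front-fixing sketch for the one-phase supercooled Stefan
# problem (a standard ill-posed test case; parameter values are illustrative):
#
#   u_t = u_xx      on 0 < x < s(t),
#   u(s(t), t) = 0,  u_x(0, t) = 0,   ds/dt = -u_x(s(t), t).
#
# The substitution xi = x / s(t) maps the moving domain onto [0, 1]:
#
#   U_t = U_xixi / s^2 + xi * (s'/s) * U_xi,   with  s' = -U_xi(1, t) / s.

N = 100                            # interior resolution in xi
dxi = 1.0 / N
xi = np.linspace(0.0, 1.0, N + 1)

s = 1.0                            # initial front position
U = -1.2 * (1.0 - xi**2)           # supercooling below -1 drives blow-up
t, t_end = 0.0, 1.0

while t < t_end:
    # Front speed from a one-sided difference at xi = 1.
    sdot = -(U[-1] - U[-2]) / (dxi * s)

    # Explicit Euler step; dt respects the diffusion stability limit.
    dt = 0.25 * (s * dxi) ** 2
    Uxx = (U[2:] - 2.0 * U[1:-1] + U[:-2]) / dxi**2
    Uxi = (U[2:] - U[:-2]) / (2.0 * dxi)
    U[1:-1] += dt * (Uxx / s**2 + xi[1:-1] * (sdot / s) * Uxi)

    U[0] = U[1]                    # insulated wall: U_xi(0, t) = 0
    U[-1] = 0.0                    # melting temperature pinned at the front
    s += dt * sdot
    t += dt

    if abs(sdot) > 1e3 or s < 1e-6:
        # The front speed becoming unbounded is the finite-time blow-up
        # signature discussed in the abstract.
        print(f"blow-up signature at t = {t:.5f}: s = {s:.5f}, ds/dt = {sdot:.1f}")
        break
```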

Relevance: 60.00%

Abstract:

This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and of estimates of that dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large-margin classification and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
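
For readers unfamiliar with the quantity, one representative Vapnik-Chervonenkis generalisation bound of the kind the book surveys is the following; it is stated from general knowledge rather than quoted from the text, and the constants vary between formulations:

```latex
% For a binary hypothesis class H of VC dimension d, with probability at
% least 1 - \delta over a sample S of m i.i.d. examples, every h in H has
% true error er_P(h) close to its sample error:
\mathrm{er}_P(h) \;\le\; \widehat{\mathrm{er}}_S(h)
  + \sqrt{\frac{8}{m}\left( d \ln\frac{2em}{d} + \ln\frac{4}{\delta} \right)}.
```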

Relevance: 60.00%

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
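
In symbols (the notation \hat{R}_m, F_k, and \mathrm{pen}_m is chosen here for illustration and is not taken from the paper), the selection rule and the two guarantees the abstract contrasts can be written as:

```latex
% Nested models F_1 \subseteq F_2 \subseteq \cdots, empirical risk \hat{R}_m,
% true risk R, penalty pen_m(k).  Penalized model selection picks
\hat{f}_k = \operatorname*{arg\,min}_{f \in F_k} \hat{R}_m(f), \qquad
\hat{k}   = \operatorname*{arg\,min}_{k} \Big\{ \hat{R}_m(\hat{f}_k) + \mathrm{pen}_m(k) \Big\}.

% If pen_m(k) upper-bounds the maximal deviation
% sup_{f \in F_k} | R(f) - \hat{R}_m(f) |, the classical argument gives the
% "twice the penalty" guarantee mentioned above:
R\big(\hat{f}_{\hat{k}}\big) \;\le\; \inf_k \Big\{ \inf_{f \in F_k} R(f) + 2\,\mathrm{pen}_m(k) \Big\}.

% The paper's oracle inequality keeps this shape with a tighter penalty
% (an upper bound on the estimation error) and a constant C in place of 2:
R\big(\hat{f}_{\hat{k}}\big) \;\le\; \inf_k \Big\{ \inf_{f \in F_k} R(f) + C\,\mathrm{pen}_m(k) \Big\}.
```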