18 results for Unified operations (Military science)
Abstract:
This paper presents a multimodal analysis of the online self-representations of the Elite Squad of the military police of Rio de Janeiro, the Special Police Operations Battalion (BOPE). The analysis is placed within the wider context of a “new military urbanism”, evidenced in the ongoing “Pacification” of many of the city’s favelas, in which BOPE plays an active interventionist as well as a symbolic role; this kind of solution clearly fails to address the root causes of violence, which lie in poverty and social inequality. The paper first provides a sociocultural account of BOPE’s role in Rio’s public security and then examines some of the mainly visual mediated discourses the Squad employs in constructing a public image of itself as a modern and efficient, yet at the same time “magical”, police force.
Abstract:
This is an analysis of the case law of the European Court of Human Rights on the obligation on States to plan and control the use of potentially lethal force by their police and military personnel. It illustrates the Court's attachment to the strict or careful scrutiny test and suggests how the Court might want to develop its jurisprudence in the future.
Abstract:
We address the problem of 3D-assisted 2D face recognition in scenarios where the input image is subject to degradations or exhibits intra-personal variations not captured by the 3D model. The proposed solution involves a novel approach to learning a subspace spanned by perturbations caused by the missing modes of variation and by image degradations, using 3D face data reconstructed from 2D images rather than 3D capture. This is accomplished by modelling the difference between the texture maps of the 3D-aligned input and reference images. A training set of these difference maps then defines a perturbation space which can be represented using PCA bases. Assuming that the image perturbation subspace is orthogonal to the 3D face model space, these additive components can be recovered from an unseen input image, resulting in an improved fit of the 3D face model. The linearity of the model leads to efficient fitting. Experiments show that our method achieves very competitive face recognition performance on the Multi-PIE and AR databases. We also present baseline face recognition results on a new dataset exhibiting combined pose and illumination variations as well as occlusion.
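Since the abstract describes the method only in outline, the following is a minimal sketch of how such a perturbation subspace could be built and applied, assuming flattened texture maps and NumPy; the function names, array shapes, and number of PCA components are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the perturbation-subspace idea described in the abstract.
# All names and shapes here are illustrative assumptions, not the paper's code.
import numpy as np

def build_perturbation_basis(aligned_inputs, aligned_references, n_components=50):
    """PCA basis of texture-map differences between 3D-aligned input and reference images.

    aligned_inputs, aligned_references: (N, D) arrays of flattened texture maps.
    Returns (mean_diff, basis), where basis has shape (n_components, D).
    """
    diffs = aligned_inputs - aligned_references          # perturbation samples
    mean_diff = diffs.mean(axis=0)
    centered = diffs - mean_diff
    # SVD of the centered difference matrix yields the PCA bases of the perturbation space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_diff, vt[:n_components]

def remove_perturbation(observed_texture, model_texture, mean_diff, basis):
    """Estimate and subtract the additive perturbation component from an unseen image.

    Assuming the perturbation subspace is (approximately) orthogonal to the 3D
    face model space, the coefficients follow from a linear projection of the
    residual onto the orthonormal PCA basis.
    """
    residual = observed_texture - model_texture - mean_diff
    coeffs = basis @ residual                             # orthonormal rows -> projection
    perturbation = mean_diff + basis.T @ coeffs
    # Subtracting the estimated perturbation gives a cleaner texture for model fitting.
    return observed_texture - perturbation, coeffs
```

Because both steps are linear (a projection followed by a subtraction), the fitting stays efficient, which is consistent with the abstract's remark that the linearity of the model leads to efficient fitting.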