846 results for Place image art-making
Abstract:
The dissertation examines the role of the EU courts in new governance. New governance has attracted unprecedented interest in the EU in recent years, manifested in a plethora of instruments and actors at various levels that challenge more traditional forms of command-and-control regulation. New governance, and political experimentation more generally, is thought to sap the ability of the EU judiciary to monitor and review these experiments. The exclusion of the courts is then seen to add to the legitimacy problem of new governance. The starting point of this dissertation is the observation that the marginalised role of the courts rests on theoretical and empirical assumptions which invite scrutiny. The theoretical framework of the dissertation is deliberative democracy and democratic experimentalism. The analysis of deliberative democracy is sustained by an attempt to apply theoretical concepts to three distinctive examples of governance in the EU: the EU Sustainable Development Strategy, the European Chemicals Agency, and the Common Implementation Strategy for the Water Framework Directive. The case studies reveal numerous disincentives and barriers to judicial review, among them the role of courts in shaping governance frameworks, the reviewability of science-based measures, the standing of individuals before the courts, and the justiciability of soft law. The dissertation analyses the conditions of judicial review in each governance environment and proposes improvements. From a more theoretical standpoint, each case study presents a governance regime which builds on legislation that lays out major (guide)lines but leaves details to be filled in at a later stage. Specification of detailed standards takes place through collaborative networks comprising members from national administrations, NGOs, and the Commission. Viewed this way, deliberative problem-solving is needed to bring people together to clarify, elaborate, and revise largely abstract and general norms in order to resolve concrete and specific problems and to make law applicable and enforceable. The dissertation draws attention to the potential of the peer review embedded in these networks and to its profound consequences for judicial accountability structures. It is argued that without this kind of ongoing and dynamic peer review of accountability in governance frameworks, judicial review of new governance is difficult and in some cases impossible. This claim has implications for how we understand the concept of soft law, the role of the courts, participation rights, and the legitimacy of governance measures more generally. The experimentalist architecture of judicial decision-making relies upon a wide variety of actors to provide the conditions for legitimate and efficient review.
Abstract:
The dissertation examines aspects of asymmetrical warfare in the war-making of the German military entrepreneur Ernst von Mansfeld during his involvement in the Thirty Years War. Because the inquiry combines history with military-political theory, the methodological approach of the dissertation is interdisciplinary. The theoretical framework used is that of asymmetrical warfare. The primary sources used in the dissertation are mostly political pamphlets and newsletters; other sources include letters, documents, and contemporaneous chronicles. The secondary sources fall into two categories: literature on the history of the Thirty Years War and textbooks covering the theory of asymmetrical warfare. The first category includes biographical works on Ernst von Mansfeld, as well as general histories of the Thirty Years War and seventeenth-century warfare. The second category combines military theory and political science. The dissertation consists of eight lead chapters, including an introduction and conclusion. The introduction covers the theoretical approach and aims of the dissertation and provides a brief overview of the sources and previous research on Ernst von Mansfeld and asymmetrical warfare in the Thirty Years War. The second chapter covers aspects of Mansfeld's asymmetrical warfare from the perspective of operational art. The third chapter investigates the illegal and immoral aspects of Mansfeld's war-making. The fourth chapter compares the differing methods by which Mansfeld and his enemies raised and financed their armies. The fifth chapter investigates Mansfeld's involvement in indirect warfare. The sixth chapter presents Mansfeld as an object and an agent of image and information war. The seventh chapter looks into the counter-reactions that Mansfeld's asymmetrical warfare provoked from his enemies. The eighth chapter offers a conclusion of the findings. The dissertation argues that asymmetrical warfare presented itself in all the aforementioned areas of Mansfeld's conduct during the Thirty Years War. The operational asymmetry arose from the freedom of movement that Mansfeld enjoyed while his enemies were constrained by the limits of positional warfare. As a non-state operator, Mansfeld was also free to flout the rules of seventeenth-century warfare, which his enemies could not do with equal ease. The raising and financing of military forces was another source of asymmetry, because the nature of early seventeenth-century warfare favoured private military entrepreneurs over embryonic fiscal-military states. The dissertation also argues that other powers fought their own asymmetrical and indirect wars against the Habsburgs through Mansfeld's agency. Image and information were asymmetrical weapons that were both aimed against Mansfeld and utilized by him. Finally, Mansfeld's asymmetrical threat forced the Habsburgs to adapt to his methods, which ultimately led to the formation of a subcontracted Imperial Army under the management and leadership of Albrecht von Wallenstein. Mansfeld's asymmetrical warfare therefore ultimately paved the way for the kind of state-monopolized, organised, and symmetrical warfare that has prevailed from 1648 onwards. The conclusion is that Mansfeld's conduct in the Thirty Years War matched the criteria for asymmetrical warfare.
While traditional historiography treated Mansfeld as an anomaly in the age of European state formation, his asymmetrical warfare increasingly resembles contemporary conflicts in which nation states no longer hold the monopoly on violence.
Abstract:
Right as an Argument. Leo Mechelin and the Finnish Question 1886-1912. At the turn of the 20th century the Finnish Question arose as a political and juridical issue in the international arena. The vaguely defined position of Finland within the Russian Empire led to diverse conclusions concerning the correctness of the February Manifesto of 1899. The issue aroused interest predominantly among a European elite of politicians, cultural figures, and academics. Finns were active in making propaganda for their cause, emphasising the claim that right was on the Finnish side. In this study Elisabeth Stubb compares the Finnish, Russian, and European statements on the Finnish Question and analyses their use of right as an argument. The Finnish Question at the same time offers a case study of a national entity that possesses a political sphere of life without being fully independent, and of its possibilities for advancing its interests in an international context. Leo Mechelin (1839-1914), the leader of the Finnish propaganda organization abroad, is used as a point of departure. The biographical stance is formed into a triangle in which Leo Mechelin, the idea of right, and the Finnish Question abroad are the three cornerstones; the treatment of one cornerstone sheds light on the other two. The metaphor of triangulation also served as a method for reaching "a third stance" on a scientific and political issue that is usually polarised into two opposing alternatives. Adherence to strict legal right could not, in the end, offer a complete, unquestionable, and satisfactory solution to the Finnish Question; it remained dependent on "the right of state wisdom and sound insight". The Finnish propaganda abroad relied almost entirely on alternative ways of making politics. The propaganda did not have a decisive effect on countries' official policies, but it gained unofficial support, especially in public opinion and in academic statements. Mechelin claimed that the political field was dependent on public opinion and scientific research. Together with official politics, these two fields formed a triangle that shared the task of balancing the political arena and preventing it from making unwise decisions or taking an unjust turn. The international sphere worked as a balancing element in the Finnish Question. By claiming the status of a state for Finland, Mechelin tried to secure the country a place in the official international arena. At the same time, and especially when that claim was not fully accepted, he emphasised, and worked in a European context to ensure, that right would become the guiding light not only of international relations but also of policy making within the internal life of the state.
Abstract:
Diffuse optical tomographic image reconstruction relies on advanced numerical models that are too computationally costly to run in real time. Graphics processing units (GPUs) offer massive desktop parallelization that can accelerate these computations. An open-source GPU-accelerated linear algebra library package is used to compute the most intensive matrix-matrix calculations and matrix decompositions involved in solving the system of linear equations. These open-source functions were integrated into the existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPUs (NVIDIA Tesla C1060) with increasing reconstruction problem sizes. These studies indicate that single-precision computations are sufficient for diffuse optical tomographic image reconstruction. The per-iteration acceleration with GPUs compared to traditional CPUs can be up to 40x for three-dimensional reconstruction, where the reconstruction problem is more underdetermined, making GPUs more attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts the reconstruction to no more than 13,377 optical parameters. (C) 2010 Society of Photo-Optical Instrumentation Engineers. DOI: 10.1117/1.3506216
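To make the GPU offload concrete, here is a minimal sketch assuming a Gauss-Newton style update step in which the Jacobian and the data residual come from an existing frequency-domain forward model; CuPy stands in for the GPU linear algebra library used in the paper, and the array sizes and regularisation weight are illustrative.

```python
# Hypothetical sketch, not the library used in the paper: CuPy stands in for the
# GPU-accelerated linear algebra package. Single precision is used throughout,
# as the paper finds it sufficient.
import numpy as np
import cupy as cp

def gpu_update(J, residual, reg=1e-3):
    """Solve (J^T J + reg*I) dx = J^T r on the GPU and return dx on the CPU."""
    Jg = cp.asarray(J, dtype=cp.float32)              # copy Jacobian to GPU memory
    rg = cp.asarray(residual, dtype=cp.float32)
    H = Jg.T @ Jg                                     # dominant matrix-matrix product
    H += reg * cp.eye(H.shape[0], dtype=cp.float32)   # Tikhonov-style regularisation
    dx = cp.linalg.solve(H, Jg.T @ rg)                # dense solve on the GPU
    return cp.asnumpy(dx)

# Toy sizes standing in for a real reconstruction problem
J = np.random.rand(2048, 512).astype(np.float32)
r = np.random.rand(2048).astype(np.float32)
dx = gpu_update(J, r)
```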
Abstract:
This paper presents image reconstruction using the fan-beam filtered backprojection (FBP) algorithm with no backprojection weight from truncated projection data completed by windowed linear prediction (WLP). Image reconstruction from truncated projections aims to reconstruct the object accurately from the available limited projection data. Because the projection data are incomplete, the reconstructed image contains truncation artifacts that extend into the region of interest (ROI), making it unsuitable for further use. Data completion techniques have been shown to be effective in such situations. We use the windowed linear prediction technique for projection completion and then the fan-beam FBP algorithm with no backprojection weight for 2-D image reconstruction, and we evaluate the quality of the image reconstructed after WLP completion.
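The sketch below only illustrates the overall truncate-complete-reconstruct workflow on a parallel-beam phantom with scikit-image; the cosine-taper completion is a placeholder for the windowed linear prediction step described in the paper, and the fan-beam, no-backprojection-weight FBP is replaced by the standard parallel-beam FBP available in the library.

```python
# Illustrative workflow only: parallel-beam FBP from scikit-image replaces the
# fan-beam algorithm of the paper, and a cosine taper replaces WLP completion.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(phantom, theta=angles)

trunc = 60                                   # detector bins lost on each side
truncated = sino.copy()
truncated[:trunc, :] = 0.0                   # simulate lateral truncation
truncated[-trunc:, :] = 0.0

# Placeholder completion: roll the edge samples off smoothly so the ramp filter
# does not see a hard truncation edge (WLP would predict these samples instead).
taper = 0.5 * (1 + np.cos(np.linspace(0, np.pi, trunc)))[:, None]
truncated[:trunc, :] = truncated[trunc, :] * taper[::-1]
truncated[-trunc:, :] = truncated[-trunc - 1, :] * taper

recon = iradon(truncated, theta=angles, filter_name='ramp')
```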
Abstract:
Real-time image reconstruction is essential for improving the temporal resolution of fluorescence microscopy. A number of unavoidable processes, such as optical aberration, noise, and scattering, degrade image quality, thereby making image reconstruction an ill-posed problem. Maximum likelihood is an attractive technique for data reconstruction, especially when the problem is ill-posed, but the iterative nature of the maximum likelihood technique precludes real-time imaging. Here we propose and demonstrate a compute unified device architecture (CUDA) based fast computing engine for real-time 3D fluorescence imaging. A maximum performance boost of 210x is reported. The easy availability of powerful computing engines is a boon and may accelerate the realization of real-time 3D fluorescence imaging. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. http://dx.doi.org/10.1063/1.4754604
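As a rough sketch of the kind of iteration being accelerated, the snippet below runs a Richardson-Lucy style maximum-likelihood deconvolution with GPU FFTs via CuPy; it assumes the PSF has been padded and centred to the same shape as the observed 3D stack and is a stand-in for the authors' CUDA engine, not their implementation.

```python
# Sketch only: multiplicative ML (Richardson-Lucy style) updates using CuPy FFTs.
# `observed` and `psf` are assumed to be 3D arrays of the same shape, with the
# PSF centred (circular convolution).
import cupy as cp

def ml_deconvolve(observed, psf, iterations=50, eps=1e-6):
    obs = cp.asarray(observed, dtype=cp.float32)
    otf = cp.fft.rfftn(cp.asarray(psf, dtype=cp.float32))
    estimate = cp.ones_like(obs) * obs.mean()          # flat initial guess
    for _ in range(iterations):
        blurred = cp.fft.irfftn(cp.fft.rfftn(estimate) * otf, s=obs.shape)
        ratio = obs / (blurred + eps)                  # data-to-model ratio
        correction = cp.fft.irfftn(cp.fft.rfftn(ratio) * cp.conj(otf), s=obs.shape)
        estimate *= correction                         # multiplicative ML update
    return cp.asnumpy(estimate)
```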
Abstract:
We have benchmarked the maximum obtainable recognition accuracy on five publicly available standard word image data sets using semi-automated segmentation and a commercial OCR. These images have been cropped from camera-captured scene images, born-digital images (BDI), and street view images. Using the Matlab-based tool developed by us, we have annotated at the pixel level more than 3600 word images from the five data sets. The word images binarized by the tool, as well as by our own midline analysis and propagation of segmentation (MAPS) algorithm, are recognized using the trial version of Nuance Omnipage OCR, and these two results are compared with the best reported in the literature. The benchmark word recognition rates obtained on the ICDAR 2003, Sign evaluation, Street view, Born-digital, and ICDAR 2011 data sets are 83.9%, 89.3%, 79.6%, 88.5%, and 86.7%, respectively. The results obtained from MAPS-binarized word images without the use of any lexicon are 64.5% and 71.7% for ICDAR 2003 and 2011 respectively, higher than the best values reported in the literature of 61.1% and 41.2%, respectively. The MAPS result of 82.8% on the BDI 2011 dataset matches the performance of the state-of-the-art method based on the power-law transform.
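A hedged sketch of such a benchmarking loop is shown below: each cropped word image is binarized with Otsu thresholding (a simple stand-in for MAPS) and passed to the open-source Tesseract engine via pytesseract rather than the commercial Nuance Omnipage OCR used in the paper; the file paths and ground-truth labels are hypothetical.

```python
# Hedged illustration: Otsu binarisation stands in for MAPS, and Tesseract (via
# pytesseract) stands in for Nuance Omnipage. `paths` and `labels` are assumed
# lists of cropped word image files and their ground-truth transcriptions.
import cv2
import pytesseract

def word_recognition_rate(paths, labels):
    correct = 0
    for path, truth in zip(paths, labels):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(binary, config='--psm 8').strip()  # single word
        correct += int(text == truth)
    return correct / len(labels)
```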
Abstract:
Magnetic Resonance Imaging (MRI) has been widely used in cancer treatment planning, taking advantage of the high resolution and high contrast it provides. The raw data collected in MRI can also be used to obtain temperature maps and has been explored for performing MR thermometry. This review article describes the methods used in MR thermometry, with an emphasis on reconstruction methods that can deliver these temperature maps in real time for a large region of interest. The article also proposes a prior-image constrained reconstruction method for temperature reconstruction in MR thermometry, and a systematic comparison with a state-of-the-art reconstruction method using ex-vivo tissue experiments is presented.
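For orientation, the snippet below sketches proton-resonance-frequency (PRF) shift thermometry, a common way of turning complex MR images into temperature-change maps; the constants and the image pair are illustrative assumptions, and this is not the prior-image constrained method proposed in the article.

```python
# Illustrative PRF-shift thermometry: temperature change is proportional to the
# phase difference between a heated and a baseline complex image.
import numpy as np

GAMMA = 42.58e6    # 1H gyromagnetic ratio, Hz/T
ALPHA = -0.01e-6   # assumed PRF thermal coefficient, per degree C
B0 = 3.0           # main field strength, T
TE = 15e-3         # echo time, s

def temperature_change(image_ref, image_hot):
    """Temperature-change map (degrees C) from two complex MR images."""
    phase_diff = np.angle(image_hot * np.conj(image_ref))
    return phase_diff / (2 * np.pi * GAMMA * ALPHA * B0 * TE)
```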
Abstract:
To perform super resolution of low-resolution images, state-of-the-art methods are based on learning a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper we propose using a single common image as the dictionary, in conjunction with approximate nearest neighbour fields (ANNF), to perform super resolution (SR). By using a common source image, we are able to bypass the learning phase and reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to suit super-resolution, we are able to perform much faster and more accurate SR than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms and show that we are able to achieve better and faster reconstruction without any training.
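A simplified sketch of the dictionary-free idea follows: patches of the bicubically upscaled input are matched against patches from a single high-resolution source image using a brute-force nearest-neighbour search (standing in for the approximate nearest neighbour fields of the paper), and the matched source patches are pasted back. The file names, patch size, and non-overlapping paste strategy are illustrative choices, not the authors'.

```python
# Simplified single-image SR sketch: sklearn brute-force matching replaces ANNF.
import numpy as np
import cv2
from sklearn.neighbors import NearestNeighbors

PATCH = 5

def patches(img, step):
    coords = [(y, x) for y in range(0, img.shape[0] - PATCH + 1, step)
                     for x in range(0, img.shape[1] - PATCH + 1, step)]
    feats = np.array([img[y:y + PATCH, x:x + PATCH].ravel() for y, x in coords])
    return coords, feats

low = cv2.imread('input_low.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)       # placeholder
source = cv2.imread('common_source.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder
upscaled = cv2.resize(low, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

src_coords, src_feats = patches(source, step=2)        # "dictionary" = one image
nn = NearestNeighbors(n_neighbors=1).fit(src_feats)

up_coords, up_feats = patches(upscaled, step=PATCH)    # non-overlapping query patches
_, idx = nn.kneighbors(up_feats)

result = upscaled.copy()
for (y, x), i in zip(up_coords, idx[:, 0]):
    sy, sx = src_coords[i]
    result[y:y + PATCH, x:x + PATCH] = source[sy:sy + PATCH, sx:sx + PATCH]
```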
B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy
Abstract:
An iterative image reconstruction technique employing a B-Spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and they are the shortest polynomial splines. Incorporating the B-spline potential function in the maximum a posteriori reconstruction technique resulted in improved contrast, enhanced resolution, and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence, and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) technique and maximum a posteriori (MAP) estimation with a quadratic potential function shows its superiority over the others. The B-Spline MAP technique can find applications in several imaging modalities of fluorescence microscopy such as selective plane illumination microscopy, localization microscopy, and STED. (C) 2015 Author(s).
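As context for the comparison, here is a minimal sketch of a penalised (one-step-late) MAP iteration with a quadratic roughness potential, the baseline the paper compares against; the proposed B-spline potential would replace the quadratic penalty gradient below. The 2-D setting, PSF handling, and hyper-parameters are illustrative assumptions.

```python
# Baseline sketch: one-step-late MAP (penalised Richardson-Lucy) iteration with a
# quadratic roughness potential; not the paper's B-spline potential.
import numpy as np
from scipy.ndimage import laplace
from scipy.signal import fftconvolve

def quadratic_penalty_gradient(estimate):
    # gradient of the quadratic roughness energy 0.5 * sum(|grad f|^2)
    return -laplace(estimate)

def map_iteration(observed, psf, estimate, beta=0.01, eps=1e-6):
    blurred = fftconvolve(estimate, psf, mode='same')
    ratio = observed / (blurred + eps)
    correction = fftconvolve(ratio, psf[::-1, ::-1], mode='same')   # correlate with PSF
    penalty = 1.0 + beta * quadratic_penalty_gradient(estimate)     # one-step-late term
    return estimate * correction / np.maximum(penalty, eps)
```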
Abstract:
In this paper, we propose a super resolution (SR) method for synthetic images using FeatureMatch. Existing state-of-the-art super resolution methods are learning-based: a low-resolution and high-resolution dictionary pair is trained, and this trained pair is used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we show that by using Approximate Nearest Neighbour Fields (ANNF) and a common source image, we can bypass the learning phase and use a single image as the dictionary, thus reducing the dictionary from a collection obtained from hundreds of training images to a single image. We show that by modifying the latest developments in ANNF computation to suit super resolution, we can perform much faster and more accurate SR than existing techniques. To establish this claim, we compare our algorithm against various state-of-the-art algorithms and show that we are able to achieve better and faster reconstruction without any training phase.
Abstract:
This paper presents the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs a point-wise, stencil, reduction, or data-dependent operation on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also require high memory bandwidth, preventing effective utilization of the parallelism available on modern architectures. For applications that demand high performance, the traditional options are to use optimized libraries like OpenCV or to optimize manually. While using libraries precludes optimization across library routines, manual optimization accounting for both parallelism and locality is very tedious. The focus of our system, PolyMage, is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization automatically. Experimental results on a modern multicore system show that the performance achieved by our automatic approach is up to 1.81x better than that achieved through manual tuning in Halide, a state-of-the-art language and compiler for image processing pipelines. For a camera raw image processing pipeline, our performance is comparable to that of a hand-tuned implementation.
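To illustrate what a pipeline of stages means here (plain NumPy, not PolyMage syntax), the sketch below chains a point-wise stage with a 3x3 stencil stage; this is the producer-consumer pattern that a compiler such as PolyMage fuses and tiles automatically.

```python
# Plain NumPy illustration of a two-stage pipeline: a point-wise stage feeding a
# 3x3 box-blur stencil stage. Not PolyMage code.
import numpy as np

def pipeline(image):
    bright = np.clip(image * 1.2, 0.0, 1.0)          # stage 1: point-wise operation
    blurred = np.zeros_like(bright)                  # stage 2: 3x3 stencil (interior only)
    h, w = bright.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blurred[1:-1, 1:-1] += bright[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
    return blurred / 9.0

out = pipeline(np.random.rand(256, 256).astype(np.float32))
```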
Abstract:
Fingerprints are used for identification in forensics, and identification systems are classified into manual and automatic. Automatic fingerprint identification systems are further classified into latent and exemplar. A novel exemplar technique, Fingerprint Image Verification using Dictionary Learning (FIVDL), is proposed to improve performance on low-quality fingerprints, where the dictionary learning method reduces time complexity by using block processing instead of pixel processing. The dynamic range of the image is adjusted using the Successive Mean Quantization Transform (SMQT) technique, and the frequency-domain noise is reduced using spectral frequency histogram equalization. An adaptive nonlinear dynamic range adjustment technique is then used to determine the local spectral features of the corresponding fingerprint ridge frequency and orientation. The dictionary is constructed using the spatial fundamental frequency determined from the spectral features. These dictionaries help remove the spurious noise present in fingerprints and reduce time complexity through block processing. The dictionaries are then used to reconstruct the image for matching. The proposed FIVDL is verified on the FVC database sets, and experimental results show an improvement over state-of-the-art techniques. (C) 2015 The Authors. Published by Elsevier B.V.
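As a rough illustration of block-wise dictionary learning on a fingerprint image (not the FIVDL pipeline itself, which builds its dictionary from spectral features), the sketch below learns a patch dictionary with scikit-learn and re-synthesises the image block by block; the file name, patch size, and sparsity settings are placeholders.

```python
# Illustration only (not FIVDL): learn a block dictionary from a contrast-enhanced
# fingerprint image and reconstruct it block by block.
import numpy as np
import cv2
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

raw = cv2.imread('fingerprint.png', cv2.IMREAD_GRAYSCALE)       # placeholder input
img = cv2.equalizeHist(raw).astype(np.float32) / 255.0          # simple contrast adjustment

all_patches = extract_patches_2d(img, (8, 8))
data = all_patches.reshape(len(all_patches), -1)
means = data.mean(axis=1, keepdims=True)
data = data - means

dico = MiniBatchDictionaryLearning(n_components=64, alpha=0.5, random_state=0)
dico.fit(data[::20])                               # learn the dictionary on a subsample
codes = dico.transform(data)                       # sparse-code every block
recon = codes @ dico.components_ + means
cleaned = reconstruct_from_patches_2d(recon.reshape(all_patches.shape), img.shape)
```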
Abstract:
We address the problem of denoising images corrupted by multiplicative noise, assumed to follow a Gamma distribution. Compared with additive noise distortion, the effect of multiplicative noise on the visual quality of images is quite severe. We consider the mean-square error (MSE) cost function and derive an expression for an unbiased estimate of the MSE. The resulting multiplicative noise unbiased risk estimator is referred to as MURE. The denoising operation is performed in the wavelet domain by considering the image-domain MURE. The parameters of the denoising function (typically, a shrinkage of wavelet coefficients) are optimized by minimizing MURE. We show that MURE is accurate and close to the oracle MSE, which makes MURE-based image denoising reliable and on par with oracle-MSE-based estimates. Analogous to the other popular risk estimation approaches developed for additive, Poisson, and chi-squared noise degradations, the proposed approach does not assume any prior on the underlying noise-free image. We report denoising results for various noise levels and show that the quality of denoising obtained is on par with the oracle result and better than that obtained using some state-of-the-art denoisers.
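The toy sketch below sets up the experimental situation: a clean image is corrupted by unit-mean Gamma multiplicative noise and denoised by soft-thresholding its wavelet coefficients, with the threshold chosen against the clean image (the oracle MSE). MURE's contribution is to estimate that MSE without access to the clean image; the estimator itself is not reproduced here, and the wavelet, level, and noise settings are illustrative.

```python
# Toy setup: Gamma multiplicative noise plus wavelet soft-thresholding, with the
# threshold picked by the oracle MSE (which MURE estimates without the clean image).
import numpy as np
import pywt
from skimage.data import camera

clean = camera().astype(np.float64) / 255.0
rng = np.random.default_rng(0)
looks = 10                                                  # controls noise severity
noisy = clean * rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)

def denoise(img, threshold, wavelet='db4', level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    shrunk = [coeffs[0]] + [tuple(pywt.threshold(c, threshold, mode='soft') for c in lvl)
                            for lvl in coeffs[1:]]
    return pywt.waverec2(shrunk, wavelet)

thresholds = np.linspace(0.01, 0.3, 30)
mses = [np.mean((denoise(noisy, t) - clean) ** 2) for t in thresholds]
best_threshold = thresholds[int(np.argmin(mses))]
```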
Abstract:
This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal-driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
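A much-simplified sketch of the prediction component appears below: independent Gaussian processes over time predict a single pedestrian's x and y positions with uncertainty, using scikit-learn. The thesis's interacting Gaussian processes couple such predictions across all agents and the robot; that coupling (and the multiple-goal extension) is omitted here, and the toy trajectory is invented for illustration.

```python
# Non-interacting sketch: independent GPs over time for one pedestrian's x and y.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t_obs = np.linspace(0.0, 4.0, 9).reshape(-1, 1)             # observation times (s)
xy_obs = np.column_stack([1.5 * t_obs.ravel(),              # toy, roughly straight walk
                          0.5 * np.sin(t_obs.ravel())])

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
gps = [GaussianProcessRegressor(kernel=kernel).fit(t_obs, xy_obs[:, d]) for d in range(2)]

t_future = np.linspace(4.0, 6.0, 5).reshape(-1, 1)
predictions = [gp.predict(t_future, return_std=True) for gp in gps]   # (mean, std) per axis
```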
Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours and, in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 1 person/m2, while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m2. For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.
Finally, we produce a large database of ground truth pedestrian crowd data. We make this ground truth database publicly available for further scientific study of crowd prediction models, learning from demonstration algorithms, and human robot interaction models in general.