896 results for Objects in art
Abstract:
For active contour modeling (ACM), we propose a novel self-organizing map (SOM)-based approach, called the batch-SOM (BSOM), that attempts to integrate the advantages of SOM- and snake-based ACMs in order to extract the desired contours from images. We employ feature points, in the form of an edge map (as obtained from a standard edge-detection operation), to guide the contour (as in the case of SOM-based ACMs), along with the gradient and intensity variations in a local region to ensure that the contour does not "leak" through the object boundary in the case of faulty feature points (weak or broken edges). In contrast with the snake-based ACMs, however, we do not use an explicit energy functional (based on gradient or intensity) for controlling the contour movement. We extend the BSOM to handle extraction of contours of multiple objects, by splitting a single contour into as many subcontours as there are objects in the image. The BSOM and its extended version are tested on synthetic binary and gray-level images with both single and multiple objects. We also demonstrate the efficacy of the BSOM on images of objects having both convex and nonconvex boundaries. The results demonstrate the superiority of the BSOM over existing SOM- and snake-based approaches. Finally, we analyze the limitations of the BSOM.
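The abstract does not give the update equations, so the toy Java sketch below only illustrates the generic SOM-style guidance such models share: each contour node drifts a small step toward its nearest edge-map feature point. The learning rate `alpha`, the iteration count, and the point sets are illustrative assumptions; the BSOM's batch update and its local gradient/intensity checks are omitted.

```java
import java.util.Arrays;

/** Toy SOM-style contour update: each node drifts toward its nearest
 *  edge-map feature point. Illustrative only; the BSOM additionally uses
 *  local gradient/intensity cues and a batch update, which are omitted. */
public class SomContourToy {
    static void updateContour(double[][] nodes, double[][] features, double alpha) {
        for (double[] node : nodes) {
            double[] best = features[0];
            double bestD = Double.MAX_VALUE;
            for (double[] f : features) {
                double d = Math.hypot(f[0] - node[0], f[1] - node[1]);
                if (d < bestD) { bestD = d; best = f; }
            }
            // Move the node a fraction alpha of the way toward its winning feature point.
            node[0] += alpha * (best[0] - node[0]);
            node[1] += alpha * (best[1] - node[1]);
        }
    }

    public static void main(String[] args) {
        // Hypothetical edge points on a unit square and a small initial contour.
        double[][] features = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };
        double[][] contour  = { {0.5, 0.2}, {0.8, 0.5}, {0.5, 0.8}, {0.2, 0.5} };
        for (int it = 0; it < 50; it++) updateContour(contour, features, 0.1);
        for (double[] p : contour) System.out.println(Arrays.toString(p));
    }
}
```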
Abstract:
High-speed evaluation of a large number of linear, quadratic, and cubic expressions is very important for the modeling and real-time display of objects in computer graphics. Using VLSI techniques, chips called pixel planes have actually been built by H. Fuchs and his group to evaluate linear expressions. In this paper, we describe a topological variant of Fuchs' pixel planes which can evaluate linear, quadratic, cubic, and higher-order polynomials. In our design, we make use of local interconnections only, i.e., interconnections between neighboring processing cells. This leads to the concept of tiling the processing cells for VLSI implementation.
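The abstract does not spell out the cell-level arithmetic, but the principle that lets such designs run on local interconnections only can be shown with the standard forward-difference trick: along a scanline, a quadratic q(x) = a*x^2 + b*x + c can be advanced from one pixel to the next with two additions, so each cell needs only values handed over by its neighbor. The Java sketch below is a software illustration of that principle under assumed coefficients, not the pixel-planes chip design itself.

```java
/** Incremental (forward-difference) evaluation of q(x) = a*x^2 + b*x + c
 *  along a scanline: each step needs only two additions, i.e. only values
 *  passed from the neighboring cell. Illustrates the principle behind
 *  neighbor-connected polynomial evaluators; not the actual VLSI design. */
public class ForwardDifference {
    public static void main(String[] args) {
        double a = 2, b = -3, c = 1;          // example coefficients (assumed)
        int width = 8;                        // pixels on the scanline

        double q  = c;                        // q(0)
        double d1 = a + b;                    // first difference q(1) - q(0)
        double d2 = 2 * a;                    // constant second difference

        for (int x = 0; x < width; x++) {
            double direct = a * x * x + b * x + c;   // reference value for comparison
            System.out.printf("x=%d incremental=%.1f direct=%.1f%n", x, q, direct);
            q  += d1;                         // advance to q(x+1)
            d1 += d2;                         // advance the first difference
        }
    }
}
```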
Tiedostumaton nykytaiteessa: Katse, ääni ja aika vuosituhannen taitteen suomalaisessa nykytaiteessa (The Unconscious in Contemporary Art: The Gaze, Voice and Time in Finnish Contemporary Art at the Turn of the Millennium)
Abstract:
Leevi Haapala explores moving image works, sculptures and installations from a psychoanalytic perspective in his study The Unconscious in Contemporary Art: The Gaze, Voice and Time in Finnish Contemporary Art at the Turn of the Millennium. The artists included in the study are Eija-Liisa Ahtila, Hans-Christian Berg, Markus Copper, Liisa Lounila and Salla Tykkä. The theoretical framework includes different psychoanalytic readings of the concepts of the gaze, voice and temporality. The installations are based on spatiality and temporality, and their detailed reading emphasizes the medium-specific features of the works as well as their fragmentary nature, heterogeneity and affectivity. The study is cross-disciplinary in that it connects perspectives from visual culture, new art history and theory to the interpretation of contemporary art. The most important concepts from psychoanalysis, affect theory and trauma discourse used in the study include affect, object a (objet petit a) as articulated by Jacques Lacan, Sigmund Freud's uncanny (das Unheimliche) and trauma. Das Unheimliche has been translated as uncanny in art history under the influence of Rosalind Krauss. The object of the study, the unconscious in contemporary art, is approached through these concepts. The study focuses on Lacan's additions to the list of partial drives: the gaze and voice as scopic and invocative drives and their interpretations in studies of the moving image. The texts of the American film theorist and art historian Kaja Silverman play a crucial role. The study locates contemporary art as part of trauma culture, which has a tendency to define individual and historical experiences through trauma. Some of the art works point towards trauma, which may appear as a theoretical or fictitious construction. The study presents a comprehensive collection of different kinds of trauma discourse in the field of art research through the texts of Hal Foster, Cathy Caruth, Ruth Leys and Shoshana Felman. The study connects trauma theory with the theoretical analysis of the interference and discontinuity of the moving image in the readings of Susan Buck-Morss, Mary Ann Doane and Peter Osborn, among others. The analysis emphasizes different ways of seeing and multisensoriality in the reception of contemporary art. With their reflections and inverse projections, the surprising mechanisms of Hans-Christian Berg's sculptures are connected with Lacan's views on the early mirroring and imitation attempts of the individual's body image. Salla Tykkä's film trilogy Cave invites one to contemplate the Lacanian theory of the gaze in relation to the experiences of being seen. The three oceanic sculpture installations by Markus Copper are studied through the vocality they create, often through an aggressive way of acting, as well as from the point of view of the functioning of an invocative drive. The study compares the work of fiction and Freud's texts on paranoia and psychosis with Eija-Liisa Ahtila's manuscripts and moving image installations on the same topic. The cinematic time in Liisa Lounila's time-slice video installations is approached through the theoretical study of the unconscious temporal structure. The viewer of the moving image is inside the work in an in-between state: in a space produced by the contents of the work and its technology. The installations of the moving image enable us to inhabit different kinds of virtual bodies or spaces, which do not correspond with our everyday experiences.
Nevertheless, the works of art often try to deconstruct the identification with what has been shown on screen. This way, the viewer's attention can be fixed on his own unconscious experiences in parallel with the work's deconstructed nature as representation. The study shows that contemporary art is a central cultural practice, which allows us to discuss the unconscious in a meaningful way. The study suggests that the agency that is discursively diffuse and consists of several different praxes should be called the unconscious. The emergence of the unconscious can happen in two areas: in contemporary art through different senses and discursive elements, and in the study of contemporary art, which, being a linguistic activity, is sensitive to the movements of the unconscious. One of the missions of art research is to build different kinds of articulated constructs and to open an interpretative space for the nature of art as an event.
Abstract:
Diffuse optical tomography (DOT) using near-infrared (NIR) light is a promising tool for noninvasive imaging of deep tissue. This technique is capable of quantitative reconstruction of absorption coefficient inhomogeneities in tissue. The motivation for reconstructing the optical property variation is that it, and in particular the absorption coefficient variation, can be used to diagnose different metabolic and disease states of tissue. In DOT, as in any other medical imaging modality, the aim is to produce a reconstruction with good spatial resolution and accuracy from noisy measurements. We study the performance of a phase array system for the detection of optical inhomogeneities in tissue. Light transport through tissue is diffusive in nature and can be modeled using the diffusion equation if the optical parameters of the inhomogeneity are close to the optical properties of the background. The amplitude cancellation method, which uses dual out-of-phase sources (phase array), can detect and locate small objects in a turbid medium. The inverse problem is solved using model-based iterative image reconstruction. The diffusion equation is solved using the finite element method to provide the forward model for photon transport. The solution of the forward problem is used to compute the Jacobian, and the resulting system of simultaneous equations is solved using a conjugate gradient search. Simulation studies have been carried out, and the results show that a phase array system can resolve inhomogeneities with sizes of 5 mm when the absorption coefficient of the inhomogeneity is twice that of the background tissue. To validate this result, a prototype dual-source system has been developed. Experiments are carried out by inserting an inhomogeneity of high optical absorption coefficient in an otherwise homogeneous phantom while keeping the scattering coefficient the same. High-frequency (100 MHz) modulated dual out-of-phase laser source light is propagated through the phantom. The interference of these sources creates an amplitude null and a phase shift of 180° along a plane between the two sources with a homogeneous object. A solid resin phantom with inhomogeneities simulating a tumor is used in our experiment. The amplitude and phase changes are found to be disturbed by the presence of the inhomogeneity in the object. The experimental data (amplitude and phase measured at the detector) are used for reconstruction. The results show that the method is able to detect multiple inhomogeneities with sizes of 4 mm. The localization error for a 5 mm inhomogeneity is found to be approximately 1 mm.
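The linear step of the reconstruction described above (a Jacobian built from the forward solution, with the resulting system solved by conjugate gradient search) can be illustrated in miniature. The Java sketch below solves the normal equations J^T J x = J^T r with a plain conjugate gradient loop for a tiny made-up Jacobian and residual; the FEM forward solver, regularization and the outer iterative update are deliberately left out.

```java
/** Minimal conjugate-gradient solve of the normal equations J^T J x = J^T r,
 *  the kind of linear step used inside model-based iterative reconstruction.
 *  J and r below are tiny made-up stand-ins for a real Jacobian and residual. */
public class CgNormalEquations {
    static double[] matVec(double[][] m, double[] v) {
        double[] out = new double[m.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < v.length; j++) out[i] += m[i][j] * v[j];
        return out;
    }
    static double dot(double[] a, double[] b) {
        double s = 0; for (int i = 0; i < a.length; i++) s += a[i] * b[i]; return s;
    }

    public static void main(String[] args) {
        double[][] J = { {1, 2}, {0, 1}, {1, 0} };   // hypothetical 3x2 Jacobian
        double[] r   = { 1, 2, 3 };                  // hypothetical data residual

        // Build A = J^T J and b = J^T r explicitly (fine at this toy size).
        int n = J[0].length;
        double[][] A = new double[n][n];
        double[] b = new double[n];
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < J.length; k++) b[i] += J[k][i] * r[k];
            for (int j = 0; j < n; j++)
                for (int k = 0; k < J.length; k++) A[i][j] += J[k][i] * J[k][j];
        }

        // Conjugate gradient on the symmetric positive-definite system A x = b.
        double[] x = new double[n];
        double[] res = b.clone();                    // residual b - A x, with x = 0
        double[] p = res.clone();
        double rsOld = dot(res, res);
        for (int it = 0; it < 100 && rsOld > 1e-12; it++) {
            double[] Ap = matVec(A, p);
            double alpha = rsOld / dot(p, Ap);
            for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; res[i] -= alpha * Ap[i]; }
            double rsNew = dot(res, res);
            for (int i = 0; i < n; i++) p[i] = res[i] + (rsNew / rsOld) * p[i];
            rsOld = rsNew;
        }
        System.out.println("update = " + java.util.Arrays.toString(x));
    }
}
```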
Abstract:
Conventional encryption techniques are usually applicable to text data and are often unsuited to encrypting multimedia objects, for two reasons. Firstly, the huge sizes associated with multimedia objects make conventional encryption computationally costly. Secondly, multimedia objects come with massive redundancies, which are useful in avoiding encryption of the objects in their entirety. Hence a class of encryption techniques devoted to encrypting multimedia objects like images has been developed. These techniques make use of the fact that the data comprising multimedia objects like images can in general be segregated into two disjoint components, namely salient and non-salient. While the former component contributes to the perceptual quality of the object, the latter only adds minor details to it. In the context of images, the salient component is often much smaller in size than the non-salient component. Encryption effort is considerably reduced if only the salient component is encrypted while leaving the other component unencrypted. A key challenge is to find means to achieve a desirable segregation so that the unencrypted component does not reveal any information about the object itself. In this study, an image encryption approach that uses fractal structures known as space-filling curves in order to reduce the encryption overhead is presented. In addition, the approach also enables high-quality lossy compression of images.
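The abstract does not specify which curve, cipher, or segregation rule is used, so the Java sketch below only illustrates the overall shape of partial encryption along a space-filling scan: a Z-order (Morton) curve stands in for the fractal traversal, a keyed XOR stream stands in for a real cipher, and, purely for illustration, the "salient" subset is taken to be the first fraction of curve positions. All three choices are assumptions, not the method of the study.

```java
import java.util.Random;

/** Illustration of partial image encryption along a space-filling scan.
 *  A Z-order (Morton) curve and a keyed XOR stream are placeholders for the
 *  actual curve and cipher. Assumes a square image whose side is a power of
 *  two, so the Morton index is a bijection onto the scan array. */
public class PartialEncryptionSketch {
    // Interleave the bits of (x, y) to obtain the Z-order index.
    static int mortonIndex(int x, int y) {
        int idx = 0;
        for (int bit = 0; bit < 16; bit++) {
            idx |= ((x >> bit) & 1) << (2 * bit);
            idx |= ((y >> bit) & 1) << (2 * bit + 1);
        }
        return idx;
    }

    static int[] encryptSalientPart(int[][] image, double salientFraction, long key) {
        int h = image.length, w = image[0].length;
        int[] scan = new int[h * w];
        // 1. Reorder the pixels along the space-filling curve.
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                scan[mortonIndex(x, y)] = image[y][x];
        // 2. Encrypt only the designated salient portion with a keyed XOR stream.
        Random keystream = new Random(key);
        int salientLength = (int) (salientFraction * scan.length);
        for (int i = 0; i < salientLength; i++)
            scan[i] ^= keystream.nextInt(256);
        return scan;
    }

    public static void main(String[] args) {
        int[][] image = new int[4][4];           // tiny made-up 4x4 "image"
        for (int y = 0; y < 4; y++) for (int x = 0; x < 4; x++) image[y][x] = 16 * y + x;
        int[] cipher = encryptSalientPart(image, 0.25, 42L);
        System.out.println(java.util.Arrays.toString(cipher));
    }
}
```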
Abstract:
We present an algorithm for tracking objects in a video sequence, based on a novel approach for motion detection. We do not estimate the velocity field. Instead we detect only the direction of motion at edge points and thus isolate sets of points which are moving coherently. We use a Hausdorff distance based matching algorithm to match point sets in a local neighborhood and thus track objects in a video sequence. We show through some examples the effectiveness of the algorithm.
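The matching step rests on the Hausdorff distance between point sets; a minimal Java sketch of the symmetric Hausdorff distance for 2-D point sets is given below, with toy edge points as placeholders. The paper's direction-of-motion detection and local neighborhood search are not reproduced.

```java
/** Symmetric Hausdorff distance between two 2-D point sets, the distance
 *  used to match edge-point sets across frames. Minimal sketch only. */
public class HausdorffDistance {
    static double directed(double[][] a, double[][] b) {
        double worst = 0;
        for (double[] p : a) {
            double nearest = Double.MAX_VALUE;
            for (double[] q : b)
                nearest = Math.min(nearest, Math.hypot(p[0] - q[0], p[1] - q[1]));
            worst = Math.max(worst, nearest);   // farthest "nearest neighbour"
        }
        return worst;
    }

    static double hausdorff(double[][] a, double[][] b) {
        return Math.max(directed(a, b), directed(b, a));
    }

    public static void main(String[] args) {
        double[][] frame1 = { {0, 0}, {1, 0}, {1, 1} };        // toy edge points
        double[][] frame2 = { {0.2, 0.1}, {1.1, 0}, {1, 1.2} };
        System.out.println("H = " + hausdorff(frame1, frame2));
    }
}
```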
Abstract:
Our everyday visual experience frequently involves searching for objects in clutter. Why are some searches easy and others hard? It is generally believed that the time taken to find a target increases as it becomes similar to its surrounding distractors. Here, I show that while this is qualitatively true, the exact relationship is in fact not linear. In a simple search experiment, when subjects searched for a bar differing in orientation from its distractors, search time was inversely proportional to the angular difference in orientation. Thus, rather than taking search reaction time (RT) to be a measure of target-distractor similarity, we can literally turn search time on its head (i.e. take its reciprocal 1/RT) to obtain a measure of search dissimilarity that varies linearly over a large range of target-distractor differences. I show that this dissimilarity measure has the properties of a distance metric, and report two interesting insights that come from this measure: First, for a large number of searches, search asymmetries are relatively rare and, when they do occur, differ by a fixed distance. Second, search distances can be used to elucidate object representations that underlie search - for example, these representations are roughly invariant to three-dimensional view. Finally, search distance has a straightforward interpretation in the context of accumulator models of search, where it is proportional to the discriminative signal that is integrated to produce a response. This is consistent with recent studies that have linked this distance to neuronal discriminability in visual cortex. Thus, while search time remains the more direct measure of visual search, its reciprocal also has the potential for interesting and novel insights. (C) 2012 Elsevier Ltd. All rights reserved.
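The reciprocal transform can be illustrated with made-up numbers: if search time follows the stated proportionality RT = k / dTheta, then 1/RT = dTheta / k grows linearly with the orientation difference. The constant `k` and the angle values in the Java sketch below are purely illustrative, not the study's data.

```java
/** Illustration of the 1/RT transform described in the abstract: hypothetical
 *  reaction times generated from RT = k / dTheta become linear in dTheta once
 *  inverted. All numbers are invented for illustration. */
public class ReciprocalSearchTime {
    public static void main(String[] args) {
        double k = 10.0;                              // illustrative constant (s * deg)
        double[] deltaTheta = { 5, 10, 20, 40, 80 };  // target-distractor angle (deg)
        for (double d : deltaTheta) {
            double rt = k / d;                        // hypothetical search time (s)
            System.out.printf("dTheta=%4.0f deg  RT=%.2f s  1/RT=%.2f s^-1%n",
                              d, rt, 1.0 / rt);
        }
    }
}
```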
Abstract:
Most Java programmers would agree that Java is a language that promotes a philosophy of “create and go forth”. By design, temporary objects are meant to be created on the heap, possibly used and then abandoned to be collected by the garbage collector. Excessive generation of temporary objects is termed “object churn” and is a form of software bloat that often leads to performance and memory problems. To mitigate this problem, many compiler optimizations aim at identifying objects that may be allocated on the stack. However, most such optimizations miss large opportunities for memory reuse when dealing with objects inside loops or when dealing with container objects. In this paper, we describe a novel algorithm that detects bloat caused by the creation of temporary container and String objects within a loop. Our analysis determines which objects created within a loop can be reused. Then we describe a source-to-source transformation that efficiently reuses such objects. Empirical evaluation indicates that our solution can reduce up to 40% of temporary object allocations in large programs, resulting in a performance improvement that can be as high as a 20% reduction in run time, specifically when a program has a high churn rate or when the program is memory intensive and needs to run the GC often.
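The kind of rewrite the abstract describes, reusing temporary container and string-builder objects across loop iterations instead of reallocating them, can be sketched as a hand-written before/after pair. This is an illustration of the reuse idea, not the paper's analysis or the actual output of its transformation.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative before/after of reusing temporary objects across loop
 *  iterations, the kind of rewrite a churn-reducing transformation performs.
 *  Hand-written example; not the paper's tool output. */
public class LoopReuseExample {
    // Before: a fresh ArrayList and StringBuilder are churned on every iteration.
    static void churny(String[] records) {
        for (String rec : records) {
            List<String> fields = new ArrayList<>();
            StringBuilder sb = new StringBuilder();
            for (String f : rec.split(",")) fields.add(f.trim());
            for (String f : fields) sb.append(f).append('|');
            System.out.println(sb);
        }
    }

    // After: the temporaries are hoisted out of the loop and reset each iteration.
    static void reused(String[] records) {
        List<String> fields = new ArrayList<>();
        StringBuilder sb = new StringBuilder();
        for (String rec : records) {
            fields.clear();                 // reuse the container
            sb.setLength(0);                // reuse the builder's backing array
            for (String f : rec.split(",")) fields.add(f.trim());
            for (String f : fields) sb.append(f).append('|');
            System.out.println(sb);
        }
    }

    public static void main(String[] args) {
        String[] records = { "a, b, c", "d, e" };
        churny(records);
        reused(records);
    }
}
```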
Abstract:
Future space-based gravity wave (GW) experiments such as the Big Bang Observatory (BBO), with their excellent projected one-sigma angular resolution, will measure the luminosity distance to a large number of GW sources to high precision, and the redshift of the single galaxies in the narrow solid angles towards the sources will provide the redshifts of the gravity wave sources. One-sigma BBO beams contain the actual source in only 68% of the cases; the beams that do not contain the source may contain a spurious single galaxy, leading to misidentification. To increase the probability of the source falling within the beam, larger beams have to be considered, decreasing the chances of finding single galaxies in the beams. Saini et al. [T.D. Saini, S.K. Sethi, and V. Sahni, Phys. Rev. D 81, 103009 (2010)] argued, largely analytically, that identifying even a small number of GW source galaxies furnishes a rough distance-redshift relation, which could be used to further resolve sources that have multiple objects in the angular beam. In this work we further develop this idea by introducing a self-calibrating iterative scheme which works in conjunction with Monte Carlo simulations to determine the luminosity distance to GW sources with progressively greater accuracy. This iterative scheme allows one to determine the equation of state of dark energy to within an accuracy of a few percent for a gravity wave experiment possessing a beam width an order of magnitude larger than BBO (and therefore having a far poorer angular resolution). This is achieved with no prior information about the nature of dark energy from other data sets such as type Ia supernovae, baryon acoustic oscillations, cosmic microwave background, etc. DOI: 10.1103/PhysRevD.87.083001
Abstract:
The presence of software bloat in large flexible software systems can hurt energy efficiency. However, identifying and mitigating bloat is fairly effort intensive. To enable such efforts to be directed where there is a substantial potential for energy savings, we investigate the impact of bloat on power consumption under different situations. We conduct the first systematic experimental study of the joint power-performance implications of bloat across a range of hardware and software configurations on modern server platforms. The study employs controlled experiments to expose different effects of a common type of Java runtime bloat, excess temporary objects, in the context of the SPECPower_ssj2008 workload. We introduce the notion of equi-performance power reduction to characterize the impact, in addition to peak power comparisons. The results show a wide variation in energy savings from bloat reduction across these configurations. Energy efficiency benefits at peak performance tend to be most pronounced when bloat affects a performance bottleneck and non-bloated resources have low energy-proportionality. Equi-performance power savings are highest when bloated resources have a high degree of energy proportionality. We develop an analytical model that establishes a general relation between resource pressure caused by bloat and its energy efficiency impact under different conditions of resource bottlenecks and energy proportionality. Applying the model to different "what-if" scenarios, we predict the impact of bloat reduction and corroborate these predictions with empirical observations. Our work shows that the prevalent software-only view of bloat is inadequate for assessing its power-performance impact and instead provides a full systems approach for reasoning about its implications.
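One plausible reading of the equi-performance comparison introduced above is to interpolate the measured power-versus-throughput curves of the bloated and bloat-reduced configurations and compare the power each draws at a matched throughput level. The Java sketch below does exactly that with invented operating points; it is a reading of the notion for illustration, not the paper's actual metric or analytical model.

```java
/** One reading of "equi-performance power reduction": compare the power the
 *  bloated and bloat-reduced configurations draw at the same throughput,
 *  using linear interpolation between measured operating points. All numbers
 *  are invented, and the paper's exact definition may differ. */
public class EquiPerformancePower {
    // Linearly interpolate the power draw at the requested throughput.
    static double powerAt(double[] throughput, double[] power, double target) {
        for (int i = 1; i < throughput.length; i++) {
            if (target <= throughput[i]) {
                double t = (target - throughput[i - 1]) / (throughput[i] - throughput[i - 1]);
                return power[i - 1] + t * (power[i] - power[i - 1]);
            }
        }
        return power[power.length - 1];   // beyond the last measured point
    }

    public static void main(String[] args) {
        // Hypothetical (throughput in ops/s, power in W) operating points.
        double[] tpBloated = { 1000, 2000, 3000 },       pBloated = { 120, 160, 210 };
        double[] tpLean    = { 1000, 2000, 3000, 3600 }, pLean    = { 110, 140, 175, 200 };

        double target = 3000;   // match the bloated configuration's peak throughput
        double saving = powerAt(tpBloated, pBloated, target) - powerAt(tpLean, pLean, target);
        System.out.printf("equi-performance power reduction at %.0f ops/s: %.0f W%n",
                          target, saving);
    }
}
```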
Abstract:
We reinterpret and generalize conjectures of Lam and Williams as statements about the stationary distribution of a multispecies exclusion process on the ring. The central objects in our study are the multiline queues of Ferrari and Martin. We make some progress on some of the conjectures in different directions. First, we prove Lam and Williams' conjectures in two special cases by generalizing the rates of the Ferrari-Martin transitions. Secondly, we define a new process on multiline queues, which have a certain minimality property. This gives another proof for one of the special cases; namely arbitrary jump rates for three species. (C) 2014 Elsevier Inc. All rights reserved.
Abstract:
The correlation clustering problem is a fundamental problem in both theory and practice, and it involves identifying clusters of objects in a data set based on their similarity. A traditional modeling of this question as a graph theoretic problem involves associating vertices with data points and indicating similarity by adjacency. Clusters then correspond to cliques in the graph. The resulting optimization problem, Cluster Editing (and several variants), is very well studied algorithmically. In many situations, however, translating clusters to cliques can be somewhat restrictive. A more flexible notion would be that of a structure where the vertices are mutually "not too far apart", without necessarily being adjacent. One such generalization is realized by structures called s-clubs, which are graphs of diameter at most s. In this work, we study the question of finding a set of at most k edges whose removal leaves us with a graph whose components are s-clubs. Recently, it has been shown that unless the Exponential Time Hypothesis (ETH) fails, Cluster Editing (whose components are 1-clubs) does not admit a sub-exponential time algorithm [STACS, 2013]. That is, there is no algorithm solving the problem in time 2^{o(k)} n^{O(1)}. However, surprisingly, they show that when the number of cliques in the output graph is restricted to d, then the problem can be solved in time O(2^{O(sqrt(dk))} + m + n). We show that this sub-exponential time algorithm for a fixed number of cliques is rather an exception than a rule. Our first result shows that, assuming the ETH, there is no algorithm solving the s-Club Cluster Edge Deletion problem in time 2^{o(k)} n^{O(1)}. We show, further, that even the problem of deleting edges to obtain a graph with d s-clubs cannot be solved in time 2^{o(k)} n^{O(1)} for any fixed s, d >= 2. This is a radical contrast with the situation established for cliques, where sub-exponential algorithms are known.
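The target structure itself is easy to state algorithmically: after deleting the chosen edges, every connected component must have diameter at most s. The Java sketch below checks that property for a graph given as adjacency lists by running a BFS from every vertex. It is only a verifier for the target condition, not an algorithm for the (hard) edge-deletion problem discussed in the abstract.

```java
import java.util.ArrayDeque;
import java.util.Arrays;

/** Verifier for the s-club condition: every connected component of the graph
 *  has diameter at most s. Runs a BFS from each vertex; it checks the target
 *  structure only and does not solve the edge-deletion problem. */
public class SClubCheck {
    static boolean componentsAreSClubs(int[][] adj, int s) {
        int n = adj.length;
        for (int source = 0; source < n; source++) {
            int[] dist = new int[n];
            Arrays.fill(dist, -1);
            dist[source] = 0;
            ArrayDeque<Integer> queue = new ArrayDeque<>();
            queue.add(source);
            while (!queue.isEmpty()) {
                int u = queue.poll();
                for (int v : adj[u]) {
                    if (dist[v] == -1) {
                        dist[v] = dist[u] + 1;
                        if (dist[v] > s) return false;   // a reachable vertex is too far away
                        queue.add(v);
                    }
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A path on 4 vertices: one component of diameter 3.
        int[][] path = { {1}, {0, 2}, {1, 3}, {2} };
        System.out.println(componentsAreSClubs(path, 2));  // false
        System.out.println(componentsAreSClubs(path, 3));  // true
    }
}
```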
Abstract:
Illumination plays an important role in optical microscopy. Kohler illumination, introduced more than a century ago, has been the backbone of optical microscopes. The last few decades have seen the evolution of new illumination techniques meant to improve certain imaging capabilities of the microscope. Most of them are, however, not amenable for wide-field observation and hence have restricted use in microscopy applications such as cell biology and microscale profile measurements. The method of structured illumination microscopy has been developed as a wide-field technique for achieving higher performance. Additionally, it is also compatible with existing microscopes. This method consists of modifying the illumination by superposing a well-defined pattern on either the sample itself or its image. Computational techniques are applied on the resultant images to remove the effect of the structure and to obtain the desired performance enhancement. This method has evolved over the last two decades and has emerged as a key illumination technique for optical sectioning, super-resolution imaging, surface profiling, and quantitative phase imaging of microscale objects in cell biology and engineering. In this review, we describe various structured illumination methods in optical microscopy and explain the principles and technologies involved therein. (C) 2015 Optical Society of America
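One concrete instance of "removing the effect of the structure" computationally is the classic three-phase optical-sectioning reconstruction, in which three images recorded with the illumination grid shifted by one third of a period are combined pixel-wise as I = sqrt((I1-I2)^2 + (I2-I3)^2 + (I3-I1)^2). The Java sketch below applies that formula to tiny made-up frames; it illustrates just one of the structured-illumination variants surveyed in the review.

```java
/** Pixel-wise three-phase optical-sectioning reconstruction for structured
 *  illumination: I = sqrt((I1-I2)^2 + (I2-I3)^2 + (I3-I1)^2), applied to three
 *  images taken with the pattern shifted by a third of a period. This is only
 *  one of the reconstruction schemes mentioned in the review. */
public class SimSectioning {
    static double[][] sectioned(double[][] i1, double[][] i2, double[][] i3) {
        int h = i1.length, w = i1[0].length;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double a = i1[y][x] - i2[y][x];
                double b = i2[y][x] - i3[y][x];
                double c = i3[y][x] - i1[y][x];
                out[y][x] = Math.sqrt(a * a + b * b + c * c);
            }
        return out;
    }

    public static void main(String[] args) {
        // Tiny made-up 1x3 images standing in for the three phase-shifted frames.
        double[][] i1 = { {10, 12, 9} }, i2 = { {11, 8, 9} }, i3 = { {9, 10, 9} };
        for (double v : sectioned(i1, i2, i3)[0]) System.out.printf("%.2f ", v);
        System.out.println();
    }
}
```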
Abstract:
Provisional measures became part of the Brazilian legislative process with the advent of the 1988 Constitution. The original provision for provisional measures in art. 62 of the Federal Constitution gradually proved unsatisfactory because of deficiencies in the prerequisites of relevance and urgency, which came to be seen as too generic or subjective, leading to their excessive issuance. With the aim of restricting the power of the head of the Executive, Constitutional Amendment No. 32/2002 was approved, regulated by Resolution 01/2002 of the National Congress. There is general consensus about the affront to the legislative process caused by Congress's loss of competence stemming from the excessive issuance of provisional measures, making it necessary to limit this harmful executive practice, which has repeatedly blocked the voting agenda in Parliament.
Abstract:
Maia Duguine, Susana Huidobro and Nerea Madariaga (eds.)