971 results for object modeling from images


Relevance: 100.00%

Abstract:

This thesis investigates the problem of estimating the three-dimensional structure of a scene from a sequence of images. Structure information is recovered from images continuously using shading, motion or other visual mechanisms. A Kalman filter represents structure in a dense depth map. With each new image, the filter first updates the current depth map by a minimum variance estimate that best fits the new image data and the previous estimate. Then the structure estimate is predicted for the next time step by a transformation that accounts for relative camera motion. Experimental evaluation shows the significant improvement in quality and computation time that can be achieved using this technique.
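The update/predict cycle described above can be sketched as a scalar Kalman filter applied per pixel of the depth map. The function names, the known camera translation `dz`, and the noise values below are illustrative, not the thesis implementation.

```python
# Minimal per-pixel sketch of the minimum-variance depth update described above.
# Names and the constant-translation prediction are illustrative assumptions.

def kalman_fuse(depth_prior, var_prior, depth_meas, var_meas):
    """Fuse the prior depth estimate with a new measurement (1D Kalman update)."""
    gain = var_prior / (var_prior + var_meas)        # Kalman gain
    depth = depth_prior + gain * (depth_meas - depth_prior)
    var = (1.0 - gain) * var_prior                   # uncertainty shrinks
    return depth, var

def predict(depth, var, dz, process_noise=0.01):
    """Predict the estimate forward for known camera translation dz along the ray."""
    return depth - dz, var + process_noise           # uncertainty grows

# One filter cycle for a single pixel:
d, v = kalman_fuse(2.0, 0.5, 2.4, 0.5)   # equal variances: midpoint, halved variance
d, v = predict(d, v, dz=0.1)
```

With equal prior and measurement variances the fused estimate is the midpoint with half the variance, which is the minimum-variance property the abstract relies on.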

Relevance: 100.00%

Abstract:

Rapid judgments about the properties and spatial relations of objects are the crux of visually guided interaction with the world. Vision begins, however, with essentially pointwise representations of the scene, such as arrays of pixels or small edge fragments. For adequate time-performance in recognition, manipulation, navigation, and reasoning, the processes that extract meaningful entities from the pointwise representations must exploit parallelism. This report develops a framework for the fast extraction of scene entities, based on a simple, local model of parallel computation. An image chunk is a subset of an image that can act as a unit in the course of spatial analysis. A parallel preprocessing stage constructs a variety of simple chunks uniformly over the visual array. On the basis of these chunks, subsequent serial processes locate relevant scene components and assemble detailed descriptions of them rapidly. This thesis defines image chunks that facilitate the most potentially time-consuming operations of spatial analysis: boundary tracing, area coloring, and the selection of locations at which to apply detailed analysis. Fast parallel processes for computing these chunks from images, and chunk-based formulations of indexing, tracing, and coloring, are presented. These processes have been simulated and evaluated on the Lisp Machine and the Connection Machine.
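As an illustration of one of these operations, "area coloring" is connected-component labeling. The serial flood-fill sketch below shows the baseline computation the chunks are built to accelerate; it is illustrative, not the report's chunk-based parallel formulation.

```python
# Serial "area coloring": label 4-connected regions of equal value in a 2D grid.
from collections import deque

def color_areas(grid):
    """Return (labels, number_of_regions) for a 2D grid of values."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                continue                         # already colored
            next_label += 1
            q = deque([(r, c)])
            labels[r][c] = next_label
            while q:                             # breadth-first flood fill
                y, x = q.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < rows and 0 <= nx < cols \
                       and not labels[ny][nx] and grid[ny][nx] == grid[y][x]:
                        labels[ny][nx] = next_label
                        q.append((ny, nx))
    return labels, next_label

_, n = color_areas([[1, 1, 0],
                    [0, 1, 0],
                    [0, 0, 1]])   # four 4-connected regions
```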

Relevance: 100.00%

Abstract:

C.M. Onyango, J.A. Marchant and R. Zwiggelaar, 'Modelling uncertainty in agricultural image analysis', Computers and Electronics in Agriculture 17 (3), 295-305 (1997)

Relevance: 100.00%

Abstract:

This paper introduces ART-EMAP, a neural architecture that uses spatial and temporal evidence accumulation to extend the capabilities of fuzzy ARTMAP. ART-EMAP combines supervised and unsupervised learning and a medium-term memory process to accomplish stable pattern category recognition in a noisy input environment. The ART-EMAP system features (i) distributed pattern registration at a view category field; (ii) a decision criterion for mapping between view and object categories which can delay categorization of ambiguous objects and trigger an evidence accumulation process when faced with a low confidence prediction; (iii) a process that accumulates evidence at a medium-term memory (MTM) field; and (iv) an unsupervised learning algorithm to fine-tune performance after a limited initial period of supervised network training. ART-EMAP dynamics are illustrated with a benchmark simulation example. Applications include 3-D object recognition from a series of ambiguous 2-D views.
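The evidence-accumulation step (iii) can be caricatured as summing per-view category scores in a medium-term memory until one object category leads by a confidence margin. The scores, the margin, and the stopping rule below are invented for illustration and are not the published ART-EMAP equations.

```python
# Toy sketch of evidence accumulation: commit only when one category is
# confidently ahead, otherwise wait for another 2-D view.

def accumulate_until_confident(view_scores, margin=0.3):
    """Sum per-view score vectors; stop when the top category leads by `margin`."""
    mtm = [0.0] * len(view_scores[0])          # medium-term memory field
    for t, scores in enumerate(view_scores, 1):
        mtm = [m + s for m, s in zip(mtm, scores)]
        ranked = sorted(mtm, reverse=True)
        if ranked[0] - ranked[1] >= margin:    # low ambiguity: commit
            return mtm.index(ranked[0]), t
    return mtm.index(max(mtm)), len(view_scores)

# Three ambiguous 2-D views of the same object; no single view is decisive.
views = [[0.40, 0.35, 0.25], [0.45, 0.30, 0.25], [0.50, 0.30, 0.20]]
category, n_views = accumulate_until_confident(views)
```

No single view clears the margin here, but the accumulated evidence does after the third view, which is the behavior the abstract describes for low-confidence predictions.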

Relevance: 100.00%

Abstract:

Histopathology is the clinical standard for tissue diagnosis. However, histopathology has several limitations: it requires tissue processing, which can take 30 minutes or more, and it requires a highly trained pathologist to diagnose the tissue. Additionally, the diagnosis is qualitative, and the lack of quantitation can lead to observer-specific diagnoses. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.

Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion has been successfully sampled, thus preventing an unnecessary repeat biopsy procedure. Rapid and low-cost histological processing could also be useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could potentially reduce inter-observer error, particularly in the diagnosis of borderline lesions.

To address these needs, high quality microscopic images of the tissue must be obtained in rapid timeframes for a pathologic assessment to be useful in guiding the intervention. Optical microscopy is a powerful technique for obtaining high-resolution images of tissue morphology in real time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e. unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to obtain high-resolution images of tissue morphology by employing fluorescence microscopy and vital fluorescent stains, and to develop a quantitative strategy to segment and quantify tissue features in heterogeneous images, such as nuclei and the surrounding stroma, to enable automated diagnosis of thick tissues.

To achieve these goals, three specific aims were proposed. The first aim was to develop an image processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image processing community for image compression, enhancement, and restoration, but had never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated through imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA. Additionally, acriflavine shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine positive features or APFs (which correspond to RNA and DNA) from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density, and background heterogeneity was demonstrated through simulations. Specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased density in tumor and tumor + muscle images compared to images containing muscle. Next, variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively.
The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after resection of a sarcoma, using local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for detecting local recurrence were 78% and 82%, respectively. The results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential to enable automated and rapid surveillance of tissue pathology.

Two primary challenges were identified in the work in aim 1. First, while SCA can be used to isolate features, such as APFs, from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin. Thus, improvements to the microscopic imaging system were made to (1) improve image contrast through rejecting out-of-focus background fluorescence and to (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed in which the entire FOV is illuminated with a defined spatial pattern rather than scanning a focal spot, such as in confocal microscopy.

Thus, the second aim was to improve image contrast and increase the FOV by employing wide-field, non-contact structured illumination microscopy, and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared to uniform illumination images. Additionally, the FOV is over 13X larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation results of SIM images revealed that SCA was unable to segment large numbers of APFs in the tumor images. Because the FOV of the SIM system is over 13X larger than the FOV of the fluorescence microendoscope, dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption associated with SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach for APF segmentation in SIM images. MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, the frequency of the illumination pattern used in SIM was carefully selected, because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm⁻¹ led to the highest SNR and the lowest percent error associated with MSER segmentation.
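The MSER idea behind the APF segmentation, keeping regions whose area stays nearly constant as an intensity threshold sweeps, can be sketched on a toy image. The grid, thresholds, and stability criterion below are simplified stand-ins, not the optimized implementation used in this work.

```python
# Toy MSER-style sweep: a region is "stable" if its area barely changes
# between adjacent intensity thresholds.
from collections import deque

def components(img, t):
    """4-connected components of pixels with intensity >= t; returns a list of sets."""
    rows, cols = len(img), len(img[0])
    seen, comps = set(), []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] >= t and (r, c) not in seen:
                comp, q = set(), deque([(r, c)])
                seen.add((r, c))
                while q:
                    y, x = q.popleft()
                    comp.add((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < rows and 0 <= nx < cols and \
                           img[ny][nx] >= t and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def stable_regions(img, thresholds, max_rel_change=0.2):
    """Regions whose area changes by < max_rel_change between adjacent thresholds."""
    stable = []
    for t_lo, t_hi in zip(thresholds, thresholds[1:]):
        for comp in components(img, t_hi):
            parent = next(p for p in components(img, t_lo) if comp <= p)
            if (len(parent) - len(comp)) / len(parent) < max_rel_change:
                stable.append(comp)
    return stable

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 9, 8, 0],
       [0, 0, 0, 0]]          # one bright blob on a dark background
regions = stable_regions(img, thresholds=[5, 7])
```

Unlike SCA, nothing here assumes the bright features are sparse, which is why the approach tolerates the dense APF collections seen in the larger SIM field of view.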

Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins that were imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2 because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes, which have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75x75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region. The logistic regression model yielded an output in terms of probability (0-100%) that tumor was located within each 75x75 µm region. The model performance was tested using a receiver operating characteristic (ROC) curve analysis, which revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and this tissue-type classification model was applied. In a subset of 6 margins (3 negative, 3 positive), it was shown that with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded this threshold, while over 17% of all regions did so in the positive margins. Thus, 8% of regions in negative margins were considered false positives. These false positive regions are likely due to the high density of APFs present in normal tissues, which clearly demonstrates a challenge in implementing this automatic algorithm based on AO staining alone.
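The region-level classification step can be sketched as follows. The two features and every coefficient below are invented for illustration; only the logistic form, the 50% threshold, and the region-fraction idea come from the text.

```python
# Hypothetical sketch of per-region logistic classification on MSER-derived features.
import math

def tumor_probability(apf_density, mean_apf_size, b0=-4.0, b1=6.0, b2=0.02):
    """Logistic regression on two illustrative segmentation features."""
    z = b0 + b1 * apf_density + b2 * mean_apf_size
    return 1.0 / (1.0 + math.exp(-z))

def classify_margin(regions, threshold=0.5):
    """Fraction of localized regions whose tumor probability exceeds the threshold."""
    flagged = [r for r in regions if tumor_probability(*r) > threshold]
    return len(flagged) / len(regions)

# (density, mean size) per region; the last two resemble tumor-like regions.
frac = classify_margin([(0.1, 30), (0.2, 40), (0.9, 60), (0.8, 55)])
```

A whole-margin call then reduces to comparing this flagged fraction against the thresholds reported for negative versus positive margins.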

Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths. Specifically, the SIM system was modified to enable imaging of red fluorescent protein (RFP)-expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO-stained panels confirmed that there was room for improvement in tumor detection, particularly with regard to false positive regions that were negative for RFP. One approach for improving the specificity of the diagnostic model was to investigate a fluorophore more specific to tumor. Specifically, tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. Results indicated that tetracycline staining has promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model, and further investigation is warranted.

In conclusion, this work presents the development of a combination of tools that is capable of automated segmentation and quantification of micro-anatomical images of thick tissue. When compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach to segment dense collections of APFs from wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation to detect microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy combined with a specialized algorithm for delineation and quantification of features is a means for rapid, non-destructive and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios.

Relevance: 100.00%

Abstract:

The Irish Pavilion at the Venice Architecture Biennale 2012 charts a position for Irish architecture in a global culture where the modes of production of architecture are radically altered. Ireland is one of the most globalised countries in the world, yet it has developed a national culture of architecture derived from local place as a material construct. We now have to evolve our understanding in the light of the globalised nature of economic processes and architectural production which is largely dependent on internationally networked flows of products, data, and knowledge. We have just begun to represent this situation to ourselves and others. How should a global architecture be grounded culturally and philosophically? How does it position itself outside of shared national reference points?
heneghan peng architects were selected as participants because they are working across three continents on a range of competition-winning projects. Several of these are on sensitive and/or symbolic sites, including three UNESCO World Heritage sites: the Grand Egyptian Museum in Cairo, the Giant's Causeway Visitor Centre in Northern Ireland, and the new Rhine Bridge near Lorelei.
Our dialogue led us to discuss how the universal languages of projective geometry and number are shared by architects and related professionals. In the work of heneghan peng, the specific embodiment of these geometries is carefully calibrated by the choice of materials and the detailed design of their physical performance on site. The stone façade of the Giant's Causeway Visitor Centre takes precise measure of the properties of the volcanic basalt seams from which it is hewn. The extraction of the stone is the subject of the pavilion wall drawings, which record the cutting of stones to create the façade of the Causeway centre.
We also identified water as an element which is shared across the different sites. Venice is a perfect place to take measure of this element which suggests links to another site – the Nile Valley which was enriched by the annual flooding of the River Nile. An ancient Egyptian rod for measuring the water level of the Nile inspired the design of the Nilometre - a responsive oscillating bench that invites visitors to balance their respective weights. This action embodies the ways of thinking that are evolving to operate in the globalised world, where the autonomous architectural object is dissolving into an expanded field of conceptual rules and systems. The bench constitutes a shifting ground located in the unstable field of Venice. It is about measurement and calibration of the weight of the body in relation to other bodies; in relation to the site of the installation; and in relation to water. The exhibit is located in the Artiglierie section of the Arsenale. Its level is calibrated against the mark of the acqua alta in the adjacent brickwork of the building which embodies a liminal moment in the fluctuating level of the lagoon.
The weights of bodies, the level of water, and changes over time are constant aspects of design across cultures; collectively they constitute a common ground for architecture, a ground shared with other design professionals. The movement of the bench required complex engineering design and active collaboration between the architects, engineers and fabricators. It is a kind of prototype, a physical object produced from digital data that explores the mathematics at play; the see-saw motion invites the observer to become a participant, to give it a test drive. It shows how a simple principle can generate complex effects that are difficult to predict and invites visitors to experiment and play with them.

Relevance: 100.00%

Abstract:

The garment we now recognise as the Aran jumper emerged as an international symbol of Ireland from the twin twentieth century transatlantic flows of migration and tourism. Its power as a heritage object derives from: 1) the myth commonly associated with the object, in which the corpse of a drowned fisherman is identified and claimed by his family due to the stitch patterns of his jumper (Pádraig Ó Síochain 1962; Annette Lynch and Mitchell Strauss 2014); 2) the meanings attached to those stitch patterns, which have been read, for example, as genealogical records, representations of the natural landscape and references to Christian and pre-Christian ‘Celtic’ religion (Heinz Kiewe 1967; Catherine Nash 1996); and 3) booming popular interest in textile heritage on both sides of the Atlantic, fed by the reframing of domestic crafts such as knitting as privileged leisure pursuits (Rachel Maines 2009; Jo Turney 2009). The myth of the drowned fisherman plays into transatlantic migration narratives of loss and reclamation, promising a shared heritage that needs only to be decoded. The idea of the garment’s surface acting as text (or map) situates it within a preliterate idyll of romantic primitivism, while obscuring the circumstances of its manufacture. The contemporary resurgence in home textile production as recreation, mediated through transnational online networks, creates new markets for heritage textile products while attracting critical attention to the processes through which such objects, and mythologies, are produced. The Aran jumper’s associations with kinship, domesticity and national character make it a powerful tool in the promotion of ancestral (or genealogical) tourism, through marketing efforts such as The Gathering 2013. Nash’s (2010; 2014) work demonstrates the potential for such touristic encounters to disrupt and enrich public conceptions of heritage, belonging and relatedness. 
While the Aran jumper has been used to commodify a simplistic sense of mutuality between Ireland and north America, it carries complex transatlantic messages in both directions.

Relevance: 100.00%

Abstract:

Face recognition from images or video footage requires a certain level of recorded image quality. This paper derives acceptable bitrates (relating to levels of compression and, consequently, quality) for footage containing human faces, using an industry implementation of the H.264/MPEG-4 AVC standard and the Closed-Circuit Television (CCTV) recording systems on London buses. The London buses application is used as a case study for setting up a methodology and implementing suitable data analysis for face recognition from recorded footage that has been degraded by compression. The majority of CCTV recorders on buses use a proprietary format based on the H.264/MPEG-4 AVC video coding standard, exploiting both spatial and temporal redundancy. Low bitrates are favored in the CCTV industry for saving storage and transmission bandwidth, but they compromise the usefulness of the recorded imagery. In this context, usefulness is determined by whether enough facial information remains in the compressed image to allow a specialist to recognize a person. The investigation includes four steps: (1) development of a video dataset representative of typical CCTV bus scenarios; (2) selection and grouping of video scenes based on local (facial) and global (entire scene) content properties; (3) psychophysical investigations to identify the key scenes most affected by compression, using an industry implementation of H.264/MPEG-4 AVC; and (4) testing of CCTV recording systems on buses with the key scenes, followed by further psychophysical investigations. The results showed a dependency on scene content properties: very dark scenes and scenes with high levels of spatial-temporal busyness were the most challenging to compress, requiring higher bitrates to maintain useful information.

Relevance: 100.00%

Abstract:

In recent years, low-cost access to tools for producing, editing and distributing audiovisual content has contributed to the exponential growth in the daily production of such content. In this paradigm of multimedia over-abundance, a large percentage of video sequences contain explicit material, and stricter control is needed so that it is not easily accessible to minors. The concept of explicit content can be characterized in different ways; the work described in this document focuses on the automatic detection of female nudity in video sequences. This process of automatic detection and classification of adult material can be an important tool in the management of a television channel: hundreds of hours of material may be received daily, making a manual quality-control process impractical. The solution created in the context of this dissertation was studied and developed around a specific broadcasting product, the mxfSPEEDRAIL F1000, a solution from the company MOG Technologies. The main objective of the project is the development of a C++ library, accessible during the ingest process, that uses computer-vision analysis to detect and flag in the signal's metadata which frames potentially contain explicit content. The solution applies a set of state-of-the-art techniques adapted to the problem at hand, including algorithms for skin segmentation and object detection in images. Finally, the solution is critically analysed so that future developments can improve both its resource consumption during analysis and its success rate.
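A rule-based skin-segmentation step of the kind such pipelines rely on can be sketched as below. The RGB thresholds are a commonly cited heuristic from the literature, not the dissertation's actual algorithm, and the pixel values are illustrative.

```python
# Classic uniform-daylight RGB skin rule (a widely cited heuristic, shown here
# only to illustrate the skin-segmentation stage; not the dissertation's code).

def is_skin(r, g, b):
    """True if an (r, g, b) pixel falls inside the heuristic skin-tone region."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_mask(pixels):
    """Boolean mask over an iterable of (r, g, b) pixels."""
    return [is_skin(*p) for p in pixels]

mask = skin_mask([(220, 170, 140), (30, 60, 120)])  # skin-toned vs. blue pixel
```

In a real pipeline the resulting mask would feed the object-detection stage and, on a positive result, a flag written into the frame's metadata during ingest.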

Relevance: 100.00%

Abstract:

Play, a phenomenon difficult to define, manifests itself in literature in different ways. The present work considers two of them: constrained writing, as practised by the Oulipo, and writing of the imaginary, in particular French Fantasy novels. The first part of this study therefore presents, in essay form, the origins and aims of the two groups of writers, highlighting the similarities that can be drawn between them despite their apparent differences. While the Oulipo seeks constraints capable of generating an infinite number of texts and uses them to explore language, Fantasy sets out to create imaginary worlds, generally drawing on Tolkien and on role-playing games. In both cases, play proves to be a powerful engine of creation: the narrative calls for a reader-explorer and gives rise to an infinity of possible worlds. Nevertheless, differences remain in their critical reception, their relations to games and to extraliterary domains, and their aims. In light of this, I propose to combine the two styles of writing, drawing on Jacques Roubaud's Hortense cycle (structured by the sestina) and Mathieu Gaborit's Chroniques des Crépusculaires (a flagship of "pure" fantasy). The project aims to bridge the gap that still separates the two groups. The second part of my work is thus a first attempt to unite the two writing techniques (constrained and imaginary). Six heroes (three adventurers and three mercenaries) set out in search of a magical object stolen from the Queen of the Desert and capable of overturning the order of the world. The narrative, divided into six chapters, recounts the group's adventures up to their encounter with the Queen's sworn enemy, a powerful dark-elf sorcerer.
Each chapter comprises six smaller sections in which six characteristic elements of role-playing games are permuted according to the movement of the sestina: (1) a description by the GM (Game Master); (2) a combat; (3) a riddle to solve or a trap to disarm; (4) a discussion among the players about their avatars; (5) the acquisition of a new object; (6) an interaction with an NPC (Non-Player Character). Throughout the text, references to Mathieu Gaborit's Chroniques des Crépusculaires appear, also following a sestina order. Other allusions, to Tolkien, Queneau, Perec and Roubaud, enrich the novel.
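The sestina's permutation (the retrogradatio cruciata) that reorders the six role-playing elements from chapter to chapter can be written out directly; this sketch only illustrates the combinatorics described above.

```python
# The sestina permutation: each stanza (here, chapter) reorders the six
# elements as 6, 1, 5, 2, 4, 3 of the previous order. Applying it six times
# cycles through six distinct arrangements and returns to the start.

def next_stanza(order):
    """Retrogradatio cruciata: (1, 2, 3, 4, 5, 6) -> (6, 1, 5, 2, 4, 3)."""
    return [order[5], order[0], order[4], order[1], order[3], order[2]]

orders = [[1, 2, 3, 4, 5, 6]]    # element order in chapter 1
for _ in range(5):
    orders.append(next_stanza(orders[-1]))
# six chapters, six distinct arrangements; a seventh application restarts the cycle
```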

Relevance: 100.00%

Abstract:

In recent years there has been an apparent shift in research from content-based image retrieval (CBIR) to automatic image annotation, in order to bridge the gap between low-level features and high-level semantics of images. Automatic Image Annotation (AIA) techniques extract high-level semantic concepts from images by machine learning. Many AIA techniques use feature analysis as the first step to identify the objects in the image; however, high-dimensional image features degrade system performance. This paper describes and evaluates an automatic image annotation framework that uses SURF descriptors to select the right number, and the right kind, of features for annotation. The proposed framework uses a hybrid approach in which k-means clustering is used in the training phase and fuzzy K-NN classification in the annotation phase. The performance of the system is evaluated using standard metrics.
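The fuzzy K-NN annotation phase can be sketched as follows; the feature vectors, labels, and fuzziness parameter are illustrative stand-ins for the SURF-based representations the paper actually uses.

```python
# Fuzzy k-NN sketch: soft label memberships weighted by inverse distance to the
# k nearest training images (Keller-style weighting; values here are toy data).
import math

def fuzzy_knn(train, test_vec, k=3, m=2):
    """Return {label: membership}; memberships over the k neighbours sum to 1."""
    dists = sorted((math.dist(v, test_vec), label) for v, label in train)[:k]
    weights = {}
    for d, label in dists:
        w = 1.0 / (d ** (2 / (m - 1)) + 1e-9)      # closer neighbours count more
        weights[label] = weights.get(label, 0.0) + w
    total = sum(weights.values())
    return {label: w / total for label, w in weights.items()}

# Toy "images": 2-D feature vectors standing in for SURF-derived histograms.
train = [([0.90, 0.10], "tiger"), ([0.85, 0.20], "tiger"), ([0.10, 0.90], "beach")]
memberships = fuzzy_knn(train, [0.80, 0.15])
best = max(memberships, key=memberships.get)
```

The soft memberships, rather than a hard vote, are what let the annotation stage attach several ranked concepts to one image.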

Relevance: 100.00%

Abstract:

This work is a historical and critical study of the legal mechanisms for protecting the lands of the displaced population, reviewing the public policies and legislation that have had a bearing on the matter. It begins with a brief study of the drivers of forced displacement, from which it concludes that land has been a permanent and decisive factor in both the armed conflict and forced displacement, which justifies approaching forced displacement through land and the need to protect it. It then develops a critical analysis of the measures the State has adopted to protect displaced persons in their lands, highlighting their historical insufficiency, and finally argues for the need to re-evaluate state policy on the matter.

Relevance: 100.00%

Abstract:

This work aims to provide a theoretical review from the beginnings of General Systems Theory through the development of System Dynamics, covering what these two theories propose as well as the many neighbouring theories to which they are closely connected. From the framework put forward by the biologist Ludwig von Bertalanffy, several currents of thought were derived, all centred on the systems approach and systems thinking; from these derives the one most important for this work, System Dynamics, founded by the engineer Jay Wright Forrester. Many of these other theories aim to generate strategies for organizational change through different methodologies, all with the goal of achieving optimal performance in the processes carried out in organizations. This project arises from the need for a broad theoretical understanding of everything that the propositions of System Dynamics entail, hand in hand with systems thinking and organizational modelling; we will see how the latter is developed from the System Dynamics approach through different models and, today, through software tools designed specifically for System Dynamics modelling.

Relevance: 100.00%

Abstract:

This paper introduces perspex algebra, which is being developed as a common representation of geometrical knowledge. A perspex can currently be interpreted in one of four ways. First, the algebraic perspex is a generalization of matrices; it provides the most general representation for all of the interpretations of a perspex. The algebraic perspex can be used to describe arbitrary sets of coordinates. The remaining three interpretations of the perspex are all related to square matrices and operate in a Euclidean model of projective space-time, called perspex space. Perspex space differs from the usual Euclidean model of projective space in that it contains the point at nullity. It is argued that the point at nullity is necessary for a consistent account of perspective in top-down vision. Second, the geometric perspex is a simplex in perspex space. It can be used as a primitive building block for shapes, or as a way of recording landmarks on shapes. Third, the transformational perspex describes linear transformations in perspex space that provide the affine and perspective transformations in space-time. It can be used to match a prototype shape to an image, even in so-called 'accidental' views where the depth of an object disappears from view, or an object stays in the same place across time. Fourth, the parametric perspex describes the geometric and transformational perspexes in terms of parameters that are related to everyday English descriptions. The parametric perspex can be used to obtain both continuous and categorical perception of objects. The paper ends with a discussion of issues related to using a perspex to describe logic.

Relevance: 100.00%

Abstract:

Based on a series of collages made from images of mega yachts, the Future Monument looks at the possibility of taking late capitalism more seriously as an ideology than it takes itself. The project asks whether the private display of wealth and power represented by the yacht can be appropriated for a new language of public sculpture. The choreographed live performance of the construction of the large-scale monument was scripted to a proposed capitalist manifesto and took place in a public square in Herzliya, Israel. It aimed to articulate the ideology latent in capitalism's claims to be a neutral manifestation of human nature. The Future Monument project was developed through reading seminars at Goldsmiths College, as part of a research strand on irony and overidentification headed by Pil and Galia Kollectiv within the Political Currency of Art research group. This research has so far produced a series of silk-screen collage prints, a sculpture commissioned by Essex Council and a live performance commissioned by the Herzliya Biennale. The project is ongoing, with future outputs planned including a curated exhibition and conference in 2012, in collaboration with Matthew Poole, Programme Director of the Centre for Curatorial Studies at the University of Essex.