835 results for image-based rendering
Abstract:
Natural History filmmaking has a long history but the generic boundaries between it and environmental and conservation filmmaking are blurred. Nature, environment and animal imagery has been a mainstay of television, campaigning organisations and conservation bodies from Greenpeace to the Sierra Club, with vibrant images being used effectively on posters, leaflets and postcards, and in coffee table books, media releases, short films and viral emails to educate and inform the general public. However, critics suggest that wildlife film and photography frequently convey a false image of the state of the world’s flora and fauna. The environmental educator David Orr once remarked that all education is environmental education, and it is possible to see all image-based communication in the same way. The Media, Animal Conservation and Environmental Education has contributions from filmmakers, photographers, researchers and academics from across the globe. It explores the various ways in which film, television and video are, and can be, used by conservationists and educators to encourage both a greater awareness of environmental and conservation issues, and practical action designed to help endangered species. This book is based on a special issue of the journal Environmental Education Research.
Abstract:
The article describes research on a method for person recognition from face images based on Gabor wavelets. The scales of the Gabor functions are determined at which the maximum recognition rate is achieved when searching for a person in a database, and the false-alarm error rate is minimized when solving an access control task. The research has shown that it is possible to improve the recognition system's performance in these two modes while reducing the volume of data used.
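The article does not give its implementation details, but the core idea of Gabor-based face features can be sketched as follows: build a small bank of Gabor kernels at several scales and orientations, filter the image with each, and pool the response magnitudes into a feature vector. The kernel sizes, scales and pooling choice below are illustrative assumptions, not the article's parameters.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor kernel: a Gaussian-windowed cosine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_features(image, wavelengths=(4, 8, 16), n_orient=4):
    """Mean response magnitude per (scale, orientation): a toy face descriptor."""
    feats = []
    for lam in wavelengths:                        # the scales under study
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = gabor_kernel(15, lam, theta, sigma=lam / 2)
            # valid-mode correlation via sliding windows (small images only)
            windows = sliding_window_view(image, kern.shape)
            resp = np.einsum('ijkl,kl->ij', windows, kern)
            feats.append(np.abs(resp).mean())
    return np.array(feats)

img = np.random.default_rng(0).random((32, 32))
f = gabor_features(img)
print(f.shape)  # (12,) -> 3 scales x 4 orientations
```

Choosing which wavelengths to keep (and dropping the rest) is exactly the data-reduction trade-off the article studies: fewer scales means a smaller feature vector at some cost in recognition rate.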
Abstract:
A tenet of modern radiotherapy (RT) is to identify the treatment target accurately; the high-dose treatment volume is then expanded into the surrounding tissues to create the clinical and planning target volumes. Respiratory motion can induce errors in target volume delineation and dose delivery in radiation therapy for thoracic and abdominal cancers. Historically, radiotherapy treatment planning in the thoracic and abdominal regions has used 2D or 3D images acquired under uncoached free-breathing conditions, irrespective of whether the target tumor is moving. Once the gross target volume has been delineated, standard margins are commonly added to account for motion. However, these generic margins do not usually take the target motion trajectory into consideration, and may therefore under- or over-estimate motion, with the subsequent risk of missing the target during treatment or irradiating excessive normal tissue. This introduces systematic errors into treatment planning and delivery. In clinical practice, four-dimensional (4D) imaging has become popular for RT motion management. It provides temporal information about tumor and organ-at-risk motion, and it permits patient-specific treatment planning. The most common contemporary imaging technique for identifying tumor motion is 4D computed tomography (4D-CT). However, CT has poor soft-tissue contrast and entails an ionizing radiation hazard. In the last decade, 4D magnetic resonance imaging (4D-MRI) has become an emerging tool for imaging respiratory motion, especially in the abdomen, because of its superior soft-tissue contrast. Recently, several 4D-MRI techniques have been proposed, including prospective and retrospective approaches. Nevertheless, 4D-MRI techniques face several challenges: 1) suboptimal and inconsistent tumor contrast with large inter-patient variation; 2) relatively low temporal-spatial resolution; and 3) the lack of a reliable respiratory surrogate.
In this research work, novel 4D-MRI techniques were investigated that apply MRI weightings not used in existing 4D-MRI techniques, including T2/T1-weighted, T2-weighted and diffusion-weighted MRI. A result-driven retrospective phase sorting method was proposed and applied both to image space and to the k-space of MR imaging. Novel image-based respiratory surrogates were developed, improved and evaluated.
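The abstract does not spell out the sorting algorithm, but generic retrospective phase sorting can be sketched as: segment a 1-D respiratory surrogate into breathing cycles at its end-inhale peaks, assign each acquired frame a phase within its cycle, and bin frames by phase. The peak detector and the per-bin averaging below are simplifying assumptions for illustration, not the thesis's result-driven method.

```python
import numpy as np

def retrospective_phase_sort(frames, surrogate, n_bins=8):
    """Sort 2D frames into respiratory-phase bins (retrospective 4D-MRI sketch).

    frames    : list of 2D arrays acquired over several breathing cycles
    surrogate : 1-D respiratory signal sampled at the frame times
    """
    s = np.asarray(surrogate, dtype=float)
    # locate end-inhale peaks to delimit breathing cycles
    peaks = [i for i in range(1, len(s) - 1) if s[i] > s[i - 1] and s[i] >= s[i + 1]]
    bins = [[] for _ in range(n_bins)]
    for start, end in zip(peaks[:-1], peaks[1:]):
        for i in range(start, end):
            phase = (i - start) / (end - start)        # 0..1 within the cycle
            bins[min(int(phase * n_bins), n_bins - 1)].append(frames[i])
    # one representative (mean) frame per phase bin
    return [np.mean(b, axis=0) if b else None for b in bins]

t = np.arange(200)
surrogate = np.sin(2 * np.pi * t / 40)                 # ~5 breathing cycles
frames = [np.full((4, 4), v) for v in surrogate]
phases = retrospective_phase_sort(frames, surrogate, n_bins=4)
print(sum(p is not None for p in phases))              # 4: every bin filled
```

An image-based respiratory surrogate, as investigated in the thesis, would replace the external signal here with a quantity extracted from the frames themselves (e.g. a body-area or diaphragm-position measure).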
Abstract:
This work explores the development of MemTri, a memory forensics triage tool that can assess the likelihood of criminal activity in a memory image based on evidence data artefacts generated by several applications. Fictitious illegal-suspect-activity scenarios were performed on virtual machines to generate 60 test memory images as input for MemTri. Four categories of applications (Internet browsers, instant messengers, FTP clients and document processors) are examined for data artefacts located through the use of regular expressions. The identified data artefacts are then analysed using a Bayesian network to assess the likelihood that a seized memory image contains evidence of illegal activity. MemTri is currently under development, and this paper introduces only the basic concept and the components on which the application is built. A complete description of MemTri, together with extensive experimental results, is expected to be published in the first half of 2017.
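The pipeline described (regex artefact search feeding a Bayesian likelihood assessment) can be sketched as below. The patterns, likelihood values and the naive-Bayes odds update are invented placeholders, not MemTri's actual artefact signatures or network structure.

```python
import re

# Hypothetical artefact patterns per application category (illustrative only)
PATTERNS = {
    "browser":   re.compile(rb"https?://[^\s\"']+"),
    "messenger": re.compile(rb"msg_id=\d+"),
    "ftp":       re.compile(rb"(USER|PASS)\s+\S+"),
}

# Assumed (P(artefact | illegal), P(artefact | benign)) per category
LIKELIHOOD = {"browser": (0.9, 0.6), "messenger": (0.7, 0.3), "ftp": (0.5, 0.1)}

def triage(memory_image: bytes, prior: float = 0.5) -> float:
    """Naive-Bayes update of P(illegal activity) from artefact hits in a dump."""
    odds = prior / (1 - prior)
    for category, pattern in PATTERNS.items():
        hit = pattern.search(memory_image) is not None
        p_ill, p_ben = LIKELIHOOD[category]
        odds *= (p_ill / p_ben) if hit else ((1 - p_ill) / (1 - p_ben))
    return odds / (1 + odds)

dump = b"GET https://example.org/page USER alice\x00msg_id=42"
print(round(triage(dump), 3))  # 0.946: all three artefact types present
```

A full Bayesian network, as in the paper, would additionally model dependencies between artefact types rather than treating each hit as independent evidence.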
Abstract:
The present thesis is a study of movie review entertainment (MRE), a contemporary Internet-based genre of texts. MRE are movie reviews in video form which are published online, usually as episodes of an MRE web show. Characteristic of MRE is the combination of humor and honest opinions in varying degrees, as well as the use of subject materials, i.e. clips of the movies, as part of the review. The study approached MRE from a linguistic perspective, aiming to discover 1) whether MRE is primarily text- or image-based and what the primary functions of the modes are, 2) how a reviewer linguistically combines subject footage with her/his commentary, 3) whether there is any internal variation in MRE regarding the aforementioned questions, and 4) how suitable the selected models and theories are for the analysis of this type of contemporary multimodal data. To answer these questions, the multimodal system of image-text relations by Martinec and Salway (2005), in combination with the categories of cohesion by Halliday and Hasan (1976), was applied to four full MRE videos which were transcribed in their entirety for the study. The primary data represent varying types of MRE: a current movie review, an analytic essay, a riff review, and a humorous essay. The results demonstrated that image vs. text prioritization can vary between reviews and also within a review. The current movie review and the two essays were primarily commentary-focused, whereas the riff review was significantly more dependent on the use of imagery, as the clips are a major source of humor, which is a prominent value in that type of review. In addition to humor, clips are used to exemplify the commentary. A reviewer also relates new information to the imagery, and uses the two modes together to present the information in a review. Linguistically, the most frequent case was that the reviewer names participants and processes lexically in the commentary.
Grammatical relations (reference items such as pronouns and adverbs and conjunctive items in the riff review) were also encountered. There was internal variation to a considerable degree. The methods chosen were deemed appropriate to answer the research questions. Further study could go beyond linguistics to include, for instance, genre and media studies.
Abstract:
Background Plant-soil interaction is central to human food production and ecosystem function. Thus, it is essential not only to understand these interactions, but also to develop predictive mathematical models which can be used to assess how climate and soil management practices will affect them. Scope In this paper we review current developments in structural and chemical imaging of rhizosphere processes within the context of multiscale mathematical image-based modeling. We outline areas that need more research and areas which would benefit from more detailed understanding. Conclusions We conclude that the combination of structural and chemical imaging with modeling is an incredibly powerful tool which is fundamental for understanding how plant roots interact with soil. We emphasize the need to attract more researchers to this area, which is so fertile for future discoveries. Finally, model building must go hand in hand with experiments. In particular, there is a real need to integrate rhizosphere structural and chemical imaging with modeling for a better understanding of rhizosphere processes, leading to models which explicitly account for pore-scale processes.
Abstract:
Myocardial fibrosis detected via delayed-enhanced magnetic resonance imaging (MRI) has been shown to be a strong indicator for ventricular tachycardia (VT) inducibility. However, little is known regarding how inducibility is affected by the details of the fibrosis extent, morphology, and border zone configuration. The objective of this article is to systematically study the arrhythmogenic effects of fibrosis geometry and extent, specifically on VT inducibility and maintenance. We present a set of methods for constructing patient-specific computational models of human ventricles using in vivo MRI data for patients suffering from hypertension, hypercholesterolemia, and chronic myocardial infarction. Additional synthesized models with morphologically varied extents of fibrosis and gray zone (GZ) distribution were derived to study the alterations in the arrhythmia induction and reentry patterns. Detailed electrophysiological simulations demonstrated that (1) VT morphology was highly dependent on the extent of fibrosis, which acts as a structural substrate, (2) reentry tended to be anchored to the fibrosis edges and showed transmural conduction of activations through narrow channels formed within fibrosis, and (3) increasing the extent of GZ within fibrosis tended to destabilize the structural reentry sites and aggravate the VT as compared to fibrotic regions of the same size and shape but with lower or no GZ. The approach and findings represent a significant step toward patient-specific cardiac modeling as a reliable tool for VT prediction and management of the patient. Sensitivities to approximation nuances in the modeling of structural pathology by image-based reconstruction techniques are also implicated.
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are treated like natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among the possible applications within the area of Big Code, the work presented in this research thesis focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the commonest bugs and errors at different code granularity levels (file and method levels). The exploited data and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with other related works are discussed.
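As a baseline for the text-based view of the PLI task, a language identifier can be as simple as scoring a snippet against per-language keyword profiles; learned models in the thesis generalize this idea with richer features. The profiles below are toy assumptions for illustration, not the thesis's feature sets.

```python
from collections import Counter

# Toy keyword profiles for a few languages (illustrative, not from the thesis)
PROFILES = {
    "python": {"def", "import", "self", "print", "lambda"},
    "java":   {"public", "class", "void", "static", "new"},
    "c":      {"#include", "int", "printf", "return", "struct"},
}

def identify_language(source: str) -> str:
    """Guess the language of a snippet by counting profile-keyword hits."""
    tokens = Counter(source.split())
    scores = {lang: sum(tokens[kw] for kw in kws) for lang, kws in PROFILES.items()}
    return max(scores, key=scores.get)

print(identify_language("def main():\n    import sys"))  # python
```

The image-based approach mentioned in the abstract instead renders the source as a picture and classifies it visually; both views can then be compared on the same archive-scale data.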
Abstract:
Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities that are used for the visualization of the surgical action. In the so-called CT-based or 'surgeon-defined anatomy'-based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from preoperatively acquired tomographic data or from intraoperatively digitized anatomical landmarks, photorealistic rendering of the surgical action has been found to greatly improve the usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach, and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, in an attempt to achieve an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies, by ensuring that we only densely sample regions of the image where adaptive reconstruction cannot properly resolve the noise.
In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
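The iterate-sample-reconstruct loop described above can be sketched with a toy renderer where each "sample" is the true pixel value plus Gaussian noise, reconstruction is a plain per-pixel mean, and the extra sample budget is distributed proportionally to the residual rMSE. The noise model and the mean reconstruction are simplifying assumptions; the thesis's actual reconstructions are filterbank- and feature-based.

```python
import numpy as np

def rmse_map(estimate, reference, eps=1e-3):
    """Per-pixel relative MSE (rMSE), the error metric minimized greedily."""
    return (estimate - reference) ** 2 / (reference ** 2 + eps)

def adaptive_render(reference, n_iters=4, samples_per_iter=64, noise=0.5, seed=0):
    """Toy sampling/reconstruction loop: spend extra samples where rMSE is high."""
    rng = np.random.default_rng(seed)
    counts = np.full(reference.shape, float(samples_per_iter))
    # simulate the sum of `counts` noisy samples per pixel
    sums = reference * counts + rng.normal(0, noise, reference.shape) * np.sqrt(counts)
    for _ in range(n_iters):
        estimate = sums / counts                    # "reconstruction" = mean
        err = rmse_map(estimate, reference)
        budget = samples_per_iter * err.size
        extra = np.maximum(budget * err / err.sum(), 1.0)  # proportional to rMSE
        sums += reference * extra + rng.normal(0, noise, reference.shape) * np.sqrt(extra)
        counts += extra
    return sums / counts, counts

ref = np.linspace(0.1, 1.0, 16).reshape(4, 4)
img, counts = adaptive_render(ref)
print(img.shape, counts.min() >= 68)  # every pixel got extra samples each iteration
```

The key property of the loop, visible even in this sketch, is that pixel sample counts become non-uniform: pixels whose reconstruction already resolves the noise receive only the minimum top-up.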
Abstract:
Given the importance of color processing in computer vision and computer graphics, estimating and rendering illumination spectral reflectance of image scenes is important to advance the capability of a large class of applications such as scene reconstruction, rendering, surface segmentation, object recognition, and reflectance estimation. Consequently, this dissertation proposes effective methods for reflection components separation and rendering in single scene images. Based on the dichromatic reflectance model, a novel decomposition technique, named the Mean-Shift Decomposition (MSD) method, is introduced to separate the specular from diffuse reflectance components. This technique provides a direct access to surface shape information through diffuse shading pixel isolation. More importantly, this process does not require any local color segmentation process, which differs from the traditional methods that operate by aggregating color information along each image plane.
Exploiting the merits of the MSD method, a scene illumination rendering technique is designed to estimate the relative contributing specular reflectance attributes of a scene image. The image feature subset targeted provides a direct access to the surface illumination information, while a newly introduced efficient rendering method reshapes the dynamic range distribution of the specular reflectance components over each image color channel. This image enhancement technique renders the scene illumination reflection effectively without altering the scene's surface diffuse attributes, contributing to realistic rendering effects.
As an ancillary contribution, an effective color constancy algorithm based on the dichromatic reflectance model was also developed. This algorithm selects image highlights in order to extract the prominent surface reflectance that reproduces the exact illumination chromaticity. This evaluation is presented using a novel voting scheme technique based on histogram analysis.
In each of the three main contributions, empirical evaluations were performed on synthetic and real-world image scenes taken from three different color image datasets. The experimental results show over 90% accuracy in illumination estimation, contributing to near real-world illumination rendering effects.
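For context, the dichromatic model writes each pixel as a chromatic diffuse (body) term plus a specular term sharing the illuminant's color. A crude baseline split (NOT the dissertation's Mean-Shift Decomposition) takes the per-pixel minimum of the illuminant-normalized channels as the achromatic specular layer:

```python
import numpy as np

def separate_reflection(rgb, illum=(1.0, 1.0, 1.0)):
    """Baseline dichromatic split for a known illuminant color.

    Normalizes by the illuminant, then treats the per-pixel channel
    minimum as the achromatic specular layer and the remainder as the
    chromatic diffuse (body) reflection.
    """
    illum = np.asarray(illum, dtype=float)
    img = np.asarray(rgb, dtype=float) / illum
    specular = img.min(axis=-1)                   # achromatic highlight layer
    diffuse = img - specular[..., None]           # chromatic body reflection
    return diffuse * illum, specular

pixels = np.array([[[0.9, 0.2, 0.1]],             # matte red surface
                   [[0.95, 0.8, 0.75]]])          # red surface with a highlight
d, s = separate_reflection(pixels)
print(s.ravel())  # the highlight pixel carries the larger specular term
```

The MSD method replaces this pointwise heuristic with a decomposition that isolates diffuse shading pixels without local color segmentation, which is what makes it robust on textured scenes.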
Abstract:
The present work reports the fabrication of porous alumina structures and the study of their quantitative structural characteristics based on mathematical morphology analysis of SEM images. The algorithm used in this work was implemented in MATLAB 6.2. Using the algorithm, it was possible to obtain the distribution of maximum, minimum and average pore radii in the porous alumina structures. Additionally, by calculating the area occupied by the pores, it was possible to obtain the porosity of the structures. The quantitative results could be related to the characteristics of the fabrication process, proving reliable and promising for controlling the pore formation process. This technique could therefore provide a more accurate determination of pore sizes and pore distribution. (C) 2008 Elsevier Ltd. All rights reserved.
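The paper's analysis was implemented in MATLAB; the same measurements (pore radii distribution and porosity from a thresholded SEM image) can be sketched in Python with a simple 4-connected flood fill. The equivalent-circle radius definition below is an assumption, since the abstract does not state how radii are derived from pore areas.

```python
import numpy as np

def pore_statistics(binary, pixel_size=1.0):
    """Label pores (4-connected flood fill) and report radii and porosity.

    binary: 2D bool array, True = pore pixel (e.g. a thresholded SEM image).
    Returns (min_r, max_r, mean_r, porosity) using equivalent-circle radii.
    """
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    areas = []
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                stack, area = [(i, j)], 0
                seen[i, j] = True
                while stack:                      # flood fill one pore
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    radii = [pixel_size * np.sqrt(a / np.pi) for a in areas]  # equivalent circles
    porosity = binary.sum() / binary.size         # pore area / total area
    return min(radii), max(radii), float(np.mean(radii)), porosity

img = np.zeros((8, 8), dtype=bool)
img[1:3, 1:3] = True          # a 4-pixel pore
img[5:8, 5:8] = True          # a 9-pixel pore
print(pore_statistics(img))
```

Scaling `pixel_size` by the SEM magnification converts the pixel-based radii into physical units, which is what allows the statistics to be related to the fabrication parameters.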