36 results for visual manual

at Boston University Digital Common


Relevance: 20.00%

Abstract:

Boston University Theology Library

Relevance: 20.00%

Abstract:

http://www.archive.org/details/manualofmissions014078mbp

Relevance: 20.00%

Abstract:

DSpace is an open source software platform that enables organizations to:

- Capture and describe digital material using a submission workflow module, or a variety of programmatic ingest options
- Distribute an organization's digital assets over the web through a search and retrieval system
- Preserve digital assets over the long term

This system documentation includes a functional overview of the system, which is a good introduction to the capabilities of the system, and should be readable by nontechnical personnel. Everyone should read this section first because it introduces some terminology used throughout the rest of the documentation. For people actually running a DSpace service, there is an installation guide, and sections on configuration and the directory structure. Note that as of DSpace 1.2, the administration user interface guide is now on-line help available from within the DSpace system. Finally, for those interested in the details of how DSpace works, and those potentially interested in modifying the code for their own purposes, there is a detailed architecture and design section.

Relevance: 20.00%

Abstract:

(adapted from the DSpace Procedures Manual developed by Kalamazoo College Digital Archive)

Relevance: 20.00%

Abstract:

Quadsim is an intermediate code simulator. It allows you to "run" programs that your compiler generates in intermediate code format. Its user interface is similar to that of most debuggers: you can step through your program instruction by instruction, set breakpoints, examine variable values, and so on. The intermediate code format used by Quadsim is that described in [Aho 86]. If your compiler generates intermediate code in this format, you will be able to take intermediate-code files generated by your compiler, load them into the simulator, and watch them "run." You are provided with functions that hide the internal representation of intermediate code; you can use these functions within your compiler to generate intermediate-code files that can be read by the simulator. Quadsim was inspired and greatly influenced by [Aho 86]. The material in chapter 8 (Intermediate Code Generation) of [Aho 86] should be considered background material for users of Quadsim.
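
A minimal sketch of the quadruple ("three-address code") representation described in [Aho 86], together with a tiny single-step evaluator of the kind a simulator performs; the field names and the evaluator below are illustrative assumptions, not Quadsim's actual file format or API.

    # Quadruples in the [Aho 86] style: (op, arg1, arg2, result).
    from collections import namedtuple

    Quad = namedtuple("Quad", ["op", "arg1", "arg2", "result"])

    # Intermediate code for: a = b * -c + b * -c
    quads = [
        Quad("uminus", "c", None, "t1"),
        Quad("*", "b", "t1", "t2"),
        Quad("uminus", "c", None, "t3"),
        Quad("*", "b", "t3", "t4"),
        Quad("+", "t2", "t4", "t5"),
        Quad("=", "t5", None, "a"),
    ]

    def step(env, q):
        """Execute a single quadruple, as a debugger-style single-step would."""
        val = env.__getitem__
        if q.op == "uminus":
            env[q.result] = -val(q.arg1)
        elif q.op == "*":
            env[q.result] = val(q.arg1) * val(q.arg2)
        elif q.op == "+":
            env[q.result] = val(q.arg1) + val(q.arg2)
        elif q.op == "=":
            env[q.result] = val(q.arg1)

    env = {"b": 2, "c": 3}
    for q in quads:
        step(env, q)
    print(env["a"])  # -12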

Relevance: 20.00%

Abstract:

An iterative method for reconstructing a 3D polygonal mesh and color texture map from multiple views of an object is presented. In each iteration, the method first estimates a texture map given the current shape estimate. The texture map and its associated residual error image are obtained via maximum a posteriori estimation and reprojection of the multiple views into texture space. Next, the surface shape is adjusted to minimize residual error in texture space. The surface is deformed towards a photometrically consistent solution via a series of 1D epipolar searches at randomly selected surface points. The texture space formulation has improved computational complexity over standard image-based error approaches, and allows computation of the reprojection error and uncertainty for any point on the surface. Moreover, shape adjustments can be constrained such that the recovered model's silhouette matches those of the input images. Experiments with real-world imagery demonstrate the validity of the approach.
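
A schematic sketch of the alternation the abstract describes, under loudly stated assumptions: the functions below are stand-ins (not the authors' estimators), the shape is a placeholder parameter vector, and a random local perturbation stands in for the 1D epipolar searches.

    import numpy as np

    rng = np.random.default_rng(0)

    def estimate_texture(shape, views):
        # Stand-in for MAP texture estimation: reproject every view into
        # texture space and fuse them (here, a simple average of fakes).
        return np.mean([np.cos(shape + i) for i in range(len(views))], axis=0)

    def residual(shape, texture, views):
        # Stand-in for the summed reprojection error in texture space.
        return sum(np.sum((np.cos(shape + i) - texture) ** 2)
                   for i in range(len(views)))

    views = [0, 1, 2]                 # placeholders for calibrated input views
    shape = rng.normal(size=100)      # placeholder surface parameters

    for iteration in range(20):
        texture = estimate_texture(shape, views)   # step 1: texture given shape
        for _ in range(50):                        # step 2: local shape updates
            i = rng.integers(len(shape))           # randomly selected surface point
            trial = shape.copy()
            trial[i] += rng.normal(scale=0.1)      # stand-in for a 1D epipolar search
            if residual(trial, texture, views) < residual(shape, texture, views):
                shape = trial                      # keep photometrically better shape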

Relevance: 20.00%

Abstract:

Some WWW image engines allow the user to form a query in terms of text keywords. To build the image index, keywords are extracted heuristically from HTML documents containing each image, and/or from the image URL and file headers. Unfortunately, text-based image engines have merely retrofitted standard SQL database query methods, and it is difficult to include image cues within such a framework. On the other hand, visual statistics (e.g., color histograms) are often insufficient for helping users find desired images in a vast WWW index. By truly unifying textual and visual statistics, one would expect to get better results than either used separately. In this paper, we propose an approach that allows the combination of visual statistics with textual statistics in the vector space representation commonly used in query by image content systems. Text statistics are captured in vector form using latent semantic indexing (LSI). The LSI index for an HTML document is then associated with each of the images contained therein. Visual statistics (e.g., color, orientedness) are also computed for each image. The LSI and visual statistic vectors are then combined into a single index vector that can be used for content-based search of the resulting image database. By using an integrated approach, we are able to take advantage of possible statistical couplings between the topic of the document (latent semantic content) and the contents of images (visual statistics). This allows improved performance in conducting content-based search. This approach has been implemented in a WWW image search engine prototype.
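
A minimal sketch of such a combined index, assuming a toy term-document matrix, a 3-bin color histogram, and an even text/visual weighting; the paper's actual dimensions, features, and weighting scheme are not reproduced here.

    import numpy as np

    # Toy term-document matrix: rows = terms, columns = HTML documents.
    A = np.array([[2., 0., 1.],
                  [0., 3., 1.],
                  [1., 1., 0.],
                  [0., 2., 2.]])

    k = 2                                    # number of latent dimensions
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    lsi_docs = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim LSI vector per document

    def unit(v):
        return v / (np.linalg.norm(v) + 1e-12)

    def index_vector(lsi_vec, color_hist, w_text=0.5):
        # Each image inherits the LSI vector of its containing document and
        # contributes its own visual statistics; both parts are normalized
        # so that neither modality dominates the combined vector.
        return np.concatenate([w_text * unit(lsi_vec),
                               (1 - w_text) * unit(color_hist)])

    hist = np.array([0.6, 0.3, 0.1])         # toy 3-bin color histogram
    img_index = index_vector(lsi_docs[0], hist)

    query = index_vector(lsi_docs[1], np.array([0.5, 0.4, 0.1]))
    similarity = float(unit(img_index) @ unit(query))   # cosine similarity
    print(similarity)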

Relevance: 20.00%

Abstract:

Many people suffer from conditions that lead to deterioration of motor control and make access to the computer using traditional input devices difficult. In particular, they may lose control of hand movement to the extent that the standard mouse cannot be used as a pointing device. Most current alternatives use markers or specialized hardware to track and translate a user's movement to pointer movement. These approaches may be perceived as intrusive (for example, wearable devices). Camera-based assistive systems that use visual tracking of features on the user's body often require cumbersome manual adjustment. This paper introduces an enhanced computer vision based strategy where features, for example on a user's face, viewed through an inexpensive USB camera, are tracked and translated to pointer movement. The main contributions of this paper are (1) enhancing a video based interface with a mechanism for mapping feature movement to pointer movement, which allows users to navigate to all areas of the screen even with very limited physical movement, and (2) providing a customizable, hierarchical navigation framework for human computer interaction (HCI). This framework provides effective use of the vision-based interface system for accessing multiple applications in an autonomous setting. Experiments with several users show the effectiveness of the mapping strategy and its usage within the application framework as a practical tool for desktop users with disabilities.
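
One plausible form of such a feature-to-pointer mapping, sketched under assumptions: the gain function and all constants below are illustrative, not the paper's actual mapping. The idea is to amplify relative feature displacement nonlinearly so that a very limited range of motion can still reach every screen position.

    def feature_to_pointer(prev_feat, cur_feat, pointer, screen=(1920, 1080),
                           gain=8.0, accel=1.5):
        dx = cur_feat[0] - prev_feat[0]
        dy = cur_feat[1] - prev_feat[1]
        # Nonlinear gain: larger displacements are amplified more, so a
        # user with restricted movement can still traverse the full screen.
        px = pointer[0] + gain * dx * (1 + accel * abs(dx))
        py = pointer[1] + gain * dy * (1 + accel * abs(dy))
        # Clamp to screen bounds.
        return (min(max(px, 0), screen[0] - 1),
                min(max(py, 0), screen[1] - 1))

    # Example: a tracked feature (e.g., a point on the face, in camera
    # coordinates) drifts slightly right and down between two frames.
    pointer = (960, 540)
    pointer = feature_to_pointer((320, 240), (326, 241), pointer)
    print(pointer)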

Relevance: 20.00%

Abstract:

A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. Here it is proposed how these layered circuits help to realize the processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work.

Relevance: 20.00%

Abstract:

Lehar's lively discussion builds on a critique of neural models of vision that is incorrect in its general and specific claims. He espouses a Gestalt perceptual approach, rather than one consistent with the "objective neurophysiological state of the visual system" (p. 1). Contemporary vision models realize his perceptual goals and also quantitatively explain neurophysiological and anatomical data.

Relevance: 20.00%

Abstract:

Perceptual grouping is well-known to be a fundamental process during visual perception, notably grouping across scenic regions that do not receive contrastive visual inputs. Illusory contours are a classical example of such groupings. Recent psychophysical and neurophysiological evidence has shown that the grouping process can facilitate rapid synchronization of the cells that are bound together by a grouping, even when the grouping must be completed across regions that receive no contrastive inputs. Synchronous grouping can thereby bind together different object parts that may have become desynchronized due to a variety of factors, and can enhance the efficiency of cortical transmission. Neural models of perceptual grouping have clarified how such fast synchronization may occur by using bipole grouping cells, whose predicted properties have been supported by psychophysical, anatomical, and neurophysiological experiments. These models have not, however, incorporated some of the realistic constraints on which groupings in the brain are conditioned, notably the measured spatial extent of long-range interactions in layer 2/3 of a grouping network, and realistic synaptic and axonal signaling delays within and across cells in different cortical layers. This work addresses the question: Can long-range interactions that obey the bipole constraint achieve fast synchronization under realistic anatomical and neurophysiological constraints that initially desynchronize grouping signals? Can the cells that synchronize retain their analog sensitivity to changing input amplitudes? Can the grouping process complete and synchronize illusory contours across gaps in bottom-up inputs? Our simulations show that the answer to these questions is Yes.
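
As a loose toy illustration of coupling-driven resynchronization only, and plainly not the paper's laminar bipole model: in the classic two-unit Kuramoto-style sketch below, oscillators that start out of phase are pulled into synchrony by mutual coupling.

    import math

    theta = [0.0, 2.0]      # initial phases (desynchronized)
    omega = [1.0, 1.0]      # identical natural frequencies
    K, dt = 2.0, 0.01       # coupling strength, integration time step

    for _ in range(2000):
        d0 = omega[0] + K * math.sin(theta[1] - theta[0])
        d1 = omega[1] + K * math.sin(theta[0] - theta[1])
        theta[0] += dt * d0
        theta[1] += dt * d1

    gap = (theta[1] - theta[0]) % (2 * math.pi)
    print(round(min(gap, 2 * math.pi - gap), 4))  # ~0: the units synchronized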

Relevance: 20.00%

Abstract:

How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.