998 results for Object Modeling


Relevance: 30.00%

Abstract:

Process modeling – the design and use of graphical documentation of an organization's business processes – is a key method for documenting and using information about the operations of businesses. Yet despite current interest in process modeling, this research area faces fundamental challenges. Key unanswered questions concern the impact of process modeling in organizational practice and the mechanisms through which these impacts develop. To answer these questions and to provide a better understanding of process modeling impact, I turn to the concept of affordances. Affordances describe the possibilities for goal-oriented action that a technical object offers to a user, a notion that has received growing attention from IS researchers. The purpose of my research is to further develop the IS discipline's understanding of affordances and impacts of information objects, such as the process models used by analysts for information systems analysis and design. Specifically, I seek to extend existing theory on the emergence, perception, and actualization of affordances. I develop a research model that describes the process by which affordances emerge between an individual and an object, how affordances are perceived, and how they are actualized by the individual. The proposed model also explains the role of the information available to the individual and the influence of perceived actualization effort. I operationalize and test this research model empirically, using a full-cycle, mixed-methods study consisting of a case study and an experiment.

Relevance: 30.00%

Abstract:

Broad knowledge is required when a business process is modeled by a business analyst. We argue that existing Business Process Management methodologies do not consider business goals at the appropriate level. In this paper we present an approach to integrate business goals and business process models. We design a Business Goal Ontology for modeling business goals. Furthermore, we devise a modeling pattern for linking the goals to process models and show how the ontology can be used in query answering. In this way, we integrate the intentional perspective into our business process ontology framework, enriching the process description and enabling new types of business process analysis. © 2008 IEEE.
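
By way of illustration, here is a minimal Python sketch of the linking pattern described above, assuming a simple in-memory representation: the class name `BusinessGoal`, the `activity_supports` mapping, and the sample goal and activity names are all hypothetical and are not taken from the Business Goal Ontology itself.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessGoal:
    """A node in a hypothetical business-goal hierarchy."""
    name: str
    subgoals: list["BusinessGoal"] = field(default_factory=list)

# Illustrative linking pattern: each process activity is annotated
# with the (hypothetical) goals it supports.
activity_supports = {
    "Approve loan": ["Reduce credit risk"],
    "Notify customer": ["Improve customer satisfaction"],
}

def activities_supporting(goal_name: str) -> list[str]:
    """Query answering in miniature: which activities support a given goal?"""
    return [a for a, goals in activity_supports.items() if goal_name in goals]

print(activities_supporting("Reduce credit risk"))  # ['Approve loan']
```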

Relevance: 30.00%

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images. Classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability; earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with, which allows the essential to be separated from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs, and we ask whether accuracy versus effort trade-offs can be controlled after training. Regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner and then ask whether problem-specific organization is necessary.
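
The accuracy-versus-effort idea can be made concrete with a generic classifier cascade, a common way of realizing such trade-offs. The sketch below is only an illustration of the principle, not the thesis's framework; the stage functions, their costs, and the confidence threshold are hypothetical.

```python
import numpy as np

def cascade_predict(x, stages, threshold=0.9):
    """Run increasingly expensive stages until one is confident enough.

    `stages` is a list of (predict_proba, cost) pairs ordered from cheap
    to expensive; `threshold` controls the accuracy/effort trade-off
    after training, without retraining any stage.
    """
    total_cost, probs = 0.0, None
    for predict_proba, cost in stages:
        probs = predict_proba(x)
        total_cost += cost
        if probs.max() >= threshold:   # confident enough: stop early and
            break                      # avoid the remaining computations
    return int(np.argmax(probs)), total_cost

# Toy stages: a cheap, noisy classifier and an expensive, sharper one.
rng = np.random.default_rng(0)
cheap = (lambda x: rng.dirichlet([1.0, 1.0, 1.0]), 1.0)
sharp = (lambda x: np.array([0.05, 0.90, 0.05]), 10.0)
print(cascade_predict(None, [cheap, sharp], threshold=0.8))
```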

Relevance: 30.00%

Abstract:

Computerized tomography is an imaging technique which produces a cross-sectional map of an object from its line integrals. Image reconstruction algorithms require a collection of line integrals covering the whole measurement range. However, in many practical situations part of the projection data is inaccurately measured or not measured at all. In such incomplete-projection-data situations, conventional image reconstruction algorithms such as the convolution back-projection (CBP) algorithm and the Fourier reconstruction algorithm, which assume the projection data to be complete, produce degraded images. In this paper, multiresolution, multiscale modeling of the wavelet transform coefficients of the projections is proposed for projection completion. The missing coefficients are then predicted from these models at each scale, followed by an inverse wavelet transform to obtain the estimated projection data.
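
As a rough illustration of wavelet-domain projection completion, the following Python sketch decomposes a single projection with PyWavelets and fills the coefficients overlapping a gap; the linear interpolation used here is only a placeholder for the multiresolution, multiscale models proposed in the paper, and the wavelet choice (`db4`) and decomposition level are assumptions.

```python
import numpy as np
import pywt

def complete_projection(projection, missing):
    """Fill a gap in one projection via its wavelet coefficients.

    `missing` is a boolean mask over detector bins.  The per-scale
    "prediction" below is a crude linear interpolation, standing in for
    the multiresolution models described in the abstract.
    """
    coeffs = pywt.wavedec(projection, "db4", level=3)
    filled = []
    for c in coeffs:
        # Rescale the detector-bin mask to this coefficient grid, then
        # interpolate the unknown coefficients from the known ones.
        grid = np.linspace(0.0, 1.0, len(c))
        mask = np.interp(grid, np.linspace(0.0, 1.0, len(missing)),
                         missing.astype(float)) > 0.5
        c = c.copy()
        if mask.any() and (~mask).any():
            c[mask] = np.interp(np.flatnonzero(mask),
                                np.flatnonzero(~mask), c[~mask])
        filled.append(c)
    return pywt.waverec(filled, "db4")[: len(projection)]

# Toy usage: a sinusoidal projection with 20 missing central bins.
proj = np.sin(np.linspace(0, 3 * np.pi, 128))
gap = np.zeros(128, dtype=bool)
gap[54:74] = True
estimate = complete_projection(np.where(gap, 0.0, proj), gap)
```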

Relevance: 30.00%

Abstract:

Monte Carlo modeling of light transport in multilayered tissue (MCML) is modified to incorporate objects of various shapes (sphere, ellipsoid, cylinder, or cuboid) with a refractive-index-mismatched boundary. These geometries would be useful for modeling lymph nodes, tumors, blood vessels, capillaries, bones, the head, and other body parts. Mesh-based Monte Carlo (MMC) has also been used to compare results against the MCML with embedded objects (MCML-EO). Our simulation assumes a realistic tissue model and can also handle transmission/reflection at the object-tissue boundary due to the refractive-index mismatch. A simulation with MCML-EO takes a few seconds, whereas MMC takes nearly an hour for the same geometry and optical properties. Contour plots of the fluence distribution from MCML-EO and MMC correlate well. This study helps one decide which tool to use for modeling light propagation in biological tissue with objects of regular shapes embedded in it. For irregular inhomogeneities in the model (tissue), MMC has to be used; if the embedded objects (inhomogeneities) are of regular geometry, then MCML-EO is the better option, as simulations such as Raman scattering, fluorescence imaging, and optical coherence tomography are currently possible only with MCML. (C) 2014 Society of Photo-Optical Instrumentation Engineers (SPIE).
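
The geometric core of embedding regular objects is deciding, at each photon step, whether a position lies inside an object so that the correct refractive index is applied at the boundary. The following is a minimal Python sketch of such point-in-object tests; it is illustrative only and is not the MCML-EO implementation.

```python
import numpy as np

def inside_sphere(p, center, radius):
    return float(np.sum((p - center) ** 2)) <= radius ** 2

def inside_ellipsoid(p, center, semi_axes):
    return float(np.sum(((p - center) / semi_axes) ** 2)) <= 1.0

def inside_cylinder(p, base, height, radius):
    # Axis assumed parallel to z and starting at `base`.
    dz = p[2] - base[2]
    radial = np.hypot(p[0] - base[0], p[1] - base[1])
    return 0.0 <= dz <= height and radial <= radius

def inside_cuboid(p, lower, upper):
    return bool(np.all((p >= lower) & (p <= upper)))

# A photon-transport step would call one of these tests to decide which
# refractive index applies and whether a Fresnel reflection/transmission
# event occurs at the object-tissue boundary.
p = np.array([0.10, 0.00, 0.30])
print(inside_sphere(p, np.array([0.0, 0.0, 0.3]), 0.2))  # True
```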

Relevance: 30.00%

Abstract:

An action is typically composed of different parts of an object moving in particular sequences. The presence of different motions (represented as a 1D histogram) has been used in the traditional bag-of-words (BoW) approach for recognizing actions. However, the interactions among the motions also form a crucial part of an action. Different object parts have varying degrees of interaction with the other parts during an action cycle, and it is these interactions we want to quantify in order to bring in additional information about the actions. In this paper we propose a causality-based approach for quantifying the interactions to aid action classification. Granger causality is used to compute the cause-and-effect relationships for pairs of motion trajectories of a video. A 2D histogram descriptor for the video is constructed from these pairwise measures. Our proposed method of obtaining pairwise measures for videos is also applicable to large datasets. We have conducted experiments on challenging action recognition databases such as HMDB51 and UCF50 and show that our causality descriptor encodes additional information about the actions and performs on par with state-of-the-art approaches. Due to its complementary nature, a further increase in performance can be observed by combining our approach with state-of-the-art approaches.
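
To make the pairwise-causality idea concrete, the sketch below scores each ordered pair of motion trajectories with a Granger-causality F-statistic (via `statsmodels`) and bins the resulting matrix into a 2D histogram. The toy trajectories, lag choice, and binning are assumptions for illustration; they are not the descriptor construction used in the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def causality_strength(x, y, maxlag=2):
    """F-statistic for 'y Granger-causes x', taken at the best lag."""
    res = grangercausalitytests(np.column_stack([x, y]), maxlag)
    return max(res[lag][0]["ssr_ftest"][0] for lag in res)

# Toy "motion trajectories": y leads x by one frame, plus noise.
rng = np.random.default_rng(0)
y = rng.standard_normal(200)
x = np.roll(y, 1) + 0.1 * rng.standard_normal(200)
trajectories = [x, y]

n = len(trajectories)
pairwise = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            pairwise[i, j] = causality_strength(trajectories[i], trajectories[j])

# One (illustrative) way to summarize the pairwise measures as a 2D
# descriptor: a joint histogram of cause->effect and effect->cause scores.
hist, _, _ = np.histogram2d(pairwise.ravel(), pairwise.T.ravel(), bins=8)
```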

Relevance: 30.00%

Abstract:

This thesis addresses a series of topics related to the question of how people find the foreground objects from complex scenes. With both computer vision modeling, as well as psychophysical analyses, we explore the computational principles for low- and mid-level vision.

We first explore the computational methods of generating saliency maps from images and image sequences. We propose an extremely fast algorithm called Image Signature that detects the locations in the image that attract human eye gazes. With a series of experimental validations based on human behavioral data collected from various psychophysical experiments, we conclude that the Image Signature and its spatial-temporal extension, the Phase Discrepancy, are among the most accurate algorithms for saliency detection under various conditions.
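
For concreteness, here is a minimal single-channel sketch of the signature recipe: take the sign of the image's DCT, invert the transform, square, and smooth. Parameter choices such as the Gaussian width and the toy input are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(img, sigma=3.0):
    """Saliency from the sign of the image's DCT (single-channel sketch)."""
    signature = np.sign(dctn(img, norm="ortho"))
    reconstruction = idctn(signature, norm="ortho")
    return gaussian_filter(reconstruction ** 2, sigma)

# Toy example: a small bright square on a flat background should pop out,
# i.e. the saliency peak should fall in or near the square.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
sal = image_signature_saliency(img)
print(np.unravel_index(np.argmax(sal), sal.shape))
```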

In the second part, we bridge the gap between fixation prediction and salient object segmentation with two efforts. First, we propose a new dataset that contains both fixation and object segmentation information. By simultaneously presenting the two types of human data in the same dataset, we are able to analyze their intrinsic connection, as well as understand the drawbacks of today's "standard" but inappropriately labeled salient object segmentation dataset. Second, we propose an algorithm for salient object segmentation. Based on our discoveries about the connections between fixation data and salient object segmentation data, our model significantly outperforms all existing models on all three datasets by large margins.

In the third part of the thesis, we discuss topics around the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify potential pitfalls in algorithm evaluation for the problem of boundary detection. Our analysis indicates that today's popular boundary detection datasets contain a significant level of noise, which may severely influence benchmarking results. To give further insight into the labeling process, we propose a model that characterizes the human factors at work during labeling.

The analyses reported in this thesis offer new perspectives on a series of interrelated issues in low- and mid-level vision. They raise warning signs about some of today's "standard" procedures, while proposing new directions to encourage future research.

Relevance: 30.00%

Abstract:

We propose the exploding-reflector method to simulate a monostatic survey with a single simulation. The exploding reflector, used in seismic modeling, is adapted for ground-penetrating radar (GPR) modeling by using the analogy between acoustic and electromagnetic waves. The method can be used with ray tracing to obtain the location of the interfaces and estimate the properties of the medium on the basis of the traveltimes and reflection amplitudes. In particular, these can provide a better estimation of the conductivity and geometrical details. The modeling methodology is complemented with the use of the plane-wave method. The technique is illustrated with GPR data from an excavated tomb of the nineteenth century.
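
A small numeric illustration of why a single exploding-reflector run reproduces a monostatic section: with sources fired on the reflectors and the wavefield propagated at half the medium velocity, the recorded one-way times match the true two-way zero-offset times, so depths follow from depth = (v/2)·t. The permittivity value below is an assumed, illustrative soil property, not data from the survey.

```python
C = 0.2998  # speed of light in vacuum, m/ns

def gpr_velocity(eps_r):
    """EM velocity in a low-loss medium, m/ns (v ~ c / sqrt(eps_r))."""
    return C / eps_r ** 0.5

def reflector_depth(two_way_time_ns, eps_r):
    """Depth from a monostatic (zero-offset) two-way traveltime.

    The exploding-reflector run, propagated at half velocity, records
    exactly these two-way times, hence depth = (v / 2) * t.
    """
    return 0.5 * gpr_velocity(eps_r) * two_way_time_ns

# Assumed dry-sand permittivity (eps_r ~ 4): a reflection at 20 ns
# two-way time corresponds to an interface roughly 1.5 m deep.
print(round(reflector_depth(20.0, 4.0), 2))  # ~1.5
```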

Relevance: 30.00%

Abstract:

Nearest neighbor retrieval is the task of identifying, given a database of objects and a query object, the objects in the database that are the most similar to the query. Retrieving nearest neighbors is a necessary component of many practical applications, in fields as diverse as computer vision, pattern recognition, multimedia databases, bioinformatics, and computer networks. At the same time, finding nearest neighbors accurately and efficiently can be challenging, especially when the database contains a large number of objects, and when the underlying distance measure is computationally expensive. This thesis proposes new methods for improving the efficiency and accuracy of nearest neighbor retrieval and classification in spaces with computationally expensive distance measures. The proposed methods are domain-independent, and can be applied in arbitrary spaces, including non-Euclidean and non-metric spaces. In this thesis particular emphasis is given to computer vision applications related to object and shape recognition, where expensive non-Euclidean distance measures are often needed to achieve high accuracy. The first contribution of this thesis is the BoostMap algorithm for embedding arbitrary spaces into a vector space with a computationally efficient distance measure. Using this approach, an approximate set of nearest neighbors can be retrieved efficiently - often orders of magnitude faster than retrieval using the exact distance measure in the original space. The BoostMap algorithm has two key distinguishing features with respect to existing embedding methods. First, embedding construction explicitly maximizes the amount of nearest neighbor information preserved by the embedding. Second, embedding construction is treated as a machine learning problem, in contrast to existing methods that are based on geometric considerations. The second contribution is a method for constructing query-sensitive distance measures for the purposes of nearest neighbor retrieval and classification. In high-dimensional spaces, query-sensitive distance measures allow for automatic selection of the dimensions that are the most informative for each specific query object. It is shown theoretically and experimentally that query-sensitivity increases the modeling power of embeddings, allowing embeddings to capture a larger amount of the nearest neighbor structure of the original space. The third contribution is a method for speeding up nearest neighbor classification by combining multiple embedding-based nearest neighbor classifiers in a cascade. In a cascade, computationally efficient classifiers are used to quickly classify easy cases, and classifiers that are more computationally expensive and also more accurate are only applied to objects that are harder to classify. An interesting property of the proposed cascade method is that, under certain conditions, classification time actually decreases as the size of the database increases, a behavior that is in stark contrast to the behavior of typical nearest neighbor classification systems. The proposed methods are evaluated experimentally in several different applications: hand shape recognition, off-line character recognition, online character recognition, and efficient retrieval of time series. In all datasets, the proposed methods lead to significant improvements in accuracy and efficiency compared to existing state-of-the-art methods. 
In some datasets, the general-purpose methods introduced in this thesis even outperform domain-specific methods that have been custom-designed for such datasets.
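
The general setting addressed here can be illustrated with a generic filter-and-refine retrieval loop: a cheap embedded distance prunes the database, and the expensive exact distance is computed only on a shortlist. The sketch below is not the BoostMap algorithm itself (which learns the embedding through boosting); the embedding, distances, and data are placeholders.

```python
import numpy as np

def filter_and_refine(query, database, embed, exact_dist, k=1, shortlist=10):
    """Nearest neighbours under an expensive distance, accelerated by a
    cheap vector-space embedding (filter) and an exact re-ranking of a
    short candidate list (refine)."""
    q_emb = embed(query)
    db_emb = np.array([embed(x) for x in database])
    approx = np.sum(np.abs(db_emb - q_emb), axis=1)     # cheap L1 filter
    candidates = np.argsort(approx)[:shortlist]
    exact = sorted((exact_dist(query, database[i]), i) for i in candidates)
    return [i for _, i in exact[:k]]                    # refined neighbours

# Toy usage: the "expensive" distance is squared Euclidean and the
# embedding keeps only the first two coordinates (both placeholders).
rng = np.random.default_rng(1)
db = list(rng.standard_normal((1000, 16)))
q = rng.standard_normal(16)
print(filter_and_refine(q, db, lambda v: v[:2],
                        lambda a, b: float(np.sum((a - b) ** 2))))
```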

Relevance: 30.00%

Abstract:

The importance of patterns in constructing complex systems has long been recognised in other disciplines. In software engineering, for example, well-crafted object-oriented architectures contain several design patterns. Focusing on the mechanisms of constructing software during system development can yield an architecture that is simpler, clearer and more understandable than if design patterns were ignored or not properly applied. In this paper, we propose a model that uses object-oriented design patterns to develop a core bitemporal conceptual model. We define three core design patterns that together form a core conceptual model of a typical bitemporal object. Our framework is known as the Bitemporal Object, State and Event Modelling Approach (BOSEMA) and the resulting core model is known as a Bitemporal Object, State and Event (BOSE) model. Using this approach, we demonstrate that data modelling can be enriched by well-known design patterns that help designers build complex models of bitemporal databases.
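
The essence of a bitemporal object, two independent time axes per state version (valid time and transaction time), can be sketched as follows. The class and field names are illustrative and are not the BOSEMA/BOSE patterns themselves.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class BitemporalVersion:
    """One version of an object's state, stamped on both time axes."""
    state: str
    valid_from: date      # when the fact holds in the modelled world
    valid_to: date
    recorded_from: date   # when the database learned about the fact
    recorded_to: date

def as_of(history, valid_on, known_on):
    """State valid on `valid_on`, as the database knew it on `known_on`."""
    for v in history:
        if (v.valid_from <= valid_on < v.valid_to
                and v.recorded_from <= known_on < v.recorded_to):
            return v.state
    return None

history = [
    BitemporalVersion("employee", date(2020, 1, 1), date(2021, 1, 1),
                      date(2020, 1, 5), date(9999, 12, 31)),
    BitemporalVersion("manager", date(2021, 1, 1), date(9999, 12, 31),
                      date(2021, 1, 3), date(9999, 12, 31)),
]
print(as_of(history, date(2020, 6, 1), date(2020, 6, 1)))  # 'employee'
```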

Relevance: 30.00%

Abstract:

Visual salience is an intriguing phenomenon observed in biological neural systems. Numerous attempts have been made to model visual salience mathematically using various feature contrasts, either local or global. However, these algorithmic models tend to ignore the problem's biological solutions, in which visual salience appears to arise during the propagation of visual stimuli along the visual cortex. In this paper, inspired by the conjecture that salience arises from deep propagation along the visual cortex, we present Deep Salience, a multi-layer model based on successive Markov random fields (sMRFs) that analyzes the input image through deep belief propagation. As a result, the foreground object can be automatically separated from the background in a fully unsupervised way. Experimental evaluation on the benchmark dataset validates that our Deep Salience model can consistently outperform eleven state-of-the-art salience models, yielding higher rates in the precision-recall tests and attaining the best F-measure and mean squared error in the experiments.
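
As a generic, much-simplified stand-in for separating foreground from background with a Markov random field, the sketch below runs a few synchronous ICM-style updates on an Ising-like MRF whose unary term is a contrast cue. It illustrates label propagation between neighbouring pixels only; it is not the sMRF model or the deep belief propagation of the paper, and the contrast cue, weight, and iteration count are assumptions.

```python
import numpy as np

def mrf_foreground(contrast, beta=0.8, iters=10):
    """Foreground/background labelling on an Ising-like pairwise MRF.

    `contrast` in [0, 1] acts as a unary salience cue; the pairwise term
    (weight `beta`) favours neighbouring pixels sharing a label.  Labels
    are updated with synchronous ICM-style sweeps.
    """
    labels = (contrast > 0.5).astype(float)
    for _ in range(iters):
        # Count 4-neighbours currently labelled foreground (wrap-around
        # boundaries are accepted for this toy example).
        fg_neigh = sum(np.roll(labels, s, axis=a) for a in (0, 1) for s in (-1, 1))
        e_fg = (1.0 - contrast) + beta * (4.0 - fg_neigh)  # cost of label 1
        e_bg = contrast + beta * fg_neigh                  # cost of label 0
        labels = (e_fg < e_bg).astype(float)
    return labels.astype(bool)

# Toy usage: a high-contrast block is recovered as the foreground mask.
contrast = np.zeros((32, 32))
contrast[10:20, 12:22] = 0.9
mask = mrf_foreground(contrast)
```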