72 results for Facial Object Based Method

in Deakin Research Online - Australia


Relevance:

100.00%

Abstract:

One of the content-based image retrieval techniques is the shape-based technique, which allows users to ask for objects similar in shape to a query object. Sajjanhar and Lu proposed a method for shape representation and similarity measurement called the grid-based method [1], and showed that it is effective for the retrieval of segmented objects based on shape. In this paper, we describe a system that uses the grid-based method for the retrieval of images containing multiple objects. We perform experiments on the prototype system to compare the performance of the grid-based method with that of the Fourier descriptors method [2], and present preliminary results.
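A minimal sketch of the general idea behind a grid-based shape signature, assuming a binary object mask and a simple cell-occupancy threshold; this illustrates the flavour of the technique, not Sajjanhar and Lu's exact formulation:

```python
import numpy as np

def grid_signature(mask, grid=8):
    """Overlay a grid x grid lattice on the object's bounding box and
    mark each cell the shape covers above a small occupancy threshold."""
    ys, xs = np.nonzero(mask)
    box = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = box.shape
    sig = np.zeros((grid, grid), dtype=np.uint8)
    for i in range(grid):
        for j in range(grid):
            cell = box[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            sig[i, j] = cell.size > 0 and cell.mean() > 0.1
    return sig.ravel()

def grid_distance(a, b):
    """Dissimilarity = number of grid cells on which two shapes differ."""
    return int(np.sum(a != b))

disk = (np.hypot(*np.mgrid[-16:16, -16:16]) < 12).astype(float)
square = np.zeros((32, 32)); square[4:28, 4:28] = 1.0
print(grid_distance(grid_signature(disk), grid_signature(square)))  # small, nonzero
```

Comparing two shapes then reduces to counting disagreeing grid cells, which is what makes the representation cheap to compute and index.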

Relevance:

100.00%

Abstract:

With the shift from paper to electronic records, the health industry is relying increasingly on technology to maintain and update the well-being of patients. This reliance on technology demands a high level of protection from unwanted technological disasters and human threats. Research shows deficiencies in the implementation and use of security controls, and current analysis methods lack techniques for analysing both the technical and social aspects of security. The aim of this paper is to introduce a UML-based information security evaluation methodology for health information systems.

Relevance:

100.00%

Abstract:

Requirements engineering is the initial phase in the development of software applications and information systems. It is concerned with understanding and specifying the customer's requirements for the system to be delivered. Throughout the literature, this is agreed to be one of the most crucial and, unfortunately, most problematic phases in development. Despite the diversity of research directions, approaches, and methods, understanding and management of the process remain limited. Among contemporary approaches to improving the practice of requirements engineering, the Formal Object-Oriented Method (FOOM) has been introduced as a promising solution. The FOOM approach to requirements engineering is based on a synthesis of socio-organisational theory, the object-oriented approach, and mathematical formal specification. The FOOM specification process is evolutionary and involves a large volume of changes in requirements. During this process, requirements evolve through informal, semi-formal, and formal forms while maintaining semantic links between these forms and, most importantly, conforming to the customer's requirements. A deep understanding of the complexity of the requirements model and its dynamics is critical to improving the management of the requirements engineering process. This thesis investigates the benefits of documenting both the evolution of the requirements model and the rationale for that evolution. Design explanation explains and justifies the deliberations of, and decisions made during, the design activity. In this thesis, design explanation is used to describe the requirements engineering process in order to improve the understandability of, and traceability within, the evolving requirements specification. The design explanation recorded during this research project also assisted the researcher in gaining insights into the creative and opportunistic characteristics of the requirements engineering process. The thesis offers an interpretive investigation into incorporating design explanation within FOOM in order to extend and enhance the method. The researcher's interpretation and analysis of the collected data highlight an insight-driven and opportunistic process rather than a strictly and systematically predefined one. In fact, the process was not smoothly evolutionary, but involved occasional 'crisis' points at which the model was reconceptualised, simplified, and restructured. The contributions of the thesis therefore lie not only in an effective incorporation of design explanation within FOOM, but also in a deeper understanding of the dynamic process of requirements engineering. This new understanding of the complexity of the requirements model and its dynamics suggests directions for future research and forms a basis for a new approach to process management.

Relevance:

100.00%

Abstract:

Sports video data is growing rapidly as a result of maturing digital technologies that support digital video capture, faster data processing, and large-scale storage. However, (1) semi-automatic content extraction and annotation, (2) scalable indexing models, and (3) effective retrieval and browsing still pose the most challenging problems for maximizing the usage of large video databases. This article presents the findings of a comprehensive work that proposes a scalable and extensible sports video retrieval system, with two major contributions in the area of sports video indexing and retrieval. The first contribution is a new sports video indexing model that applies a semi-schema-based indexing scheme on top of an object-relationship approach. This indexing model is scalable and extensible, as it enables gradual index construction supported by the ongoing development of future content-extraction algorithms. The second contribution is a set of novel queries, based on XQuery, that generate dynamic and user-oriented summaries and event structures. The proposed sports video retrieval system has been fully implemented and populated with soccer, tennis, swimming, and diving videos, and has been evaluated with 20 users to demonstrate and confirm its feasibility and benefits. The experimental sports genres were specifically selected to represent the four main categories of the sports domain: period-, set-point-, time (race)-, and performance-based sports. Thus, the proposed system should be generic and robust for all types of sports.
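As a rough illustration of what a semi-schema-based index affords, the sketch below stores each event with a fixed core schema plus a free-form, sport-specific part, and answers a simple summary query. The Event structure, field names, and data are hypothetical; the paper's actual model and its XQuery-based queries are richer:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    video_id: str                 # core (schema) part shared by all sports
    sport: str
    start_s: float
    end_s: float
    label: str
    extra: dict = field(default_factory=dict)   # extensible, sport-specific part

index = [
    Event("m01", "soccer", 310.0, 325.5, "goal", {"team": "home"}),
    Event("m01", "soccer", 947.2, 953.0, "foul", {"card": "yellow"}),
    Event("t07", "tennis", 122.0, 160.4, "set_point", {"set": 2}),
]

def summary(sport, label):
    """A user-oriented 'summary query': all segments of one event type."""
    return [(e.video_id, e.start_s, e.end_s)
            for e in index if e.sport == sport and e.label == label]

print(summary("soccer", "goal"))   # [('m01', 310.0, 325.5)]
```

New event types only add entries to the extensible part, which is what lets the index grow gradually as new content-extraction algorithms become available.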

Relevance:

100.00%

Abstract:

The delta technique has been proposed in the literature for constructing prediction intervals for targets estimated by neural networks. The quality of the prediction intervals constructed using this technique depends strongly on the characteristics of the underlying neural network. Unfortunately, the literature offers little information about how these dependencies can be managed in order to optimize prediction intervals. This study attempts to optimize the length and coverage probability of prediction intervals by modifying the structure and parameters of the underlying neural networks. In an evolutionary optimization, a genetic algorithm is applied to find the optimal values of the network size and training hyper-parameters. The applicability and efficiency of the proposed optimization technique are examined and demonstrated using a real case study. It is shown that applying the proposed optimization technique significantly improves the quality of the constructed prediction intervals in terms of length and coverage probability.
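A sketch of the kind of coverage-width cost such an optimization might minimise; the exact objective, the penalty form, and the eta weighting below are assumptions, not the paper's formula:

```python
import numpy as np

def pi_cost(y, lower, upper, confidence=0.95, eta=50.0):
    """Coverage-width cost: penalise intervals that are wide or that
    cover fewer targets than the nominal confidence level."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    picp = float(np.mean((y >= lower) & (y <= upper)))   # PI coverage probability
    mpiw = float(np.mean(upper - lower))                 # mean PI width
    penalty = eta * max(0.0, confidence - picp)          # only when under-covered
    return mpiw * (1.0 + penalty)

# Example: three targets, all covered, so only width contributes to the cost.
print(pi_cost([1.0, 2.0, 3.0], [0.5, 1.4, 2.9], [1.6, 2.5, 3.8]))
```

A genetic algorithm would evaluate this cost for each candidate chromosome encoding the network size and training hyper-parameters, evolving the population toward narrow, well-covering intervals.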

Relevance:

100.00%

Abstract:

This paper presents a salience-based technique for the annotation of directly quoted speech in fiction text. In particular, it determines to what extent a naïve scoring technique (one that does not use complex machine learning or knowledge-based techniques) can be used to identify the speaker of speech quotes. The presented technique makes use of a scoring scheme, similar to that commonly found in knowledge-poor anaphora resolution research, together with a set of hand-coded rules for the final identification of the speaker of each quote in the text. Speaker identification is achieved through three tasks: identifying the speech verb associated with a quote, with a recall of 94.41%; identifying the actor associated with a quote, with a recall of 88.22%; and selecting a speaker, with an accuracy of 79.40%.
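The sketch below illustrates the flavour of such naïve salience scoring, assuming hypothetical candidate features and hand-picked weights (mention recency, speech-verb attachment, named-entity status); the paper's actual feature set, weights, and rules are not reproduced here:

```python
SPEECH_VERBS = {"said", "replied", "asked", "shouted", "whispered"}

def pick_speaker(candidates, quote_position):
    """Score each candidate actor for a quote: prefer recent mentions,
    subjects of speech verbs, and named entities; highest score wins."""
    best_name, best_score = None, float("-inf")
    for cand in candidates:
        score = -(quote_position - cand["position"])           # recency
        score += 10.0 if cand["verb"] in SPEECH_VERBS else 0.0
        score += 2.0 if cand["is_named_entity"] else 0.0
        if score > best_score:
            best_name, best_score = cand["name"], score
    return best_name

print(pick_speaker(
    [{"name": "Alice", "position": 3, "verb": "said", "is_named_entity": True},
     {"name": "Bob", "position": 5, "verb": "ran", "is_named_entity": True}],
    quote_position=6))   # Alice: the speech verb outweighs Bob's recency
```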

Relevance:

100.00%

Abstract:

This paper presents a projection pursuit (PP) based method for the blind separation of nonnegative sources. First, the available observation matrix is mapped to construct a new mixing model in which the inaccessible source matrix is normalized to be column-sum-to-1. Then, the PP method is applied to solve this new model: the mixing matrix is estimated column by column by tracing the projections of the mapped observations in specified directions, which leads to the recovery of the sources. Owing to its use of optimal projections, the proposed method is much faster than Chan's method, which makes similar assumptions to ours. It is also more effective at separating cross-correlated sources than independence- and uncorrelatedness-based methods, as it does not rely on any statistical information about the sources. Furthermore, the new method does not require the mixing matrix to be nonnegative. Simulation results demonstrate the superior performance of our method.
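A rough sketch of the simplex geometry this relies on, under simplifying assumptions (synthetic data, a single random projection direction, and some near-pure source columns); this is an illustration of the principle, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.2, 1.0, (4, 2))       # unknown mixing matrix: 4 sensors, 2 sources
S = rng.uniform(0.0, 1.0, (2, 500))     # nonnegative (possibly correlated) sources
S[:, :2] = np.eye(2)                    # ensure some near-pure columns exist
X = A @ S                               # observations

# Map the model so the (inaccessible) source matrix is column-sum-to-1:
# dividing each observed column by its sum rescales the sources the same way.
Xn = X / X.sum(axis=0, keepdims=True)

# Trace projections along a direction: the columns attaining the extreme
# projections sit at vertices of the data simplex, i.e. they estimate
# (normalized) columns of the mixing matrix.
d = rng.normal(size=4)
proj = d @ Xn
a1_hat = Xn[:, np.argmax(proj)]
a2_hat = Xn[:, np.argmin(proj)]
print(a1_hat, A[:, 0] / A[:, 0].sum())  # agree up to column order and scale
```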

Relevance:

100.00%

Abstract:

In recent years, significant effort has been devoted to predicting protein functions from protein interaction data generated by high-throughput techniques. However, predicting protein functions correctly and reliably remains a challenge. Many computational methods have recently been proposed for predicting protein functions, among which clustering-based methods are the most promising. The existing methods, however, mainly focus on modelling protein relationships and on prediction algorithms that statically predict functions from the clusters related to the unannotated proteins. In fact, clustering itself is a dynamic process, and function prediction should take this dynamic feature into consideration; unfortunately, it is ignored by existing prediction methods. In this paper, we propose a progressive clustering based prediction method that traces the functions of relevant annotated proteins across all clusters generated through the progressive clustering of proteins. A set of prediction criteria is proposed to predict the functions of unannotated proteins from all relevant clusters and traced functions. The method was evaluated on real protein interaction datasets, and the results demonstrate its effectiveness compared with representative existing methods.
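A toy sketch of the tracing idea, assuming hypothetical proteins, annotations, and two clustering levels produced by cutting a dendrogram; the paper's prediction criteria are more elaborate than this simple vote count:

```python
from collections import Counter

# Hypothetical nested clusterings, fine to coarse (e.g. dendrogram cuts).
levels = [
    [{"p2", "u"}, {"p1", "p3"}],             # fine clustering
    [{"p1", "p2", "p3", "u"}],               # coarse clustering
]
annotations = {"p1": {"transport"}, "p2": {"binding"}, "p3": {"binding"}}

def predict(unannotated):
    """Vote over the functions of annotated partners from every cluster
    containing the unannotated protein, across all clustering levels."""
    votes = Counter()
    for clusters in levels:
        for cluster in clusters:
            if unannotated in cluster:
                for protein in cluster & annotations.keys():
                    votes.update(annotations[protein])
    return votes.most_common(1)[0][0]

print(predict("u"))   # 'binding': it gathers votes at both clustering levels
```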

Relevance:

100.00%

Abstract:

In this paper, a new Fuzzy Set (FS) ranking method (for type-1 and interval type-2 FSs), based on the Dempster-Shafer Theory (DST) of evidence with fuzzy targets, is investigated. Fuzzy targets are adopted to reflect human viewpoints on fuzzy ranking. Two important measures in DST, the belief and plausibility measures, are used to rank FSs. The proposed approach is evaluated with several benchmark examples. The use of the belief and plausibility measures in fuzzy ranking is discussed and compared. We further analyse the capability of the proposed approach to fulfil the six reasonable fuzzy ordering properties discussed in [9]-[11].
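A compact sketch of ranking fuzzy sets against a fuzzy target, using the possibility/necessity forms that the plausibility and belief measures take for consonant (fuzzy) evidence; the membership functions and target below are invented, and the paper's construction is richer:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1001)

def tri(a, b, c):
    """Triangular membership function sampled on the domain x."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

target = tri(4, 8, 12)     # fuzzy target: 'high value'
A = tri(2, 4, 6)           # candidate fuzzy set A
B = tri(4, 6, 8)           # candidate fuzzy set B

def plausibility(fs, tgt):           # Pl = sup_x min(fs(x), tgt(x))
    return float(np.max(np.minimum(fs, tgt)))

def belief(fs, tgt):                 # Bel = inf_x max(tgt(x), 1 - fs(x))
    return float(np.min(np.maximum(tgt, 1.0 - fs)))

# B sits closer to the target, so it ranks above A under both measures.
print(plausibility(A, target), plausibility(B, target))   # ~0.33 vs ~0.67
print(belief(A, target), belief(B, target))               # 0.0  vs ~0.33
```

Ranking by plausibility gives the optimistic ordering and ranking by belief the pessimistic one; here the two agree.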

Relevance:

100.00%

Abstract:

Low glomerular (nephron) endowment has been associated with an increased risk of cardiovascular and renal disease in adulthood. Nephron endowment in humans is determined by 36 weeks of gestation, whereas in rats and mice nephrogenesis ends several days after birth. Specific genes and environmental perturbations have been shown to regulate nephron endowment. Until now, a design-based method for estimating nephron number in developing kidneys has been unavailable, due in part to the difficulty of unambiguously identifying developing glomeruli in histological sections. Here, we describe a method that uses lectin histochemistry to identify developing glomeruli and the physical disector/fractionator principle to provide unbiased estimates of total glomerular number (Nglom). We characterized Nglom throughout development in kidneys from 76 rats and modelled this development with a 5-parameter logistic equation to predict Nglom from embryonic day 17.25 to adulthood (r² = 0.98). This approach represents the first design-based method for estimating Nglom in the developing kidney.
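A brief sketch of fitting a 5-parameter logistic curve with SciPy on synthetic counts; the parameterisation below is one common 5PL form, and the ages and counts are made up, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic5(t, a, d, c, b, g):
    """5PL: lower asymptote a, upper asymptote d, inflection scale c,
    slope b, asymmetry g."""
    return d + (a - d) / (1.0 + (t / c) ** b) ** g

t = np.linspace(18, 40, 30)                 # hypothetical age axis (days)
rng = np.random.default_rng(1)
y = logistic5(t, 0.0, 30000.0, 24.0, 10.0, 1.0) + rng.normal(0, 500, t.size)

popt, _ = curve_fit(logistic5, t, y, p0=[0, 30000, 25, 8, 1], maxfev=20000)
print(popt)   # recovered parameters; logistic5(t, *popt) predicts N_glom
```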

Relevance:

100.00%

Abstract:

This paper introduces a new non-parametric method for uncertainty quantification through the construction of prediction intervals (PIs). The method takes the left and right end points of the type-reduced set of an interval type-2 fuzzy logic system (IT2FLS) model as the lower and upper bounds of a PI. No assumption is made about the data distribution, behaviour, or patterns when developing the intervals. A training method is proposed to link the confidence level (CL) concept of PIs to the intervals generated by IT2FLS models. The new PI-based training algorithm not only ensures that PIs constructed using IT2FLS models satisfy the CL requirements, but also reduces the widths of the PIs and generates practically informative PIs. Proper adjustment of the IT2FLS parameters is performed by minimizing a PI-based objective function, and a metaheuristic method is applied to minimize this non-linear, non-differentiable cost function. The performance of the proposed method is examined on seven synthetic and real-world benchmark case studies with homogeneous and heterogeneous noise. The results indicate that the proposed method is capable of generating high-quality PIs. Comparative studies also show that its performance equals or betters that of traditional neural network-based methods for constructing PIs in more than 90% of cases, with the superiority most evident for data with heterogeneous noise.
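The toy sketch below shows how an interval type-2 fuzzy model can emit an interval output usable as a PI. A real IT2FLS obtains its type-reduced set via Karnik-Mendel iterations; combining the lower and upper firing strengths directly, as done here, is a simplifying assumption for brevity:

```python
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def it2_interval(x, rules):
    """rules: (centre, sigma_lower, sigma_upper, consequent); each rule's
    footprint of uncertainty is a Gaussian with two widths."""
    fl = np.array([gauss(x, c, sl) for c, sl, su, y in rules])   # lower firing
    fu = np.array([gauss(x, c, su) for c, sl, su, y in rules])   # upper firing
    y = np.array([r[3] for r in rules])
    lo = float(np.sum(fl * y) / np.sum(fu))   # crude pessimistic bound
    hi = float(np.sum(fu * y) / np.sum(fl))   # crude optimistic bound
    return min(lo, hi), max(lo, hi)

rules = [(0.0, 0.3, 0.6, 1.0), (1.0, 0.3, 0.6, 2.0)]
print(it2_interval(0.5, rules))   # an interval bracketing the point estimate
```

Training, as the abstract describes, would adjust the rule parameters so that such intervals meet the nominal confidence level while staying as narrow as possible.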

Relevance:

100.00%

Abstract:

The focus of this paper is on handling non-monotone information in the modelling of a single-input monotone target system. On the one hand, the monotonicity property is a useful piece of prior (or additional) information that can be exploited in modelling a monotone target system. On the other hand, it is difficult to model a monotone system if the available information is not monotonically ordered. In this paper, an interval-based method for analysing non-monotonically ordered information is proposed, and its applicability to handling a non-monotone function, a non-monotone data set, and an incomplete and/or non-monotone fuzzy rule base is presented. The upper and lower bounds of the interval are first defined. The region governed by the interval serves as a coverage measure, whose size represents the uncertainty pertaining to the available information. The proposed approach constitutes a new method for transforming non-monotone information into an interval-valued monotone system, and the interval-based treatment of an incomplete and/or non-monotone fuzzy rule base constitutes a new fuzzy reasoning approach.
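One plausible reading of the interval construction, sketched below for a non-monotone data set: take the running maximum as the upper bound and the reverse running minimum as the lower bound, so that both bounds are non-decreasing and enclose every observation. This is an illustrative interpretation, not necessarily the paper's exact construction:

```python
import numpy as np

y = np.array([1.0, 2.5, 2.0, 3.5, 3.0, 4.0])     # not monotonically ordered

upper = np.maximum.accumulate(y)                  # non-decreasing upper bound
lower = np.minimum.accumulate(y[::-1])[::-1]      # also non-decreasing

coverage = float(np.mean(upper - lower))          # size of the uncertain region
print(list(zip(lower, upper)), coverage)
```

The gap between the two bounds plays the role of the coverage measure: the wider the band, the more the data deviate from monotonicity.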