826 results for 2D barcode based authentication scheme
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
Generic object recognition is an important function of the human visual system and is highly useful in everyday life. For an artificial vision system it is a hard, complex and challenging task, because instances of the same object category can generate very different images depending on variables such as illumination conditions, the pose of the object, the viewpoint of the camera, partial occlusions, and unrelated background clutter. The purpose of this thesis is to develop a system that is able to classify objects in 2D images based on the context and identify which category the object belongs to. Given an image, the system can classify it and decide the correct category of the object. A further objective of this thesis is to test the performance and precision of different supervised Machine Learning algorithms on this specific task of object image categorization. Through different experiments, the implemented application shows good categorization performance despite the difficulty of the problem. However, this project is open to future improvement; it is possible to implement new algorithms that have not yet been invented or to use other feature extraction techniques to make the system more reliable. The application can be installed in an embedded system and, after training (performed outside the system), it can classify objects in real time. The information provided by a 3D stereo camera, developed within the Department of Computer Engineering of the University of Bologna, can be used to improve the accuracy of the classification task. The idea is to segment a single object in a scene using the depth provided by the stereo camera and in this way make the classification more accurate.
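The abstract does not name a specific feature descriptor or learning library; the following is a minimal, illustrative sketch of the kind of pipeline it describes (feature extraction followed by a supervised classifier), here assuming scikit-image HOG descriptors and a scikit-learn SVM. The function names and data layout are hypothetical.

```python
# Minimal sketch of supervised object-image categorization (illustrative only):
# HOG descriptors + an SVM classifier, assuming scikit-image and scikit-learn.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(images, size=(128, 128)):
    """Resize each grayscale image and describe it with a HOG feature vector."""
    return np.array([
        hog(resize(img, size), orientations=9,
            pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

def train_and_evaluate(images, labels):
    """Train a category classifier and report held-out accuracy."""
    X = extract_features(images)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```

Training would be run offline, as the abstract notes, and only the fitted classifier deployed on the embedded system.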
Abstract:
High-throughput assays, such as the yeast two-hybrid system, have generated a huge amount of protein-protein interaction (PPI) data in the past decade. This tremendously increases the need for reliable methods to systematically and automatically suggest protein functions and the relationships between them. With the available PPI data, it is now possible to study functions and relationships in the context of a large-scale network. To date, several network-based schemes have been provided to effectively annotate protein functions on a large scale. However, due to the inherent noise in high-throughput data generation, new methods and algorithms should be developed to increase the reliability of functional annotations. Previous work on a yeast PPI network (Samanta and Liang, 2003) has shown that the local connection topology, particularly for two proteins sharing an unusually large number of neighbors, can predict functional associations between proteins and hence suggest their functions. One advantage of that work is that the algorithm is not sensitive to noise (false positives) in high-throughput PPI data. In this study, we improved this prediction scheme by developing a new algorithm and new methods, which we applied to a human PPI network to make a genome-wide functional inference. We used the new algorithm to measure and reduce the influence of hub proteins on detecting functionally associated proteins. We used the annotations of the Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) as independent and unbiased benchmarks to evaluate our algorithms and methods within the human PPI network. We showed that, compared with the previous work of Samanta and Liang, the algorithm and methods developed in this study improved the overall quality of functional inferences for human proteins. By applying the algorithms to the human PPI network, we obtained 4,233 significant functional associations among 1,754 proteins. Further comparisons of their KEGG and GO annotations allowed us to assign 466 KEGG pathway annotations to 274 proteins and 123 GO annotations to 114 proteins, with estimated false discovery rates of <21% for KEGG and <30% for GO. We clustered 1,729 proteins by their functional associations and performed pathway analysis to identify several subclusters that are highly enriched in certain signaling pathways. In particular, we performed a detailed analysis of a subcluster enriched in the transforming growth factor β signaling pathway (P < 10^-50), which is important in cell proliferation and tumorigenesis. Analysis of four other subclusters also suggested potential new players in six signaling pathways worthy of further experimental investigation. Our study gives clear insight into the common neighbor-based prediction scheme and provides a reliable method for large-scale functional annotation in this post-genomic era.
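The abstract does not spell out the hub-correction formula; the sketch below illustrates the general common-neighbor idea with an assumed inverse-log-degree weighting (Adamic-Adar style), so that shared hub neighbors contribute less to a pair's association score.

```python
# Illustrative sketch of a common-neighbor scoring scheme on a PPI network.
# The study's exact hub correction is not given here; this example down-weights
# each shared neighbor by the log of its degree (an Adamic-Adar-style weight).
import math
from itertools import combinations

def shared_neighbor_scores(ppi):
    """ppi: dict mapping each protein to the set of its interaction partners.
    Returns {(a, b): score} for every pair sharing at least one neighbor."""
    scores = {}
    for a, b in combinations(sorted(ppi), 2):
        common = ppi[a] & ppi[b]
        if common:
            # Hub neighbors (high degree) contribute less to the association score.
            scores[(a, b)] = sum(1.0 / math.log(len(ppi[n]) + 2) for n in common)
    return scores

# Toy example: proteins sharing several low-degree neighbors score highest.
toy = {"P1": {"X", "Y", "Z"}, "P2": {"X", "Y", "Z"}, "P3": {"X"},
       "X": {"P1", "P2", "P3"}, "Y": {"P1", "P2"}, "Z": {"P1", "P2"}}
print(shared_neighbor_scores(toy)[("P1", "P2")])
```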
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
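As an illustration of this dependency between annotation levels, the sketch below feeds the output of a POS tagger into a word-sense disambiguator; NLTK is assumed here purely for illustration and is not a tool prescribed by this work. A POS error at the lower level (e.g., tagging "bank" as a verb) would directly change the sense selected at the higher level.

```python
# Minimal sketch of a two-level annotation pipeline: POS tagging feeds sense tagging.
# Requires the NLTK data packages: punkt, averaged_perceptron_tagger, wordnet.
import nltk
from nltk.wsd import lesk

def sense_tag(sentence, target_word):
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)                       # morphosyntactic level
    penn = dict(tagged).get(target_word, "NN")          # Penn Treebank tag of the target
    wn_pos = {"N": "n", "V": "v", "J": "a", "R": "r"}.get(penn[0], "n")
    return lesk(tokens, target_word, pos=wn_pos)        # semantic level (Lesk WSD)

print(sense_tag("She sat on the bank of the river", "bank"))
```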
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
We present an overview of the stack-based memory management techniques that we used in our non-deterministic and-parallel Prolog systems: &-Prolog and DASWAM. We believe that the problems associated with non-deterministic and-parallel systems are more general than those encountered in or-parallel and deterministic and-parallel systems, which can be seen as subsets of this more general case. We build on the previously proposed "marker scheme", lifting some of the restrictions associated with the selection of goals while keeping (virtual) memory consumption down. We also review some of the other problems associated with the stack-based management scheme, such as the handling of forward and backward execution, cut, and roll-backs.
Abstract:
Flows of relevance to new-generation aerospace vehicles exist which are weakly dependent on the streamwise direction and strongly dependent on the other two spatial directions, such as the flow around the (flattened) nose of the vehicle and the associated elliptic cone model. Exploiting these characteristics, a parabolic integration of the Navier-Stokes equations is more appropriate than solution of the full equations, resulting in the so-called Parabolic Navier-Stokes (PNS) equations. This approach is not only the best candidate, in terms of computational efficiency and accuracy, for the computation of steady base flows with the appointed properties, but also permits performing instability analysis and laminar-turbulent transition studies a posteriori to the base flow computation. This is to be contrasted with the alternative approach of using order-of-magnitude more expensive spatial Direct Numerical Simulations (DNS) for the description of the transition process. The PNS equations used here have been formulated for an arbitrary coordinate transformation, and the spatial discretization is performed using a novel stable high-order finite-difference-based numerical scheme, ensuring the recovery of highly accurate solutions using modest computing resources. For verification purposes, the boundary layer solution around a circular cone at zero angle of attack is compared in the incompressible limit with theoretical profiles. Also, the recovered shock wave angle at supersonic conditions is compared with theoretical predictions for the same circular-base cone geometry. Finally, the entire flow field, including the shock position and the compressible boundary layer around a 2:1 elliptic cone, is recovered at Mach numbers 3 and 4.
Abstract:
Video-based vehicle detection is the focus of increasing interest due to its potential for collision avoidance. In particular, vehicle verification is especially challenging due to the enormous variability of vehicles in size, color, pose, etc. In this paper, a new approach based on supervised learning using Principal Component Analysis (PCA) is proposed that addresses the main limitations of existing methods. Namely, in contrast to classical approaches which train a single classifier regardless of the relative position of the candidate (thus ignoring valuable pose information), a region-dependent analysis is performed by considering four different areas. In addition, a study of the evolution of the classification performance with the dimensionality of the principal subspace is carried out using PCA features within an SVM-based classification scheme. Indeed, the experiments performed on a publicly available database show that PCA dimensionality requirements are region-dependent. Hence, in this work, the optimal configuration is adapted to each region, yielding very good vehicle verification results.
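The paper's exact region definitions and subspace sizes are not reproduced here; the sketch below only illustrates the stated idea of training a separate PCA + SVM classifier per image region, each with its own principal-subspace dimensionality (the region names and component counts are hypothetical).

```python
# Illustrative sketch of region-dependent PCA + SVM vehicle verification:
# one classifier per candidate region, each with its own PCA dimensionality.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

REGION_COMPONENTS = {"far_left": 40, "left": 60, "right": 60, "far_right": 40}

def train_region_classifiers(data):
    """data: {region: (X, y)} with X holding flattened candidate patches."""
    models = {}
    for region, (X, y) in data.items():
        models[region] = make_pipeline(
            PCA(n_components=REGION_COMPONENTS[region]),  # region-dependent subspace
            SVC(kernel="rbf", gamma="scale"),
        ).fit(X, y)
    return models

def verify(models, region, patch):
    """Return True if the candidate patch in the given region is a vehicle."""
    return bool(models[region].predict(patch.reshape(1, -1))[0])
```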
Abstract:
The difficulties perceived in the orientation-based unified scheme models, when confronted with the observational data, are pointed out. It is shown that in meter-wavelength selected samples, which presumably are largely free of an orientation bias, the observed numbers of quasars versus radio galaxies are not in accordance with the expectations of the unified scheme models. The observed number ratios seem to depend heavily on the redshift, flux density, or radio luminosity levels of the selected sample. This cannot be explained within the simple orientation-based unified scheme with a fixed average value of the half-opening angle (approximately 45 degrees) for the obscuring torus that supposedly surrounds the nuclear optical continuum and the broad-line regions. Further, the large differences seen between radio galaxies and quasars in their size distributions in the luminosity-redshift plane could not be accommodated even if I were to postulate some suitable cosmological evolution of the opening angle of the torus. Some further implications of these observational results for the recently proposed modified versions of the unified scheme model are pointed out.
Abstract:
This paper considers the problem of inducing low-risk individuals of all ages to buy private health insurance in Australia. Our proposed subsidy scheme improves upon the age-based penalty scheme under the current "Australian Lifetime Cover" (LTC) scheme. We generate an alternative subsidy profile that obviates adverse selection in private health insurance markets with mandated, age-based, community rating. Our proposal is novel in that we generate subsidies that are both risk- and age-specific, based upon actual risk probabilities. The approach we take may prove useful in other jurisdictions where the extant law mandates community rating in private health insurance markets. Furthermore, our approach is useful in jurisdictions that seek to maintain private insurance to complement existing universal public systems.
Abstract:
This paper is the initial part of a comprehensive bipartite monograph of palynomorphs (viz., acritarchs, prasinophyte phycomata, and chitinozoans) that are represented profusely in marine lower Palaeozoic strata of the Canning Basin, Western Australia. The prime aim is to establish a palynologically based zonal scheme for the Ordovician sequence as represented in five cored boreholes drilled through the Lower to Middle Ordovician strata of the central-northeastern Canning Basin. These strata embrace the Oepikodus communis through Phragmodus-Plectodina conodont zonal interval and comprise (in ascending order) the Willara, Goldwyer, and Nita formations, of inferred early Arenig to Llanvirn age. All three formations contain moderately diverse and variably preserved palynomorphs. The palynomorph taxa, detailed systematically in the current Part One of this monograph, comprise 66 species of acritarchs and six of prasinophytes. Of these, two species of prasinophytes and 11 of acritarchs are newly established: Cymatiosphaera meandrica and Pterospermella franciniae; Aremoricanium hyalinum, A. solaris, Baltisphaeridium tenuicomatum, Gorgonisphaeridium crebrum, Lophosphaeridium aequalium, L. aspersum, Micrhystridium infrequens, Pylantios hadrus, Sertulidium amplexum, Striatotheca indistincta, and Tribulidium globosum. Pylantios (typified by P. hadrus), Sertulidium (typified by S. amplexum), and Tribulidium (typified by T. globosum) are defined as new acritarch genera. Three new combinations are instituted: Baltisphaeridium pugiatum (PLAYFORD & MARTIN 1984), Polygonium canningianum (COMBAZ & PENIGUEL 1972), and Sacculidium furtivum (PLAYFORD & MARTIN 1984); and Ammonidium macilentum PLAYFORD & MARTIN 1984 and Sacculidium furtivum (PLAYFORD & MARTIN 1984) are emended. An appreciable number of palynomorph species are not formally named owing to the lack of sufficient or adequately preserved specimens; others are compared with, but not positively identified as, previously instituted species. The ensuing Part Two of this study will complete the systematic-descriptive documentation, i.e., the chitinozoans, and evaluate the Canning Basin palynoflora in terms of its chronological and stratigraphic-correlative significance.
Abstract:
Distributed source coding (DSC) has recently been considered an efficient approach to data compression in wireless sensor networks (WSN). Using this coding method, multiple sensor nodes compress their correlated observations without inter-node communication, so energy and bandwidth can be saved. In this paper, we investigate a random-binning-based DSC scheme for remote source estimation in WSN and its performance in terms of estimated signal-to-distortion ratio (SDR). With the introduction of a detailed power consumption model for wireless sensor communications, we quantitatively analyze the overall network energy consumption of the DSC scheme. We further propose a novel energy-aware transmission protocol for the DSC scheme, which flexibly optimizes the DSC performance in terms of either SDR or energy consumption by adapting the source coding and transmission parameters to the network conditions. Simulations validate the energy efficiency of the proposed adaptive transmission protocol. © 2007 IEEE.
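The paper's detailed power-consumption model is not reproduced in this abstract; as a hedged illustration, the sketch below uses the widely cited first-order radio energy model to show how per-node energy depends on the number of coded bits each sensor transmits, which is the quantity that distributed source coding reduces.

```python
# Illustrative first-order radio energy model (not the paper's exact model):
# transmit energy grows with the number of bits and the square of the distance.
E_ELEC = 50e-9      # J per bit for transceiver electronics (typical textbook value)
EPS_AMP = 100e-12   # J per bit per m^2 for the transmit amplifier

def tx_energy(bits, distance_m):
    """Energy to transmit `bits` over `distance_m` (free-space path-loss exponent 2)."""
    return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

def rx_energy(bits):
    """Energy to receive `bits`."""
    return E_ELEC * bits

# Fewer bits after distributed (random-binning) compression => lower energy per reading.
raw_bits, binned_bits, d = 12, 7, 80.0
print(tx_energy(raw_bits, d), tx_energy(binned_bits, d))
```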
Abstract:
In this paper, we study the management and control of service differentiation and guarantees based on the enhanced distributed coordination function (EDCF) in IEEE 802.11e wireless LANs. Backoff-based priority schemes are the major mechanism for Quality of Service (QoS) provisioning in EDCF. However, control and management of the backoff-based priority scheme are still challenging problems. We have analysed the impact of the backoff and Inter-frame Space (IFS) parameters of EDCF on saturation throughput and service differentiation. A centralised QoS management and control scheme is proposed. The configuration of backoff parameters and admission control are studied in the management scheme. The special role of the access point (AP) and the impact of traffic load are also considered in the scheme. The backoff parameters are adaptively re-configured to increase the levels of bandwidth guarantee and fairness in sharing bandwidth. The proposed management scheme is evaluated with OPNET. Simulation results show the effectiveness of the analytical-model-based admission control scheme. ©2005 IEEE.
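As background for the backoff-based priority mechanism discussed above, the sketch below draws per-access-category backoff delays from illustrative AIFS/CWmin/CWmax values; these parameter values are assumptions, not the configurations derived in the paper.

```python
# Minimal sketch of backoff-based service differentiation: each access category (AC)
# waits AIFS slots plus a random backoff drawn from its contention window; smaller
# CWmin/AIFS values give statistical priority on the channel.
import random

EDCF_PARAMS = {            # AC: (AIFS slots, CWmin, CWmax) -- illustrative values only
    "voice": (2, 3, 7),
    "video": (2, 7, 15),
    "best_effort": (3, 15, 1023),
}

def backoff_delay(ac, retries=0):
    """Slots an AC defers before transmitting, after `retries` prior collisions."""
    aifs, cw_min, cw_max = EDCF_PARAMS[ac]
    cw = min((cw_min + 1) * 2 ** retries - 1, cw_max)   # binary exponential backoff
    return aifs + random.randint(0, cw)

# Lower-delay ACs win the channel more often under contention.
print(sorted((backoff_delay(ac), ac) for ac in EDCF_PARAMS))
```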
Abstract:
This dissertation proposes a self-organizing medium access control (MAC) protocol for wireless sensor networks (WSNs). The proposed MAC protocol, space division multiple access (SDMA), relies on sensor node position information and provides sensor nodes with access to the wireless channel based on their spatial locations. SDMA divides a geographical area into space divisions, with a one-to-one map between the space divisions and the time slots. Therefore, the only requirements of the MAC protocol are that each sensor node knows its own position and has prior knowledge of the one-to-one mapping function. The scheme is scalable, self-maintaining, and self-starting. It provides collision-free access to the wireless channel for the sensor nodes and thereby guarantees delay-bounded communication in real time for delay-sensitive applications. This work was divided into two parts. The first part involved the design of the mapping function that maps the space divisions to the time slots. The mapping function is based on a uniform Latin square. A uniform Latin square of order k = m^2 is a k x k square matrix that consists of k symbols from 0 to k-1 such that no symbol appears more than once in any row, in any column, or in any m x m main subsquare. The uniqueness of each symbol in the main subsquares is a very attractive characteristic when applying a uniform Latin square to the time-slot allocation problem in WSNs. The second part of this research involved designing a GPS-free positioning system for position information. The system is called the time and power based localization scheme (TPLS). TPLS is based on time difference of arrival (TDoA) and received signal strength (RSS), using radio frequency and ultrasonic signals to measure and detect the range differences from a sensor node to three anchor nodes. TPLS requires low computation overhead and no time synchronization, as the location estimation algorithm involves only simple algebraic operations.
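A uniform Latin square with the stated property can be constructed directly; the sketch below builds one and maps a node position to a time slot. The cell-to-slot mapping function shown is an assumption for illustration, not the dissertation's exact mapping.

```python
# Illustrative construction of a uniform Latin square of order k = m^2 and a
# hypothetical position-to-slot mapping based on it.
def uniform_latin_square(m):
    """k x k matrix, k = m*m, where every symbol 0..k-1 appears exactly once in
    each row, each column, and each m x m main subsquare (a Sudoku-style grid)."""
    k = m * m
    return [[(m * (r % m) + r // m + c) % k for c in range(k)] for r in range(k)]

def slot_for_position(x, y, m, cell_size):
    """Map a node's (x, y) coordinates to a time slot via its space division."""
    square = uniform_latin_square(m)
    k = m * m
    row, col = int(y // cell_size) % k, int(x // cell_size) % k
    return square[row][col]

square = uniform_latin_square(2)
assert all(sorted(row) == [0, 1, 2, 3] for row in square)          # row property
assert all(sorted(col) == [0, 1, 2, 3] for col in zip(*square))    # column property
print(slot_for_position(35.0, 12.0, m=2, cell_size=10.0))
```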
Abstract:
Gate-tunable two-dimensional (2D) materials-based quantum capacitors (QCs) and van der Waals heterostructures involve tuning transport or optoelectronic characteristics by the field effect. Recent studies have attributed the observed gate-tunable characteristics to the change of the Fermi level in the first 2D layer adjacent to the dielectrics, whereas the penetration of the field effect through the one-molecule-thick material is often ignored or oversimplified. Here, we present a multiscale theoretical approach that combines first-principles electronic structure calculations and the Poisson–Boltzmann equation methods to model penetration of the field effect through graphene in a metal–oxide–graphene–semiconductor (MOGS) QC, including quantifying the degree of “transparency” for graphene two-dimensional electron gas (2DEG) to an electric displacement field. We find that the space charge density in the semiconductor layer can be modulated by gating in a nonlinear manner, forming an accumulation or inversion layer at the semiconductor/graphene interface. The degree of transparency is determined by the combined effect of graphene quantum capacitance and the semiconductor capacitance, which allows us to predict the ranking for a variety of monolayer 2D materials according to their transparency to an electric displacement field as follows: graphene > silicene > germanene > WS2 > WTe2 > WSe2 > MoS2 > phosphorene > MoSe2 > MoTe2, when the majority carrier is electron. Our findings reveal a general picture of operation modes and design rules for the 2D-materials-based QCs.
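As a deliberately simplified illustration of the "transparency" notion above (not the paper's multiscale first-principles plus Poisson-Boltzmann treatment), the sketch below combines the textbook graphene quantum capacitance with an assumed lumped semiconductor capacitance: a large C_Q screens the gate field, while a small C_Q lets it penetrate to the semiconductor.

```python
# Toy lumped-capacitance picture of field-effect penetration through graphene.
# The penetrating fraction C_s / (C_Q + C_s) is an illustrative assumption, not the
# paper's model; the C_Q expression is the standard ideal-graphene formula.
import math

E_CHARGE = 1.602e-19     # C
HBAR = 1.055e-34         # J*s
V_FERMI = 1.0e6          # m/s, graphene Fermi velocity

def graphene_quantum_capacitance(fermi_energy_J):
    """Ideal graphene C_Q per unit area: e^2 * DOS(E_F) = 2 e^2 |E_F| / (pi hbar^2 v_F^2)."""
    return 2 * E_CHARGE**2 * abs(fermi_energy_J) / (math.pi * HBAR**2 * V_FERMI**2)

def transparency(c_quantum, c_semiconductor):
    """Fraction of the displacement field reaching the semiconductor in this toy model."""
    return c_semiconductor / (c_quantum + c_semiconductor)

c_q = graphene_quantum_capacitance(0.1 * E_CHARGE)    # E_F = 0.1 eV -> ~2.3 uF/cm^2
print(c_q, transparency(c_q, c_semiconductor=1e-2))   # capacitances in F/m^2
```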