25 results for Data sources detection


Abstract:

Recent studies of mobile Web trends show the continued explosion of mobile-friendly content. However, the sheer number and heterogeneity of mobile devices pose several challenges for Web programmers, who want content to be adapted and delivered automatically according to the context of each mobile device. Hence, the device detection phase plays an important role in this process. In this chapter, the authors compare the most widely used approaches to mobile device detection. Based on this study, they present an architecture for detecting and delivering uniform m-Learning content to students in a higher education school. The authors focus mainly on the XML device capabilities repository and on the REST API Web Service for dealing with device data. For the former, the authors detail the capabilities schema and present a new caching approach; for the latter, they present an extension of the current API for handling device data. Finally, the authors validate their approach by presenting the overall data and statistics collected through the Google Analytics service, in order to better understand the adherence to the mobile Web interface, its evolution over time, and its main weaknesses.
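
As a rough illustration of the detection flow the chapter describes (a capabilities repository queried per request, with caching to avoid repeated lookups), the sketch below queries a hypothetical REST endpoint keyed by User-Agent; the URL, parameters and response fields are assumptions, not the authors' actual API.

```python
import requests  # third-party: pip install requests

# Hypothetical capabilities endpoint; the real service is not specified here.
CAPABILITIES_URL = "https://example.edu/api/devices"
_cache: dict[str, dict] = {}  # User-Agent -> capabilities, avoids repeat lookups

def device_capabilities(user_agent: str) -> dict:
    if user_agent in _cache:          # cache hit: no HTTP round trip
        return _cache[user_agent]
    resp = requests.get(CAPABILITIES_URL, params={"ua": user_agent}, timeout=5)
    resp.raise_for_status()
    caps = resp.json()                # e.g. screen size, supported markup
    _cache[user_agent] = caps
    return caps
```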

Abstract:

In this paper we describe a low-cost distributed system intended to increase the positioning accuracy of outdoor navigation systems based on the Global Positioning System (GPS). Since the accuracy of absolute GPS positioning is insufficient for many outdoor navigation tasks, another GPS-based methodology, Differential GPS (DGPS), was developed in the nineties. The differential or relative positioning approach is based on the calculation and dissemination of the range errors of the received GPS satellites. GPS/DGPS receivers correlate the broadcast GPS data with the DGPS corrections, granting users increased accuracy. DGPS data can be disseminated using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to provide mobile platforms within our campus with DGPS data for precise outdoor navigation. To achieve this objective, we designed and implemented a three-tier client/server distributed system that first establishes Internet links with remote DGPS sources and then performs campus-wide dissemination of the obtained data. The Internet links are established between data servers connected to remote DGPS sources and the client, which is the data input module of the campus-wide DGPS data provider. The campus DGPS data provider allows the establishment of both Intranet and wireless links within the campus. This distributed system is expected to provide adequate support for accurate outdoor navigation tasks.
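
A minimal sketch of the middle tier just described, assuming a hypothetical remote DGPS data server and a plain byte-stream link (the paper's actual protocol and framing are not specified here): pull correction bytes over a TCP Internet link and rebroadcast them on the campus network over UDP.

```python
import socket

REMOTE = ("dgps-source.example.org", 2101)   # hypothetical remote DGPS server
CAMPUS = ("255.255.255.255", 5005)           # campus-wide UDP broadcast address

def relay_corrections() -> None:
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    out.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    with socket.create_connection(REMOTE) as src:   # Internet link to the source
        while True:
            chunk = src.recv(1024)                  # raw DGPS correction data
            if not chunk:                           # remote source closed the link
                break
            out.sendto(chunk, CAMPUS)               # campus-wide dissemination
```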

Abstract:

To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these disciplines in contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. To that end, the similarity measures were compared on objectively developed datasets built from standardized, pre-defined feature dimensions and values obtainable from the UNESCO Institute for Statistics (UIS). According to the author, the results demonstrate that the Bayesian Model of Generalization is the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In another paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, providing candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.

In the next paper, Manuel Silva, António Lucas Soares and Rute Costa argue that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification.

In another paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, in which they propose to adapt, for subject librarians employed in large multilingual academic institutions, the model used by translators working within European Union institutions. The authors use User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
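
Kano's comparison concerns feature-based similarity measures from the cognitive sciences; as a hedged illustration of what such a measure looks like (not code from any of the papers), the sketch below implements Tversky's ratio model over sets of feature values.

```python
# Tversky's ratio model: similarity grows with shared features and shrinks
# with features distinctive to either concept. Weights and the toy feature
# sets are assumptions for illustration only.
def tversky(a: set, b: set, alpha: float = 0.5, beta: float = 0.5) -> float:
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Two concepts described by pre-defined feature values (toy data):
concept_a = {"primary", "public", "urban"}
concept_b = {"primary", "private", "urban"}
print(tversky(concept_a, concept_b))  # 0.667: mostly overlapping features
```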

Abstract:

The growing use of mobile devices for different purposes is a reality. With these devices, users need to access and use real-time data coming from several sources. A strong trend is the incorporation of these mobile devices into clothing, in so-called wearable devices. According to IMS Research, the market for this type of device will grow from 14 million units registered in the current year (2013) to about 171 million in 2016, a forecast considered conservative by IMS Research analyst Theo Ahadome [15]. Most wearable devices are currently designed for health purposes, such as monitoring glucose levels and heart rate. The goal of this work is to define and implement a wearable device for health applications with a set of features for monitoring the user's vital signs. This base can later be applied in different application scenarios, in which all devices communicate with each other and forward information to wherever it is most relevant. Hardware and software were designed and implemented for building applications capable of monitoring heart rate, body temperature and humidity, fall detection, sleep quality, and emergency calls. This work addresses the different scenarios and applications in which the device can be used, identifying the specific needs of each situation and turning those needs into system features and specifications. The hardware and software platform makes it possible to create an ecosystem of applications, allowing the entire infrastructure of the developed system to be reused in future application scenarios.
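
As a hedged illustration of one of the listed features, fall detection, the sketch below applies a simple threshold to the magnitude of a 3-axis accelerometer sample; the threshold and sample format are assumptions, not the actual parameters of the device.

```python
import math

# Impact spikes sit well above the ~1 g of normal motion; 2.5 g is an
# illustrative threshold, not the device's tuned value.
FALL_THRESHOLD_G = 2.5

def is_fall(ax: float, ay: float, az: float) -> bool:
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)  # total acceleration in g
    return magnitude > FALL_THRESHOLD_G

print(is_fall(0.1, 0.2, 3.1))  # True: sudden impact-like reading
```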

Abstract:

This work falls within the field of fire safety in buildings and consists of a case study of a fire detection and suppression project for a Data Center. Its objectives are a study of the state of the art in automatic fire detection and suppression, the development of a software tool to support the design of gaseous-agent suppression systems, and a study and analysis of fire protection in Data Centers. Finally, a case study was carried out. The concepts of fire are addressed through a theoretical study describing how fire can start and its consequences. The Portuguese regulations on fire safety in buildings (SCIE) are also covered, with particular focus on automatic fire detection systems (SADI) and automatic fire suppression systems (SAEI); national and international standards on the subject are mentioned as well. Because they are central to this work, fire detection systems are covered in depth, describing the characteristics of detection equipment, the most widely used techniques, and the aspects to consider when sizing a SADI. As for suppression agents, the most commonly used ones are discussed, together with their advantages and the classes of fire to which they apply, with particular emphasis on SAEI using inert gases, including how such a system should be sized. Data Centers are also characterized in order to explain their functions, the importance of their existence, and the general aspects of fire protection in these facilities. Finally, a case study was developed: a SADI was designed together with an SAEI using nitrogen as the suppression agent. The choices and the selected systems were duly justified, taking into account the regulations and standards in force.
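
As a hedged aside on how inert-gas systems of this kind are commonly sized (not the sizing procedure of the case study itself), the sketch below applies the free-efflux flooding relation, in which the agent volume per unit of protected volume is ln(100/(100 - C)) at design concentration C, since the room atmosphere vents as agent is injected; temperature and altitude corrections and safety margins are omitted.

```python
import math

# Free-efflux flooding relation for inert-gas total flooding (assumed,
# simplified form; real standards add temperature/altitude correction
# factors and safety margins).
def nitrogen_volume(room_volume_m3: float, design_concentration_pct: float) -> float:
    flooding_factor = math.log(100.0 / (100.0 - design_concentration_pct))
    return room_volume_m3 * flooding_factor

# A 200 m3 server room at a 40% design concentration (illustrative values):
print(round(nitrogen_volume(200.0, 40.0), 1))  # ~102.2 m3 of agent
```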

Abstract:

In recent years, vehicular cloud computing (VCC) has emerged as a new technology used in a wide range of multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines which collect and transfer healthcare data to local or global sites for storage and computation purposes, since vehicles have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. These security breaches can result in disastrous consequences such as loss of life or financial fraud. Therefore, to address these issues, a learning automata-assisted distributive intrusion detection system based on clustering is designed. Although the proposed scheme can be applied in a number of settings, we have taken a multimedia-based healthcare application to illustrate it. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles; they take clustering decisions intelligently and select one of the members of the group as a cluster-head. The cluster-heads then assist in the efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, a standard cryptographic technique is used, in which the automaton learns from the environment and takes adaptive decisions to identify any malicious activity in the network. The stochastic environment in which an automaton performs its actions issues rewards and penalties, and the automaton updates its action probability vector after receiving each reinforcement signal. The proposed scheme was evaluated using extensive simulations in ns-2 with SUMO. The results obtained indicate that the proposed scheme yields a 10% improvement in the detection rate of malicious nodes when compared with existing schemes.
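
The abstract does not give the exact update rule, but a standard choice for learning automata is the linear reward-penalty (L_RP) scheme; the sketch below is a generic textbook version with assumed learning rates, not the parameters of the proposed scheme.

```python
# Linear reward-penalty update of an automaton's action probability vector:
# a reward shifts mass toward the chosen action, a penalty shifts it away.
def update(probs: list[float], chosen: int, rewarded: bool,
           a: float = 0.1, b: float = 0.05) -> list[float]:
    r = len(probs)
    new = probs[:]
    for j in range(r):
        if rewarded:
            new[j] = probs[j] + a * (1 - probs[j]) if j == chosen else (1 - a) * probs[j]
        else:
            new[j] = (1 - b) * probs[j] if j == chosen else b / (r - 1) + (1 - b) * probs[j]
    return new

p = [0.25, 0.25, 0.25, 0.25]        # e.g. four candidate cluster-heads
p = update(p, chosen=2, rewarded=True)
print(p)  # probability mass shifts toward the rewarded action; still sums to 1
```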

Abstract:

The wide use of antibiotics in aquaculture has led to the emergence of resistant microbial species. This should be avoided or minimized by controlling the amount of drug employed in fish farming. For this purpose, the present work proposes paper test strips for the detection and semi-quantitative determination of organic drugs by visual comparison of color changes, in an analytical procedure similar to pH monitoring with universal pH paper. This is done by establishing suitable chemical modifications of the cellulose, giving the paper the ability to react with the organic drug and produce a color change. Quantitative data are also obtained by taking a picture and applying a suitable mathematical treatment to the color coordinates given by the HSL system used by Windows. As proof of concept, this approach was applied to oxytetracycline (OXY), one of the antibiotics frequently used in aquaculture. A bottom-up modification of the paper was established, starting with the reaction of the glucose moieties on the paper with 3-triethoxysilylpropylamine (APTES). The resulting amine layer allowed binding of a metal ion by coordination chemistry, and the metal ion then reacted with the drug to produce a colored compound. The most suitable metals for this modification were selected through bulk studies, and the several stages of the paper modification were optimized to produce an intense color change as a function of the drug concentration. The paper strips were applied to the analysis of spiked environmental water, allowing quantitative determination of OXY concentrations as low as 30 ng/mL. Overall, this work provides a simple method to screen and discriminate tetracycline drugs in aquaculture, and is a promising tool for local, quick and cheap monitoring of drugs.
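
A minimal sketch of the color read-out step, assuming averaged RGB pixels from the strip's reaction zone and a hypothetical calibration line; Python's standard colorsys module supplies the HSL-style conversion.

```python
import colorsys

def strip_hue(pixels: list[tuple[int, int, int]]) -> float:
    """Average the reaction zone's RGB pixels and return the hue in degrees."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / (255 * n)
    g = sum(p[1] for p in pixels) / (255 * n)
    b = sum(p[2] for p in pixels) / (255 * n)
    h, _l, _s = colorsys.rgb_to_hls(r, g, b)  # standard-library conversion
    return h * 360.0

def oxy_ng_per_ml(hue_deg: float, slope: float = -2.0,
                  intercept: float = 450.0) -> float:
    # Hypothetical calibration line fitted from standards; placeholder values.
    return slope * hue_deg + intercept
```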

Abstract:

Prostate Specific Antigen (PSA) is the biomarker of choice for screening prostate cancer throughout the population, with PSA values above 10 ng/mL pointing to a high probability of associated cancer [1]. According to the most recent World Health Organization (WHO) data, prostate cancer is the commonest form of cancer in men in Europe [2]. Early detection of prostate cancer is thus very important and is currently made by screening PSA in men over 45 years old, combined with other alterations in serum and urine parameters. PSA is a glycoprotein with a molecular mass of approximately 32 kDa consisting of one polypeptide chain, which is produced by the secretory epithelium of the human prostate. Currently, the standard methods available for PSA screening are immunoassays such as the Enzyme-Linked Immunosorbent Assay (ELISA). These methods are highly sensitive and specific for the detection of PSA, but they require expensive laboratory facilities and highly qualified personnel. Other highly sensitive and specific methods for the detection of PSA have also become available; these are mostly immunobiosensors [1,3-5], relying on antibodies. Less expensive methods producing quicker responses are thus needed, which may be achieved by synthesizing artificial antibodies by means of molecular imprinting techniques. These should also be coupled to simple and low-cost devices, such as potentiometric ones, an approach that has been proven successful [6]. Potentiometric sensors offer the advantages of selectivity and portability for use at the point of care and have been widely recognized as potential analytical tools in this field. The inherent method is simple, precise, accurate and inexpensive with regard to reagent consumption and equipment. Thus, this work proposes a new plastic antibody for PSA, designed on the surface of graphene layers extracted from graphite. Charged monomers were used to enable oriented tailoring of the PSA rebinding sites; uncharged monomers were used as a control. These materials were used as ionophores in conventional solid-contact graphite electrodes. The results showed that the imprinted materials displayed a selective response to PSA. The electrodes with charged monomers showed a more stable and sensitive response, with an average slope of -44.2 mV/decade and a detection limit of 5.8×10⁻¹¹ mol/L (2 ng/mL). The corresponding non-imprinted sensors showed smaller sensitivity, with average slopes of -24.8 mV/decade. The best sensors were successfully applied to the analysis of serum samples, with percentage recoveries of 106.5% and relative errors of 6.5%.
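
A sketch of how such a potentiometric reading maps to concentration, using the near-Nernstian response E = E0 + S·log10(C) with the reported slope of -44.2 mV/decade; the intercept E0 below is a hypothetical calibration value.

```python
SLOPE_MV = -44.2   # mV per decade, from the charged-monomer sensors above
E0_MV = -300.0     # hypothetical intercept obtained by calibration with standards

def psa_concentration(e_mv: float) -> float:
    """Concentration in mol/L from a measured potential in mV."""
    return 10 ** ((e_mv - E0_MV) / SLOPE_MV)

# A potential 44.2 mV below E0 corresponds to one decade higher concentration:
print(psa_concentration(-344.2) / psa_concentration(-300.0))  # ~10.0
```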

Abstract:

Astringency is an organoleptic property of beverages and food products resulting mainly from the interaction of salivary proteins with dietary polyphenols. It is of great importance to consumers, but the only effective way of measuring it involves trained sensory panellists, providing subjective and expensive responses. Concurrent chemical evaluations try to screen food astringency by means of polyphenol and protein precipitation procedures, but these are far from the real human astringency sensation, where not all polyphenol–protein interactions lead to the occurrence of precipitate. Here, a novel chemical approach that tries to mimic protein–polyphenol interactions in the mouth is presented to evaluate astringency. A protein, acting as a salivary protein, is attached to a solid support to which the polyphenol binds (just as happens when drinking wine), with a subsequent colour alteration that is fully independent of the occurrence of precipitate. Following this simple concept, Bovine Serum Albumin (BSA) was selected as the model salivary protein and used to cover the surface of silica beads. Tannic Acid (TA), employed as the model polyphenol, was allowed to interact with the BSA on the silica support, and its adsorption to the protein was detected by reaction with Fe(III) and subsequent colour development. Quantitative data on TA in the samples were extracted by colorimetric or reflectance studies of the solid materials. The colorimetric analysis was done by taking a regular picture with a digital camera, opening the image file in common software and extracting the colour coordinates from the HSL (Hue, Saturation, Lightness) and RGB (Red, Green, Blue) colour model systems; linear ranges were observed from 10.6 to 106.0 μmol L−1. The reflectance approach was based on the Kubelka–Munk response, showing a linear gain with concentrations from 0.3 to 10.5 μmol L−1. In either of these two approaches, semi-quantitative estimation of TA was enabled by direct eye comparison. The correlation between the levels of adsorbed TA and the astringency of beverages was tested by using the assay to check the astringency of wines and comparing the results with the responses of sensory panellists. The results of the two methods correlated well. The proposed sensor has significant potential as a robust tool for the quantitative/semi-quantitative evaluation of astringency in wine.
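
As a hedged sketch of the reflectance read-out, the Kubelka–Munk function F(R) = (1 - R)²/(2R) transforms diffuse reflectance into a quantity that varies linearly with concentration over the reported 0.3 to 10.5 μmol L−1 range; the calibration coefficients below are hypothetical.

```python
def kubelka_munk(reflectance: float) -> float:
    """F(R) = (1 - R)^2 / (2R) for diffuse reflectance R in (0, 1]."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def tannic_acid_umol_l(reflectance: float, slope: float = 8.0,
                       intercept: float = 0.0) -> float:
    # Hypothetical linear fit of F(R) against TA concentration.
    return slope * kubelka_munk(reflectance) + intercept

print(round(kubelka_munk(0.5), 3))  # 0.25 for 50% reflectance
```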

Abstract:

13th International Conference on Autonomous Robot Systems (Robotica), 2013