914 results for GUI legacy Windows Form web-application


Relevance:

100.00%

Publisher:

Abstract:

In the vector space of algebraic curvature operators we study the reaction ODE associated with the evolution equation of the Riemann curvature operator along the Ricci flow. More precisely, we give a partial classification of the zeros of this ODE up to suitable normalization and analyze the stability of a special class of these zeros. In particular, we show that the ODE is unstable near the curvature operators of the Riemannian product spaces where is an Einstein (locally) symmetric space of compact type and not a spherical space form when .

Relevance:

100.00%

Publisher:

Abstract:

Historical Documents – III. Reviews. Writings by: Danilo H. Di Persia, Edmundo C. Drago, Hugo Luis López, Aldo A. Mariazzi, Juan José Neiff, Santiago R. Olivier. Web pages

Relevance:

100.00%

Publisher:

Abstract:

The proliferation of smartphones and other internet-enabled, sensor-equipped consumer devices enables us to sense and act upon the physical environment in unprecedented ways. This thesis considers Community Sense-and-Response (CSR) systems, a new class of web application for acting on sensory data gathered from participants' personal smart devices. The thesis describes how rare events can be reliably detected using a decentralized anomaly detection architecture that performs client-side anomaly detection and server-side event detection. After analyzing this decentralized anomaly detection approach, the thesis describes how weak but spatially structured events can be detected, despite significant noise, when the events have a sparse representation in an alternative basis. Finally, the thesis describes how the statistical models needed for client-side anomaly detection may be learned efficiently, using limited space, via coresets.

The Caltech Community Seismic Network (CSN) is a prototypical example of a CSR system that harnesses accelerometers in volunteers' smartphones and consumer electronics. Using CSN, this thesis presents the systems and algorithmic techniques to design, build and evaluate a scalable network for real-time awareness of spatial phenomena such as dangerous earthquakes.
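The decentralized split described above (per-device anomaly scoring on the client, quorum-style event detection on the server) can be sketched as follows. This is a minimal illustration; the threshold and quorum values are assumptions for the example, not CSN's actual parameters.

```python
import statistics

def client_anomaly(readings, threshold=3.0):
    """Client-side: flag the latest reading if it deviates strongly from
    this device's own history (a per-device model of 'normal')."""
    mean = statistics.fmean(readings[:-1])
    stdev = statistics.stdev(readings[:-1])
    score = abs(readings[-1] - mean) / stdev if stdev else 0.0
    return score > threshold

def server_event(picks, active_clients, quorum=0.2):
    """Server-side: declare an event only when enough independent clients
    report anomalies in the same time window, suppressing per-device noise."""
    return len(picks) / active_clients >= quorum
```

The design choice this illustrates is that raw sensor data never leaves the device; only sparse anomaly "picks" do, which keeps server load and bandwidth proportional to events rather than to sensors.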

Relevance:

100.00%

Publisher:

Abstract:

This project aims at the design and implementation of a tool for integrating the Internet quality-of-service (QoS) data published by the Spanish regulator. The tool is intended, on the one hand, to unify the different formats in which the QoS data are published and, on the other, to facilitate data preservation, supporting the production of historical series, statistics and reports. The regulator's site only gives access to the data of the last five quarters; previously published data do not remain accessible but are replaced by the most recent, so from the end user's point of view those data are lost. The tool proposed in this work solves this problem, in addition to unifying formats and easing access to the data of interest. The system was designed with up-to-date web-application development technologies, so its power and the possibilities for future extensions are high.
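The format-unification idea could be sketched roughly as follows. The field names (`operador`, `trimestre`, ...) are hypothetical stand-ins for the regulator's real per-quarter formats, and SQLite stands in for whatever store the tool actually uses.

```python
import csv
import io
import sqlite3

def normalise(row):
    """Map one raw record (field names vary per source format) onto a
    single schema. The alternate keys shown are illustrative."""
    return {
        "operator": row.get("operador") or row.get("operator"),
        "quarter": row.get("trimestre") or row.get("quarter"),
        "metric": row.get("parametro") or row.get("metric"),
        "value": float(row.get("valor") or row.get("value")),
    }

def load(raw_csv, conn):
    """Store normalised records so historical series survive after the
    regulator rotates its published quarters."""
    conn.execute("CREATE TABLE IF NOT EXISTS qos "
                 "(operator TEXT, quarter TEXT, metric TEXT, value REAL)")
    for row in csv.DictReader(io.StringIO(raw_csv)):
        r = normalise(row)
        conn.execute("INSERT INTO qos VALUES (?, ?, ?, ?)",
                     (r["operator"], r["quarter"], r["metric"], r["value"]))
```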

Relevance:

100.00%

Publisher:

Abstract:

An experimental investigation of the optical properties of β–gallium oxide has been carried out, covering the wavelength range 220-2500 nm.

The refractive index and birefringence have been determined to about ±1% accuracy over the range 270-2500 nm, by use of a technique based on the occurrence of fringes in the transmission of a thin sample due to multiple internal reflections (i.e., the "channelled spectrum" of the sample).
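The channelled-spectrum method rests on the standard thin-film interference condition; the relations below are textbook optics, stated here as background rather than taken from the thesis itself.

```latex
% Transmission maxima of a plane-parallel slab of thickness d and
% refractive index n at normal incidence (m an integer):
2\, n(\lambda)\, d \;=\; m\,\lambda .
% For two adjacent maxima \lambda_1 > \lambda_2, neglecting dispersion
% between them, m(\lambda_1 - \lambda_2) = \lambda_2, hence
n \;\approx\; \frac{\lambda_1 \lambda_2}{2\, d\,(\lambda_1 - \lambda_2)} .
```

Counting many fringes across the spectrum is what allows the quoted ±1% accuracy despite the uncertainty in any single fringe position.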

The optical absorption coefficient has been determined over the range 220-300 nm, which spans the fundamental absorption edge of β-Ga2O3. Two techniques were used in the absorption coefficient determination: measurement of the transmission of a thin sample, and measurement of the photocurrent from a Schottky barrier formed on the surface of a sample. The absorption coefficient was measured over a range from 10 to greater than 10^5, to an accuracy of better than ±20%. The absorption edge was found to be strongly polarization-dependent.

Detailed analyses are presented of all three experimental techniques used. Experimentally determined values of the optical constants are presented in graphical form.

Relevance:

100.00%

Publisher:

Abstract:

The problem motivating this study is the lack of semantics in Web search mechanisms. To address it, the W3 consortium has been developing technologies aimed at building a Semantic Web, among them domain ontologies. In this context, the general goal of this dissertation is to discuss the possibilities of adding semantics to searches in Web news aggregators. The specific goal is to present an application that uses semi-automatic news classification, combining search technologies from the information-retrieval field with domain ontologies. The proposed system is a web application capable of searching news about a specific domain in information portals. It uses the Google Maps API V1 for the georeferenced location of each news item, whenever that information is available. To show the feasibility of the proposal, an example was developed based on an ontology for the domain of rainfall and its consequences. The results obtained by this new ontology-based feed are stored in a database and made available for querying via the Web. The expectation is that the proposed feed will be more relevant in its results than an ordinary feed. The results obtained by combining technologies sponsored by the W3 consortium (XML, RSS and ontologies) with web-page search tools were satisfactory for the intended purpose. Ontologies prove to be multi-purpose tools, and their analytical value in Web searches can be extended with computational applications suited to each case. As in the example presented in this dissertation, other concepts were attached to the word "rain", concepts present in the consequences rain brings about. This highlighted the link between the rain event and the consequences it causes, something that could only be done through a cut-out of the formal knowledge involved.
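The semi-automatic, ontology-driven classification could look roughly like this toy sketch; the concept names and terms below are invented for illustration and are not the dissertation's rainfall ontology.

```python
# Tiny hand-made "ontology": each concept maps to surface terms that
# signal it in a news item. Purely illustrative vocabulary.
ONTOLOGY = {
    "rain": {"rain", "rainfall", "storm"},
    "consequence": {"flood", "landslide", "blackout"},
}

def classify(text):
    """Return the ontology concepts whose terms appear in the text,
    so a feed can be filtered by domain rather than by keyword alone."""
    words = set(text.lower().split())
    return {concept for concept, terms in ONTOLOGY.items()
            if words & terms}
```

The point of the ontology, as in the dissertation's rain example, is that an item mentioning only "flood" still matches the rain domain through the consequence relation, which plain keyword search would miss.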

Relevance:

100.00%

Publisher:

Abstract:

Knowledge management is a critical issue for next-generation web applications, because the next-generation web is becoming a semantic web, a knowledge-intensive network. XML Topic Map (XTM), a new standard, is emerging in this field as one of the structures for the semantic web; it organizes information in a way that can be optimized for navigation. In this paper, a new set of hyper-graph operations on XTM (HyO-XTM) is proposed to manage distributed knowledge resources. HyO-XTM is based on the XTM hyper-graph model and is readily applied to XTM to simplify the workload of knowledge management. The application of the XTM hyper-graph operations is demonstrated on the knowledge management system of a consulting firm. HyO-XTM shows the potential to lead knowledge management to the next-generation web.
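Viewing a topic map as a hyper-graph, each association becomes a hyper-edge over a set of topics. The following is a minimal illustration of that view with a union-style merge; it is a sketch of the hyper-graph model only, not the HyO-XTM operator set itself.

```python
# A topic map modelled as a set of hyper-edges, where each hyper-edge
# (frozenset) is an association linking several topics at once.
def merge(map_a, map_b):
    """Union of two topic maps: combine the knowledge of two sources."""
    return map_a | map_b

def topics(topic_map):
    """All topics touched by any association in the map."""
    return set().union(*topic_map) if topic_map else set()
```

Because associations are sets rather than pairs, one edge can link an arbitrary number of topics, which is what distinguishes the hyper-graph view from an ordinary graph model of a topic map.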

Relevance:

100.00%

Publisher:

Abstract:

Migrating legacy systems with web services is an effective and economical way of reusing legacy software in an SOA environment. In this paper, we present an approach for migrating a three-tier object-oriented legacy system to an SOA environment. The key issue of the approach is service identification from large numbers of classes. We propose a bottom-up method that models the system with UML and then identifies services from the UML models. This approach can serve as a reference for an automated migration process.
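One plausible reading of bottom-up service identification is to group classes by their mutual dependencies; the sketch below proposes connected components of a class-dependency graph as candidate service boundaries. The class names are hypothetical, and this is only one heuristic, not the paper's exact method.

```python
def candidate_services(dependencies):
    """dependencies: {class_name: {class names it uses}}.
    Returns groups of mutually connected classes as candidate services."""
    graph = {c: set(uses) for c, uses in dependencies.items()}
    for c, uses in dependencies.items():      # make edges undirected
        for u in uses:
            graph.setdefault(u, set()).add(c)
    seen, services = set(), []
    for start in graph:                       # connected components (DFS)
        if start in seen:
            continue
        group, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in group:
                group.add(node)
                stack.extend(graph[node] - group)
        seen |= group
        services.append(group)
    return services
```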

Relevance:

100.00%

Publisher:

Abstract:

The CSR operating-data organization software system is an important component of the CSR project control system. It sits at the top layer of the CSR synchronous control system and is the core system coordinating the whole of CSR operation during beam tuning. It is responsible for organizing and managing the data of the CSR operating equipment and for organizing and scheduling synchronization events, thereby achieving their synchronous control. The system is built on top of the hardware synchronization of the individual subsystems: the data-organization software makes the synchronization-capable hardware of those subsystems run in a coordinated, synchronous way according to the beam-tuning researchers' intentions.

This thesis mainly solves the design and implementation of the CSR operating-data organization software within the CSR control project. Two implementations were produced: the first a local file database with a Windows client, the second a networked Oracle database with a Web client. Both solutions are built on database and network technology, and the real-time behaviour and reliability of the database likewise rest on the network infrastructure. The innovations of this work are as follows: through data pre-calculation, distribution techniques, and synchronization-event organization, the real-time demands placed on the CSR control system, in particular on the high-speed Ethernet, were reduced, and precisely timed synchronous triggering of the control and monitoring hardware was achieved, so that the CSR control system fully meets the real-time requirements of control and monitoring. The reliability of the CSR operating-data organization software is guaranteed by the TCP/IP network protocol and by the reliability of the embedded software in the control hardware.
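A minimal sketch of the pre-calculation idea described above: device settings are computed ahead of time and grouped per synchronization event, so a run-time trigger involves only a table lookup rather than on-line computation over the network. The names and data shapes here are illustrative assumptions, not the actual CSR software design.

```python
def build_event_table(ramp):
    """ramp: [(event_id, {device: setting})] computed off-line.
    Returns {event_id: {device: setting}}, pre-organized so that firing
    a sync event needs no on-line calculation."""
    return {event_id: dict(settings) for event_id, settings in ramp}

def trigger(event_table, event_id):
    """At run time, a sync event is just a lookup into the
    pre-distributed table."""
    return event_table[event_id]
```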

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a remotely operated vehicle (ROV) control system built on industrial Ethernet communication technology and the Windows platform. Applied to a newly developed ROV, the system shows clear advantages over traditional control systems in communication capacity, video transmission, control performance, hardware extensibility, and data storage and display. Pool trials verified the good motion capability and performance of the control system and of the vehicle as a whole.

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Informatics Engineering, Mobile Computing branch

Relevance:

100.00%

Publisher:

Abstract:

Studies of animal movement are rapidly increasing as tracking technologies make it possible to collect more data on a larger variety of species. Comparisons of animal movement across sites, times, or species are key to asking questions about animal adaptation and about responses to climate and land-use change. Thus, great gains can be made by sharing and exchanging animal tracking data. Here we present an animal movement data model, used within the Movebank web application, to describe tracked animals. The model facilitates data comparisons across a broad range of taxa, study designs, and technologies, and is based on the scientific questions that could be addressed with the data.
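In the spirit of the model described above, one location fix per tracked animal might be recorded roughly like this; the field names are illustrative, not Movebank's actual attribute vocabulary.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationEvent:
    """One sensed location of one tracked animal. Separating the animal
    (individual, taxon) from the sensing (timestamp, position, sensor)
    is what lets data from different studies and devices be compared."""
    individual_id: str      # the tracked animal within a study
    taxon: str              # species, enabling cross-taxa comparison
    timestamp: datetime
    latitude: float
    longitude: float
    sensor_type: str        # e.g. "gps", "radio", "argos"
```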

Relevance:

100.00%

Publisher:

Abstract:

Keywords: Internet Traffic, Internet Applications, Internet Attacks, Traffic Profiling, Multi-Scale Analysis

Abstract: Nowadays, the Internet can be seen as an ever-changing platform where new and different types of services and applications are constantly emerging. In fact, many of the existing dominant applications, such as social networks, have appeared recently and been rapidly adopted by the user community. All these new applications required the implementation of novel communication protocols that present different network requirements, according to the service they deploy. All this diversity and novelty has led to an increasing need to accurately profile Internet users, by mapping their traffic to the originating application, in order to improve many network management tasks such as resource optimization, network performance, service personalization and security. However, accurately mapping traffic to its originating application is a difficult task due to the inherent complexity of existing network protocols and to several restrictions that prevent the analysis of the contents of the generated traffic. In fact, many technologies, such as traffic encryption, are widely deployed to assure and protect the confidentiality and integrity of communications over the Internet. On the other hand, many legal constraints also forbid the analysis of clients' traffic in order to protect their confidentiality and privacy. Consequently, novel traffic discrimination methodologies are necessary for accurate traffic classification and user profiling. This thesis proposes several identification methodologies for accurate Internet traffic profiling while coping with the different restrictions mentioned and with the existing encryption techniques. By analyzing the several frequency components present in the captured traffic and inferring the presence of the different network- and user-related events, the proposed approaches are able to create a profile for each of the analyzed Internet applications.

The use of several probabilistic models will allow the accurate association of the analyzed traffic with the corresponding application. Several enhancements will also be proposed in order to allow the identification of hidden illicit patterns and the real-time classification of captured traffic. In addition, a new network management paradigm for wired and wireless networks will be proposed. The analysis of layer-2 traffic metrics and of the different frequency components present in the captured traffic allows efficient user profiling in terms of the web applications used. Finally, some usage scenarios for these methodologies will be presented and discussed.
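The frequency-component idea can be illustrated with a toy example: find the dominant non-DC frequency component of a packet-count time series with a naive DFT. Real traffic profiling as described above would use far richer multi-scale features; this only shows why periodic application behaviour leaves a frequency signature even when payloads are encrypted.

```python
import cmath

def dominant_frequency(counts):
    """Return the index k (cycles per capture window) of the strongest
    non-DC frequency component of a packet-count series, via a naive DFT."""
    n = len(counts)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2 + 1):            # skip k=0, the DC component
        coeff = sum(c * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, c in enumerate(counts))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k
```

A series like `[3, 1, 3, 1, ...]` (a burst every other sample) peaks at the half-sampling frequency regardless of the absolute counts, which is the kind of content-independent signature the thesis exploits.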

Relevance:

100.00%

Publisher:

Abstract:

The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact that was also verified in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the task of text mining that intends to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition, a crucial initial task, with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance results. This approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget.
We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributed to a more accurate update of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
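As a toy illustration of the concept-recognition task (far simpler than the machine-learning approaches of Gimli, Neji, or TrigNER), a dictionary matcher might look like this; the lexicon below is invented for the example.

```python
# Invented mini-lexicon mapping surface terms to concept types; real
# systems use curated knowledge bases and learned models instead.
LEXICON = {"brca1": "GENE", "aspirin": "CHEMICAL", "melanoma": "DISEASE"}

def recognize(sentence):
    """Return (token, concept-type) pairs for known biomedical terms,
    i.e. the 'named entity recognition' step in its crudest form."""
    return [(tok, LEXICON[tok.lower()])
            for tok in sentence.replace(".", "").split()
            if tok.lower() in LEXICON]
```

The limits of this sketch (no normalization to knowledge-base identifiers, no handling of multi-word or ambiguous terms) are precisely the gaps that Totum and Neji, as described above, were built to close.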

Relevance:

100.00%

Publisher:

Abstract:

Report on supervised teaching practice, Master in Informatics Teaching, Universidade de Lisboa, 2014