891 results for Templates client-side
Abstract:
The growing computational power of mobile devices and the increased efficiency of web browsers foster the construction of faster and more fluid web applications through the asynchronous exchange of data rather than complete HTML pages. The OutSystems Platform is a development environment used for the rapid, validated construction of web applications, integrating in a single language the definition of user interfaces, application logic, and the data model. The platform's standard client-server interaction model follows the complete request-response cycle, although asynchronous applications can be implemented explicitly. In this work we present a separation model, based on static analysis over the definition of an application, between the data presented in the pages generated by the platform and the code corresponding to their structure and presentation. This approach enables the automatic and transparent generation of faster and more fluid user interfaces from the model of an OutSystems application. The presented model, together with the static analysis, makes it possible to identify the minimal subset of data that must be transmitted over the network to execute a given piece of functionality on the server, and to isolate code execution on the client. Using this approach yields a very significant reduction in data transmission and, potentially, a reduction of the processing load on the server, since the generation of web pages is delegated to the client, which becomes able to execute code. This model is defined over a language, inspired by that of the OutSystems Platform, from which a code generator is implemented. In this context, a domain-specific language creates an abstraction layer between the definition of an application's model and the corresponding generated code, making the creation of client-side templates and of the code executed on the client and on the server transparent.
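The data/structure split described above can be illustrated with a minimal client-side templating sketch in TypeScript; the /api/orders endpoint and the Order record shape are hypothetical and not part of the OutSystems model:

```typescript
// Minimal sketch of the client-side templating idea: the server returns
// only data (JSON), while the page structure lives in a template shipped
// once to the browser. Endpoint and field names are hypothetical.
interface Order { id: number; customer: string; total: number; }

// A "compiled" template: a pure function from data to markup.
const orderRow = (o: Order): string =>
  `<tr><td>${o.id}</td><td>${o.customer}</td><td>${o.total.toFixed(2)}</td></tr>`;

async function refreshOrders(): Promise<void> {
  // Only the data crosses the network; no full HTML page is re-sent.
  const res = await fetch("/api/orders");
  const orders: Order[] = await res.json();
  const tbody = document.querySelector("#orders tbody");
  if (tbody) tbody.innerHTML = orders.map(orderRow).join("");
}
```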
Abstract:
Service-oriented architectures (SOA) based on Simple Object Access Protocol (SOAP) web services have attracted the attention of enterprises mainly for business-to-business integration and for creating composite applications that execute business processes. An existing problem is the lack of attention paid to non-technical users: to create a composite application that fulfills their needs, it is necessary to involve IT staff. To overcome this issue, enterprises can take advantage of web 2.0, introducing into the development stage technologies like mashups and concepts like user empowerment, collaborative work, and collective intelligence. Some results [3] [13] have shown how web 2.0 concepts can help non-technical users produce relatively complex business processes. However, traditional enterprise requirements go beyond typical web 2.0 solutions in several respects: (1) traditional enterprise systems are based on a heterogeneous stack of technologies that are not directly exploitable from a web-based client (where SOAP web services play an important role); (2) web browsers impose cross-domain security constraints, making it difficult to integrate services from diverse domains. In this paper, a contribution to two web 2.0 research projects [14] [15] partially solves the problems described: it provides a way to invoke cross-domain backend services (based on SOAP technologies) directly, using only client-side languages, without the need for any adaptation layer. © 2010 ACM.
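A minimal sketch of this kind of client-side SOAP invocation follows; the endpoint, namespace, and operation names are hypothetical, and it assumes the cross-domain restriction has been relaxed for the target service (e.g., via CORS), rather than the paper's specific mechanism:

```typescript
// Hedged sketch of calling a SOAP web service directly from the browser.
// The endpoint, namespace, and operation are hypothetical placeholders.
async function callSoap(city: string): Promise<string | null> {
  const envelope = `<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetWeather xmlns="http://example.org/weather"><City>${city}</City></GetWeather>
  </soap:Body>
</soap:Envelope>`;

  const res = await fetch("https://example.org/WeatherService", {
    method: "POST",
    headers: { "Content-Type": "text/xml; charset=utf-8" },
    body: envelope,
  });
  // Parse the XML response entirely on the client side.
  const doc = new DOMParser().parseFromString(await res.text(), "text/xml");
  return doc.getElementsByTagName("GetWeatherResult")[0]?.textContent ?? null;
}
```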
Abstract:
This thesis focuses on the analysis of two complementary aspects of cybercrime (that is, crime perpetrated over the network for financial gain). These two aspects are the infected machines used to obtain economic profit from crime through different actions (such as click fraud, DDoS, and spam) and the server infrastructure used to manage these machines (e.g., C&C, exploit servers, monetization servers, redirectors). The first part investigates the exposure of victim computers to threats. For this analysis we used the metadata contained in WINE-BR, a Symantec dataset. This dataset contains installation metadata of executable files (e.g., file hash, publisher, installation date, file name, file version) from 8.4 million Windows users. We associated this metadata with the vulnerabilities in the National Vulnerability Database (NVD) and the Open Sourced Vulnerability Database (OSVDB) in order to track vulnerability decay over time and observe how quickly users patch their systems and, therefore, their exposure to potential attacks. We identified three factors that can influence the patching activity of victim computers: shared code, the type of user, and exploits. We present two new attacks against shared code and an analysis of how user knowledge and exploit availability influence patching activity. For the 80 vulnerabilities in our database that affect code shared between two applications, the time between patch releases in the different applications is up to 118 days (with a median of 11 days). The second part proposes new active probing techniques to detect and analyze malicious server infrastructures. We leverage active probing techniques to detect malicious servers on the Internet. We begin with the analysis and detection of exploit server operations. We identify as an operation the servers that are controlled by the same people and possibly participate in the same infection campaign. We analyzed a total of 500 exploit servers over a period of one year, where 2/3 of the operations had a single server and 1/3 had multiple servers. We extended the technique for detecting exploit servers to different server types (e.g., C&C, monetization servers, redirectors) and achieved Internet-scale probing for the various categories of malicious servers. These new techniques have been incorporated into a new tool called CyberProbe. To detect these servers we developed a novel technique called Adversarial Fingerprint Generation, a methodology for generating a unique request-response model to identify the server family (that is, the type and the operation the server belongs to). Starting from a malware file and an active server of a given family, CyberProbe can generate a valid fingerprint to detect all live servers of that family. We performed 11 Internet-wide scans, detecting 151 malicious servers; of these 151 servers, 75% were unknown to public databases of malicious servers.
Another issue that arises while detecting malicious servers is that some of these servers may be hidden behind a silent reverse proxy. To identify the prevalence of this network configuration and to improve the capabilities of CyberProbe, we developed RevProbe, a new tool that detects reverse proxies by leveraging leakages in the configuration of web reverse proxies. RevProbe identifies that 16% of the active malicious IP addresses analyzed correspond to reverse proxies, that 92% of them are silent compared with 55% of benign reverse proxies, and that they are mainly used for load balancing across multiple servers.
ABSTRACT: In this dissertation we investigate two fundamental aspects of cybercrime: the infection of machines used to monetize the crime and the malicious server infrastructures that are used to manage the infected machines. In the first part of this dissertation, we analyze how fast software vendors apply patches to secure client applications, identifying shared code as an important factor in patch deployment. Shared code is code present in multiple programs. When a vulnerability affects shared code, the usual linear vulnerability life cycle is no longer adequate to describe how patch deployment takes place. In this work we show the consequences of shared code vulnerabilities and demonstrate two novel attacks that can be used to exploit this condition. In the second part of this dissertation we analyze malicious server infrastructures; our contributions are: a technique to cluster exploit server operations, a tool named CyberProbe to perform large-scale detection of different malicious server categories, and RevProbe, a tool that detects silent reverse proxies. We start by identifying exploit server operations, that is, exploit servers managed by the same people. We investigate a total of 500 exploit servers over a period of more than 13 months. We collected malware from these servers and all the metadata related to the communication with the servers. From this metadata we extracted different features to group together servers managed by the same entity (i.e., an exploit server operation); we discovered that 2/3 of the operations have a single server while 1/3 have multiple servers. Next, we present CyberProbe, a tool that detects different malicious server types through a novel technique called adversarial fingerprint generation (AFG). The idea behind CyberProbe's AFG is to run a piece of malware and observe its network communication towards malicious servers. It then replays this communication to the malicious server and outputs a fingerprint (i.e., a port selection function, a probe generation function, and a signature generation function). Once the fingerprint is generated, CyberProbe scans the Internet with it and finds all the servers of a given family. We performed a total of 11 Internet-wide scans, finding 151 new servers starting from 15 seed servers. This gives CyberProbe a tenfold amplification factor. Moreover, we compared CyberProbe with existing blacklists on the Internet, finding that only 40% of the servers detected by CyberProbe were listed. To enhance the capabilities of CyberProbe we developed RevProbe, a reverse proxy detection tool that can be integrated with CyberProbe to allow precise detection of silent reverse proxies used to hide malicious servers.
RevProbe leverages leakage-based detection techniques to detect whether a malicious server is hidden behind a silent reverse proxy and to reveal the infrastructure of servers behind it. At the core of RevProbe is the analysis of differences in the traffic observed when interacting with a remote server.
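The request/response fingerprint idea behind CyberProbe can be sketched as follows; the probe bytes, port, and signature are invented placeholders for illustration, not fingerprints from the dissertation:

```typescript
// Hedged sketch of a fingerprint: a probe paired with a signature that
// only servers of one family should match.
import * as net from "node:net";

interface Fingerprint {
  port: number;      // port selection
  probe: Buffer;     // probe generation
  signature: RegExp; // signature to classify the reply
}

function matchesFamily(host: string, fp: Fingerprint): Promise<boolean> {
  return new Promise((resolve) => {
    const sock = net.connect(fp.port, host, () => sock.write(fp.probe));
    const chunks: Buffer[] = [];
    sock.setTimeout(3000, () => { sock.destroy(); resolve(false); });
    sock.on("data", (d) => chunks.push(d));
    sock.on("close", () =>
      resolve(fp.signature.test(Buffer.concat(chunks).toString("latin1"))));
    sock.on("error", () => resolve(false));
  });
}

// Example (hypothetical) fingerprint for a C&C family speaking HTTP.
const fp: Fingerprint = {
  port: 8080,
  probe: Buffer.from("GET /gate.php HTTP/1.1\r\nHost: x\r\n\r\n"),
  signature: /X-Cmd-Version: 2\.\d+/,
};
```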
Abstract:
This document describes the first bundle of core WP2 (user data analytics) client-side components, including their specifications, use cases, and working prototypes. Included assets contain a description of their current status and links to their full designs and downloadable versions. This deliverable describes only operational software assets (even though in beta) that are tested and documented. It should be noted, however, that various additional software assets (2.2d Cognitive Capacity Measurement and 2.3a Realtime Emotion Detection) are near completion for inclusion in games during the first pilot round. Those assets are still scheduled for inclusion in the final bundle deliverable D2.2.
Abstract:
We discuss from a practical point of view a number of issues involved in writing distributed Internet and WWW applications using LP/CLP systems. We describe PiLLoW, a public-domain Internet and WWW programming library for LP/CLP systems that we have designed in order to simplify the process of writing such applications. PiLLoW provides facilities for accessing documents and code on the WWW; parsing, manipulating, and generating HTML and XML structured documents and data; producing HTML forms; writing form handlers and CGI scripts; and processing HTML/XML templates. An important contribution of PiLLoW is to model HTML/XML code (and, thus, the content of WWW pages) as terms. The PiLLoW library has been developed in the context of the Ciao Prolog system, but it has been adapted to a number of popular LP/CLP systems, which support most of its functionality. We also describe the use of concurrency and a high-level model of client-server interaction, Ciao Prolog's active modules, in the context of WWW programming. We propose a solution for client-side downloading and execution of Prolog code, using generic browsers. Finally, we also provide an overview of related work on the topic.
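PiLLoW's central idea of modeling HTML as terms can be approximated outside Prolog; the sketch below is a TypeScript analogue in which markup is plain data and the textual form is derived from it (the node shape is an assumption for illustration, not PiLLoW's actual term representation):

```typescript
// Markup as data: pages become values that can be built and inspected
// programmatically before being serialized to text.
type Html =
  | string // text node
  | { tag: string; attrs?: Record<string, string>; body?: Html[] };

function render(h: Html): string {
  if (typeof h === "string") return h;
  const attrs = Object.entries(h.attrs ?? {})
    .map(([k, v]) => ` ${k}="${v}"`).join("");
  const body = (h.body ?? []).map(render).join("");
  return `<${h.tag}${attrs}>${body}</${h.tag}>`;
}

// Because pages are terms, templates are just functions over values:
const page: Html = { tag: "ul", body: [{ tag: "li", body: ["item 1"] }] };
console.log(render(page)); // <ul><li>item 1</li></ul>
```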
Abstract:
This dissertation studies the context-aware application with its proposed algorithms at the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user's context information, registers service providers, derives the mobile user's current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and mobile devices. Context acquisition is centralized at the server to ensure the reusability of context information among mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition with distributed context reasoning is viewed as a better solution overall. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed to take the user context profiles into consideration. By promoting feedback on the dynamics of the system, any prior user selection is saved for further analysis so that it may contribute to the results of a subsequent search. On the basis of these developments at the server side, various solutions are provided at the client side. A proxy software-based component is set up for the purpose of data collection. This research endorses the belief that the proxy at the client side should contain the context reasoning component; the implementation of such a component lends credence to this belief in that context applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from the user's daily activities. To meet the practical demands of a testing environment without the heavy cost of establishing such a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach in a realistic way. The integration of the Yahoo search engine into the context-aware architecture design shows how a context-aware application can meet user demands for tailored services and products in and around the user's environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user's experience through a broad scope of potential applications.
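A client-side context cache of the kind described could, for instance, look like the following sketch; the bounded-size LRU policy with expiry is an assumption for illustration, not necessarily the dissertation's exact scheme:

```typescript
// Hedged sketch: a size-bounded cache with time-to-live, intended to cut
// bandwidth, CPU cycles, and power use on the client device.
class ContextCache<V> {
  private entries = new Map<string, { value: V; expires: number }>();
  constructor(private maxSize = 64, private ttlMs = 60_000) {}

  get(key: string): V | undefined {
    const e = this.entries.get(key);
    if (!e || e.expires < Date.now()) { this.entries.delete(key); return undefined; }
    this.entries.delete(key); // re-insert to mark as recently used
    this.entries.set(key, e);
    return e.value;
  }

  set(key: string, value: V): void {
    if (this.entries.size >= this.maxSize) {
      // Map iterates in insertion order, so the first key is least recent.
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```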
Abstract:
Remote experimentation laboratories are usually tied to proprietary technologies or solutions, which restrict their use to certain platforms and require specific software on the client side. ISEP has a remote experimentation laboratory, based on virtual instrumentation, used to support the teaching of electronics and built on a National Instruments NI ELVIS platform. The platform's control software uses the LabVIEW graphical programming language. This is a tool developed by National Instruments that eases the development of remote experimentation system applications, but it has several limitations, notably the need to install a plug-in on the client side whose availability is limited to certain versions of operating systems and web browsers. Previous experience has shown that these issues limit the number of clients able to access the remote laboratory and that, in some cases, the plug-in's installation and use are not transparent. In this context, the research work consisted of developing a solution that generates interfaces enabling remote control of the implemented system while remaining independent of the platform used by the client.
Abstract:
Users of wireless devices increasingly demand access to multimedia content with specific quality-of-service requirements. Users might tolerate different levels of service, or could be satisfied with different quality combinations. However, multimedia processing introduces heavy resource requirements on the client side. Our work tries to address the growing demand on resources and performance requirements by allowing wireless nodes to cooperate with each other to meet resource allocation requests and handle stringent constraints, opportunistically taking advantage of the local ad-hoc network that is created spontaneously as nodes move into range of each other, forming a temporary coalition for service execution. Coalition formation is necessary when a single node cannot execute a specific service, but it may also be beneficial when groups perform more efficiently than a single node.
Abstract:
Virtual Reality (VR) has grown to become state-of-the-art technology in many business- and consumer-oriented E-Commerce applications. One of the major design challenges of VR environments is the placement of the rendering process. The rendering process converts the abstract description of a scene, as contained in an object database, into an image. This process is usually done at the client side, as in VRML [1], a technology that relies on the client's computational power for smooth rendering. The vision of VR is also strongly connected to the issue of Quality of Service (QoS), as the perceived realism depends on an interactive frame rate ranging from 10 to 30 frames per second (fps), real-time feedback mechanisms, and realistic image quality. These requirements push traditional home computers, or even highly sophisticated graphical workstations, beyond their limits. Our work therefore introduces an approach for a distributed rendering architecture that gracefully balances the workload between the client and a cluster-based server. We believe that a distributed rendering approach as described in this paper has three major benefits: it reduces the client's workload, it decreases network traffic, and it allows the re-use of already rendered scenes.
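The balancing decision can be sketched with a simple frame-rate heuristic; this is an illustrative placeholder under the 10-30 fps interactivity range cited above, not the paper's actual algorithm:

```typescript
// If the client cannot sustain an interactive frame rate, offload rendering
// of the scene to the cluster-based server.
const TARGET_FPS = 10; // lower bound of the interactive range

function measureFps(frameTimesMs: number[]): number {
  const avg = frameTimesMs.reduce((a, b) => a + b, 0) / frameTimesMs.length;
  return 1000 / avg;
}

function chooseRenderer(frameTimesMs: number[]): "client" | "server" {
  if (frameTimesMs.length === 0) return "server"; // no local measurements yet
  // Render locally while the client keeps up; otherwise delegate to the
  // server and pay the network cost instead.
  return measureFps(frameTimesMs) >= TARGET_FPS ? "client" : "server";
}
```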
Abstract:
The goal of the work presented in this paper is to provide mobile platforms within our campus with a GPS-based data service capable of supporting precise outdoor navigation. This can be achieved by providing campus-wide access to real-time Differential GPS (DGPS) data. As a result, we designed and implemented a three-tier distributed system that provides Internet data links between remote DGPS sources and the campus, together with a campus-wide DGPS data dissemination service. The Internet data link service is a two-tier client/server application whose server side is connected to the DGPS station and whose client side is located at the campus. The campus-wide DGPS data provider disseminates the DGPS data received at the campus via the campus Intranet and via a wireless data link. The wireless broadcast is intended for portable receivers equipped with a DGPS wireless interface, and the Intranet link is provided for receivers with a DGPS serial interface. The application is expected to provide adequate support for accurate outdoor campus navigation tasks.
Abstract:
Although the Navigation Satellite Timing and Ranging (NAVSTAR) Global Positioning System (GPS) is, de facto, the standard positioning system used in outdoor navigation, it does not provide, per se, all the features required to perform many outdoor navigational tasks. The accuracy of GPS measurements is the most critical issue. The quest for higher position-reading accuracy led to the development, in the late nineties, of the Differential Global Positioning System (DGPS). The differential GPS method detects the range errors of the GPS satellites received and broadcasts them. DGPS/GPS receivers correlate the DGPS data with the GPS satellite data they are receiving, granting users increased accuracy. DGPS data is broadcast using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to have access, within the ISEP campus, to DGPS correction data. To achieve this objective we designed and implemented a distributed system composed of two interconnected modules: a distributed application responsible for establishing the data link over the Internet between the remote DGPS stations and the campus, and the campus-wide DGPS data server application. The DGPS data Internet link is provided by a two-tier client/server distributed application whose server side is connected to the DGPS station and whose client side is located at the campus. The second unit, the campus DGPS data server application, diffuses the DGPS data received at the campus via the Intranet and via a wireless data link. The wireless broadcast is intended for DGPS/GPS portable receivers equipped with an air interface, and the Intranet link is provided for DGPS/GPS receivers with just an RS232 DGPS data interface. The DGPS data Internet link servers receive the DGPS data from the DGPS base stations and forward it to the DGPS data Internet link client, which in turn outputs the received data to the campus DGPS data server application. The distributed system is expected to provide adequate support for accurate (sub-metric) outdoor campus navigation tasks. This paper describes the overall distributed application in detail.
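The relay behavior at the heart of the campus DGPS data server can be sketched as follows; host names and ports are placeholders, and the correction stream (RTCM messages in real deployments) is treated as an opaque byte stream:

```typescript
// Hedged sketch: receive the correction byte stream from the remote
// Internet link and fan it out to Intranet clients.
import * as net from "node:net";

const intranetClients = new Set<net.Socket>();

// Campus-side server: Intranet DGPS/GPS receivers connect here.
net.createServer((client) => {
  intranetClients.add(client);
  client.on("close", () => intranetClients.delete(client));
}).listen(2101); // placeholder port

// Client side of the Internet data link: connects to the remote DGPS source.
const upstream = net.connect(2101, "dgps.example.org");
upstream.on("data", (correction: Buffer) => {
  // Fan out each chunk of correction data to every connected receiver.
  for (const c of intranetClients) c.write(correction);
});
upstream.on("error", (err) => console.error("upstream link lost:", err.message));
```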
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Nowadays there are multiple multimedia applications on the Internet, and it is common for any website to present information in more than one form beyond text, such as images, audio, video, and animation. With the growing adoption of smartphones and tablets, mobile Internet traffic has been increasing rapidly, as has Internet access through television. Web-based applications gain greater relevance due to the increased sharing and consumption of multimedia content, with or without editing or manipulation, through social networks such as Facebook. This document presents a study of HTML5 alternatives and the implementation of a web-based application within the scope of the Master's in Informatics Engineering, Graphics Systems and Multimedia branch, at the Instituto Superior de Engenharia do Porto (ISEP). The goal of the application is the editing and manipulation of images, both on desktop and on mobile devices, with this processing done exclusively on the client side, that is, in the user's browser. The server is used only to store the application. During the development of the project, a study was carried out of the image editing and manipulation solutions available on the market, with a corresponding comparative analysis, and modern web technologies such as HTML5, CSS3, and JavaScript, which enable the development of the prototype, are presented. Subsequently, the various phases of the development of the prototype are presented in detail, from system analysis to the presentation of the prototype and the technologies used. The results of surveys conducted with a group of people who tested the prototype are also presented. Finally, the implementation is described more thoroughly, difficulties encountered during development are pointed out, and future improvements are indicated.
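A representative example of the kind of browser-only image processing such a prototype performs is a grayscale filter over the HTML5 canvas; the sketch below is a generic illustration and assumes nothing about the prototype's actual code:

```typescript
// All pixel work happens in the browser; the server never sees the image.
function toGrayscale(img: HTMLImageElement): HTMLCanvasElement {
  const canvas = document.createElement("canvas");
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0);

  const data = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const px = data.data;
  for (let i = 0; i < px.length; i += 4) {
    // Standard luma weights for the R, G, and B channels.
    const y = 0.299 * px[i] + 0.587 * px[i + 1] + 0.114 * px[i + 2];
    px[i] = px[i + 1] = px[i + 2] = y; // leave the alpha channel untouched
  }
  ctx.putImageData(data, 0, 0);
  return canvas;
}
```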