869 results for Web Applications Engineering


Relevance: 100.00%

Publisher:

Abstract:

Our modular approach to data hiding is an innovative concept in the data hiding research field. It enables the creation of modular digital watermarking methods that have extendable features and are designed for use in web applications. The methods consist of two types of modules: a basic module and an application-specific module. The basic module mainly provides features connected with the specific image format. As JPEG is a preferred image format on the Internet, we have focused on achieving robust and error-free embedding and retrieval of the embedded data in JPEG images. The application-specific modules are adaptable to the user requirements of the concrete web application. The experimental results of the modular data watermarking are very promising: they indicate excellent image quality, a satisfactory size of the embedded data, and perfect robustness against JPEG transformations with pre-specified compression ratios. ACM Computing Classification System (1998): C.2.0.
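
As a rough illustration of the two-module design described above, the Java sketch below shows how a format-specific basic module and an application-specific module might be composed; all type and method names are hypothetical and do not come from the paper.

// WatermarkModules.java - illustrative sketch only.
public final class WatermarkModules {

    // Format-specific basic module (here: robust embedding/retrieval in JPEG).
    public interface BasicModule {
        byte[] embed(byte[] jpegImage, byte[] payload);
        byte[] retrieve(byte[] jpegImage);
    }

    // Application-specific module, adapted to one concrete web application.
    public interface ApplicationModule {
        byte[] encodePayload();          // e.g. a licence or user identifier
        void interpret(byte[] payload);  // application-specific handling of retrieved data
    }

    // A complete watermarking method is a composition of the two module types.
    public static byte[] watermark(BasicModule basic, ApplicationModule app, byte[] jpeg) {
        return basic.embed(jpeg, app.encodePayload());
    }
}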

Relevance: 100.00%

Publisher:

Abstract:

OWL-S is an application of OWL, the Web Ontology Language, that describes the semantics of Web Services so that their discovery, selection, invocation and composition can be automated. The research literature reports the use of UML diagrams for the automatic generation of Semantic Web Service descriptions in OWL-S. This paper demonstrates a higher level of automation by generating complete complete Web applications from OWL-S descriptions that have themselves been generated from UML. Previously, we proposed an approach for processing OWL-S descriptions in order to produce MVC-based skeletons for Web applications. The OWL-S ontology undergoes a series of transformations in order to generate a Model-View-Controller application implemented by a combination of Java Beans, JSP, and Servlets code, respectively. In this paper, we show in detail the documents produced at each processing step. We highlight the connections between OWL-S specifications and executable code in the various Java dialects and show the Web interfaces that result from this process.
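
To make the target of such a transformation concrete, here is a minimal sketch of the kind of MVC skeleton that a process of this sort might emit; the class names, request parameter and JSP path are hypothetical and only illustrate the Java Beans / JSP / Servlets split.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Model: a Java Bean derived from an OWL-S input/output description.
class BookPriceBean implements java.io.Serializable {
    private String isbn;
    private double price;
    public String getIsbn() { return isbn; }
    public void setIsbn(String isbn) { this.isbn = isbn; }
    public double getPrice() { return price; }
    public void setPrice(double price) { this.price = price; }
}

// Controller: a Servlet skeleton that binds the request to the bean and
// forwards to a View (JSP) for rendering.
public class BookPriceController extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        BookPriceBean bean = new BookPriceBean();
        bean.setIsbn(req.getParameter("isbn"));
        // ... invocation of the underlying Web service would populate the bean ...
        req.setAttribute("result", bean);
        req.getRequestDispatcher("/bookPrice.jsp").forward(req, resp);
    }
}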

Relevance: 100.00%

Publisher:

Abstract:

Web service-based application is an architectural style in which a collection of Web services communicate with each other to execute processes. With the increasing popularity of developing Web service-based applications, and since Web services may change in terms of both functionality and non-functional Quality of Service (QoS), we need mechanisms to monitor, diagnose, and repair the Web services that make up a Web application. This work presents a description of a self-healing architecture that provides these mechanisms. Further contributions of this paper are the use of a proxy server to measure Web service QoS values and the employment of strategies to recover from the effects of misbehaved Web services. © 2008 IEEE.
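
As a minimal sketch of the QoS-measurement idea, assuming response time as the measured attribute, the Java fragment below times a forwarded call; the class and the way the call is forwarded are illustrative, not the paper's actual proxy.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TimingProxy {
    private final HttpClient client = HttpClient.newHttpClient();

    // Forwards a SOAP/HTTP request and records the observed response time.
    public HttpResponse<String> invoke(String endpoint, String soapBody) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                .header("Content-Type", "text/xml; charset=utf-8")
                .POST(HttpRequest.BodyPublishers.ofString(soapBody))
                .build();
        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // A monitoring component could compare elapsedMs against the agreed QoS
        // and trigger a recovery strategy (e.g. switch to a substitute service).
        System.out.println(endpoint + " answered in " + elapsedMs + " ms");
        return response;
    }
}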

Relevance: 100.00%

Publisher:

Abstract:

Current model-driven Web Engineering approaches (such as OO-H, UWE or WebML) provide a set of methods and supporting tools for the systematic design and development of Web applications. Each method addresses different concerns using separate models (content, navigation, presentation, business logic, etc.) and provides model compilers that produce most of the logic and Web pages of the application from these models. However, these proposals also have some limitations, especially when it comes to exchanging models or representing further modeling concerns, such as architectural styles, technology independence, or distribution. A possible solution to these issues is to make model-driven Web Engineering proposals interoperate, so that they can complement each other and exchange models between the different tools. MDWEnet is a recent initiative started by a small group of researchers working on model-driven Web Engineering (MDWE). Its goal is to improve current practices and tools for the model-driven development of Web applications towards better interoperability. The proposal is based on the strengths of the current model-driven Web Engineering methods and on the existing experience and knowledge in the field. This paper presents the background, motivation, scope, and objectives of MDWEnet. Furthermore, it reports on MDWEnet's results and achievements so far and its plan of future actions.

Relevance: 100.00%

Publisher:

Abstract:

Business Intelligence (BI) applications have gradually been ported to the Web in search of a global platform for the consumption and publication of data and services. On the Internet, apart from techniques for data/knowledge management, BI Web applications need interfaces with a high level of interoperability (similar to traditional desktop interfaces) for the visualisation of data/knowledge. In some cases, this has been provided by Rich Internet Applications (RIA). The development of these BI RIAs is traditionally performed manually and, given the complexity of the final application, is a process prone to errors. The application of model-driven engineering techniques can reduce the cost of development and maintenance (in terms of time and resources) of these applications, as has been demonstrated for other types of Web applications. In the light of these issues, this paper introduces the Sm4RIA-B methodology, i.e., a model-driven methodology for the development of RIAs as BI Web applications. In order to overcome the limitations of RIA regarding knowledge management on the Web, this paper also presents a new RIA platform for BI, called RI@BI, which extends the functionality of traditional RIAs by means of Semantic Web technologies and B2B techniques. Finally, we evaluate the whole approach on a case study: the development of a social network site for an enterprise project manager.

Relevance: 100.00%

Publisher:

Abstract:

Spatial data is now used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large size and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that the server-side processing cost and network traffic can be reduced when the level of resolution required by an application is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine, so the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
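
A minimal sketch of the server-side selection idea follows, assuming pre-built geometry levels stored in one table; the table name, the scale-to-level mapping and the PostGIS-style spatial predicate are assumptions for illustration only, not the paper's actual scheme.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MultiResolutionDao {

    // Maps a requested map scale to one of the pre-built resolution levels.
    static int levelForScale(double scaleDenominator) {
        if (scaleDenominator > 500_000) return 0;   // coarsest geometry
        if (scaleDenominator > 50_000)  return 1;
        return 2;                                   // full resolution
    }

    // Fetches geometries only at the resolution the application actually needs,
    // reducing both server-side processing and network traffic.
    public ResultSet fetch(Connection conn, String bboxWkt, double scale) throws Exception {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT id, geom FROM roads_multires WHERE level = ? AND geom && ST_GeomFromText(?)");
        ps.setInt(1, levelForScale(scale));
        ps.setString(2, bboxWkt);
        return ps.executeQuery();
    }
}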

Relevance: 100.00%

Publisher:

Abstract:

The complexity of systems is considered an obstacle to the progress of the IT industry. Autonomic computing is presented as an alternative for coping with this growing complexity. It is a holistic approach, in which systems are able to configure, heal, optimize, and protect themselves. Web-based applications are an example of systems where complexity is high. The number of components, their interoperability, and workload variations are factors that may lead to performance failures or unavailability scenarios. The occurrence of these scenarios affects the revenue and reputation of businesses that rely on these types of applications. In this article, we present a self-healing framework for Web-based applications (SHõWA). SHõWA is composed of several modules, which monitor the application, analyze the data to detect and pinpoint anomalies, and execute recovery actions autonomously. The monitoring is done by a small aspect-oriented programming agent. This agent does not require changes to the application source code and includes adaptive and selective algorithms to regulate the level of monitoring. The anomalies are detected and pinpointed by means of statistical correlation. The data analysis detects changes in the server response time and determines whether those changes are correlated with the workload or are due to a performance anomaly. In the presence of performance anomalies, the data analysis pinpoints the anomaly, and SHõWA then executes a recovery procedure. We also present a study about the detection and localization of anomalies, the accuracy of the data analysis, and the performance impact induced by SHõWA. Two benchmarking applications, exercised through dynamic workloads, and different types of anomalies were considered in the study. The results reveal that (1) SHõWA is able to detect and pinpoint anomalies while the number of end users affected is still low; (2) SHõWA was able to detect anomalies without raising any false alarm; and (3) SHõWA does not induce a significant performance overhead (throughput was affected by less than 1%, and the response time delay was no more than 2 milliseconds).
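
A minimal sketch of the workload/response-time correlation idea is shown below: if response time is rising but no longer tracks the workload, the slowdown is treated as a performance anomaly rather than a load effect. The use of Pearson's coefficient and the thresholds are illustrative assumptions, not SHõWA's exact analysis.

public class CorrelationCheck {

    // Pearson correlation coefficient between two equally sized series.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i]; sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double norm = Math.sqrt((sxx - sx * sx / n) * (syy - sy * sy / n));
        return norm == 0 ? 0 : cov / norm;
    }

    // Flags an anomaly when response time grows while decoupled from workload.
    static boolean isAnomaly(double[] requestsPerInterval, double[] avgResponseTimeMs) {
        double r = pearson(requestsPerInterval, avgResponseTimeMs);
        double last = avgResponseTimeMs[avgResponseTimeMs.length - 1];
        double first = avgResponseTimeMs[0];
        return last > 1.5 * first && r < 0.5;   // illustrative thresholds
    }
}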

Relevance: 100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 100.00%

Publisher:

Abstract:

Study carried out during a stay at the Politecnico di Milano, Italy, between January and June 2006. One of the main objectives of Software Engineering is to automate the software development process as much as possible, reducing costs by automatically generating the software from its specification. To achieve this it is necessary, among other things, to solve the problem of efficient constraint checking, since constraints are a fundamental part of the software specification. This is precisely the scope of a thesis in progress, which will present a method that any code-generation tool can integrate in order to obtain an efficient implementation of integrity constraints. In the current phase of the project, work has focused on validating the method of the thesis, optimising it for the specific case of web applications, and extending it so that it can also handle workflow-based applications. Regarding the optimisation of the method for web applications, a set of parameters has been defined that allows the implementation of the method to be configured according to the specific performance needs of each particular web application. Regarding workflows (which are increasingly popular and are used as a high-level definition of the applications to be developed), we have studied which types of constraints they involve and how the method of the thesis can then be applied to those constraints in order to generate workflow-based applications efficiently as well.

Relevance: 100.00%

Publisher:

Abstract:

Web application performance testing is an emerging and important field of software engineering. As web applications become more commonplace and complex, the need for performance testing will only increase. This paper discusses common concepts, practices and tools that lie at the heart of web application performance testing. A pragmatic, hands-on approach is assumed where applicable; real-life examples of test tooling, execution and analysis are presented right next to the underpinning theory. At the client-side, web application performance is primarily driven by the amount of data transmitted over the wire. At the server-side, selection of programming language and platform, implementation complexity and configuration are the primary contributors to web application performance. Web application performance testing is an activity that requires delicate coordination between project stakeholders, developers, system administrators and testers in order to produce reliable and useful results. Proper test definition, execution, reporting and repeatable test results are of utmost importance. Open-source performance analysis tools such as Apache JMeter, Firebug and YSlow can be used to realise effective web application performance tests. A sample case study using these tools is presented in this paper. The sample application was found to perform poorly even under the moderate load incurred by the sample tests.
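
To ground the tooling discussion, the sketch below is a tiny, self-contained stand-in for what a tool such as JMeter automates: several concurrent virtual users requesting one URL and reporting response times. The target URL, user count and request count are placeholders, and a real test plan would add ramp-up, think times and richer reporting.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        String url = "http://localhost:8080/app/home";   // placeholder target
        int users = 20, requestsPerUser = 50;
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<Long>> samples = new ArrayList<>();
        for (int u = 0; u < users; u++) {
            for (int i = 0; i < requestsPerUser; i++) {
                samples.add(pool.submit(() -> {
                    long t0 = System.nanoTime();
                    client.send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
                                HttpResponse.BodyHandlers.discarding());
                    return (System.nanoTime() - t0) / 1_000_000;  // response time in ms
                }));
            }
        }
        long total = 0, max = 0;
        for (Future<Long> f : samples) { long ms = f.get(); total += ms; max = Math.max(max, ms); }
        pool.shutdown();
        System.out.printf("avg=%d ms, max=%d ms over %d samples%n",
                total / samples.size(), max, samples.size());
    }
}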

Relevance: 100.00%

Publisher:

Abstract:

With the growth of new technologies, the use of online tools has become part of everyday life. This has a particular impact on researchers, as the data obtained from various experiments needs to be analyzed, and knowledge of programming has become mandatory even for pure biologists. Hence, VTT developed a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data analysis functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, which employs R functions to produce the results. REX provides an interactive application in which biologists can directly enter values and run the required analysis with a single click. The program processes the given data in the background and prints results rapidly. Due to the growth of data and the load on the server, the interface has developed problems concerning time consumption, a poor GUI, data storage, security, a minimal interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved to make REX a better application for the future. The old REX was developed using Python Django; for the new version, Vaadin has been adopted. Vaadin is a Java framework for developing web applications, and programming with it is very similar to Java, with new rich components. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionality was selected, including IST bulk plotting and image segmentation, and implemented using Vaadin. I wrote 662 lines of code, with Vaadin as the front-end handler while the R language was used for back-end data retrieval, computation and plotting. The application is optimized to allow further functionality to be migrated with ease from the old REX. Future development will focus on including high-throughput screening functions along with gene expression database handling.
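
A minimal sketch of the front-end/back-end split described above follows: plain Java (standing in for the Vaadin layer, which would call this from a click listener) hands user input to an R script and collects its output. The script path and argument format are hypothetical, not REX's actual interface.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.stream.Collectors;

public class RBridge {

    // Runs an R script with one argument and returns its standard output.
    public static String runR(String scriptPath, String inputCsv) throws Exception {
        Process p = new ProcessBuilder("Rscript", scriptPath, inputCsv)
                .redirectErrorStream(true)
                .start();
        try (BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String result = out.lines().collect(Collectors.joining("\n"));
            p.waitFor();
            return result;   // e.g. the path of a generated plot, shown back in the UI
        }
    }
}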

Relevance: 100.00%

Publisher:

Abstract:

Web service-based application is an architectural style in which a collection of Web services communicate with each other to execute processes. With the increasing popularity of Web service-based applications, and since the messages exchanged inside these applications can be complex, we need tools to simplify the understanding of the interrelationships among Web services. This work presents a description of a graphical representation of Web service-based applications and of the mechanisms inserted between Web service requesters and providers to capture the information needed to represent an application. The major contribution of this paper is to discuss and use HTTP and SOAP information to produce a graphical representation, similar to a UML sequence diagram, of Web service-based applications.
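
As a sketch of the interception idea, assuming the capture point can sit in the provider's servlet container, the filter below records one requester-to-provider interaction per call; it is illustrative only, not the mechanism used in the paper.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class SoapTraceFilter implements Filter {
    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        String requester = http.getRemoteHost();           // caller
        String provider = http.getRequestURI();            // invoked service endpoint
        String operation = http.getHeader("SOAPAction");   // SOAP operation, if present
        long start = System.currentTimeMillis();
        chain.doFilter(req, resp);
        // One captured interaction: requester -> provider : operation (duration),
        // i.e. one arrow of a sequence-diagram-like picture.
        System.out.printf("%s -> %s : %s (%d ms)%n",
                requester, provider, operation, System.currentTimeMillis() - start);
    }
}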

Relevance: 100.00%

Publisher:

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance: 100.00%

Publisher:

Abstract:

Ubiquitous Computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, and using any device. In this scenario, new challenges arise for the practice of software development: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wide variety of contextual usage requirements and hardware aspects. In particular, due to its interactive nature, the interface content of Web applications must adapt to a large diversity of devices and contexts. In order to overcome these obstacles, this work introduces an innovative methodology for the content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation, the implementation of static Web interfaces, with dynamic adaptation, the alteration of those static interfaces at execution time so as to adapt to different contexts of use. Being hybrid, our methodology benefits from the advantages of both adaptation strategies, static and dynamic. In this line, we designed and implemented UbiCon, a framework over which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces, and to faster and less costly software development. We believe that the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and in academia.
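
As a loose illustration of the hybrid idea, assuming the context is derived from the User-Agent header, the sketch below shows a server-side entry point that statically selects a pre-built interface variant and then exposes the detected context so the delivered page can keep adapting dynamically at execution time; the class name and view paths are hypothetical, not part of UbiCon.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AdaptiveEntryPoint extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String userAgent = req.getHeader("User-Agent");
        boolean mobile = userAgent != null && userAgent.toLowerCase().contains("mobile");

        // Static adaptation: pick the pre-built interface variant for this device class.
        String view = mobile ? "/ui/mobile/index.jsp" : "/ui/desktop/index.jsp";

        // Dynamic adaptation hook: expose the detected context so the page can keep
        // adjusting itself at execution time (e.g. on orientation or network changes).
        req.setAttribute("context", mobile ? "mobile" : "desktop");
        req.getRequestDispatcher(view).forward(req, resp);
    }
}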