977 results for GUI legacy Windows Form web-application
Abstract:
Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May 2013.
Abstract:
The paper describes three software packages, the main components of a software system for processing and web presentation of Bulgarian language resources: parallel corpora and bilingual dictionaries. The author briefly presents the current versions of the core components "Dictionary" and "Corpus", as well as the recently developed component "Connection" that links "Dictionary" and "Corpus". The components' main functionalities are described as well, and some examples of the use of the system's web applications are included.
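As an illustration of the role a linking component like "Connection" can play, here is a minimal sketch in Python; the data structures, class names and the lookup logic are all hypothetical, since the paper does not publish code.

```python
# A minimal sketch, assuming hypothetical data structures: the actual
# "Dictionary", "Corpus" and "Connection" components are not published here.
from dataclasses import dataclass, field

@dataclass
class Dictionary:
    entries: dict[str, list[str]] = field(default_factory=dict)  # headword -> translations

@dataclass
class Corpus:
    pairs: list[tuple[str, str]] = field(default_factory=list)  # aligned (Bulgarian, English) sentences

class Connection:
    """Links a dictionary entry to parallel-corpus examples containing the headword."""
    def __init__(self, dictionary: Dictionary, corpus: Corpus):
        self.dictionary = dictionary
        self.corpus = corpus

    def examples(self, headword: str) -> list[tuple[str, str]]:
        if headword not in self.dictionary.entries:
            return []
        return [(src, tgt) for src, tgt in self.corpus.pairs if headword in src.split()]

d = Dictionary(entries={"куче": ["dog"]})
c = Corpus(pairs=[("куче лае", "a dog barks")])
print(Connection(d, c).examples("куче"))  # [('куче лае', 'a dog barks')]
```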
Abstract:
Background: Statistical analysis of DNA microarray data provides a valuable diagnostic tool for the investigation of genetic components of diseases. To take advantage of the multitude of available data sets and analysis methods, it is desirable to combine both different algorithms and data from different studies. Applying ensemble learning, consensus clustering and cross-study normalization methods for this purpose in an almost fully automated process and linking different analysis modules together under a single interface would simplify many microarray analysis tasks. Results: We present ArrayMining.net, a web-application for microarray analysis that provides easy access to a wide choice of feature selection, clustering, prediction, gene set analysis and cross-study normalization methods. In contrast to other microarray-related web-tools, multiple algorithms and data sets for an analysis task can be combined using ensemble feature selection, ensemble prediction, consensus clustering and cross-platform data integration. By interlinking different analysis tools in a modular fashion, new exploratory routes become available, e.g. ensemble sample classification using features obtained from a gene set analysis and data from multiple studies. The analysis is further simplified by automatic parameter selection mechanisms and linkage to web tools and databases for functional annotation and literature mining. Conclusion: ArrayMining.net is a free web-application for microarray analysis combining a broad choice of algorithms based on ensemble and consensus methods, using automatic parameter selection and integration with annotation databases.
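To make the ensemble idea concrete, here is a minimal sketch of ensemble feature selection by rank aggregation, in the spirit of what ArrayMining.net offers; the particular selectors and the averaging scheme are illustrative assumptions, not the service's actual implementation.

```python
# A minimal sketch of ensemble feature selection by rank aggregation:
# score features with several selectors, rank per selector, average ranks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif

X, y = make_classification(n_samples=100, n_features=50, random_state=0)

scores = [
    f_classif(X, y)[0],                        # ANOVA F-statistic
    mutual_info_classif(X, y, random_state=0), # mutual information
    RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
]

# Convert each score vector to ranks (0 = best) and average across selectors.
ranks = np.mean([(-s).argsort().argsort() for s in scores], axis=0)
top10 = ranks.argsort()[:10]  # consensus top 10 features
print(top10)
```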
Abstract:
This thesis project describes the phases that led to the creation of a web application for automating advertising campaigns through Google Shopping. Starting from an analysis of the state of the art of web advertising, the design and implementation phases are then discussed. Finally, the results obtained are evaluated and possible future developments are discussed.
Abstract:
IT systems play an important role in an organization's business. Because an organization's business requirements and strategy change with the surrounding world, the system architecture must adapt to the prevailing situation and to possible changes in the short and long term. The architecture of a modern web application adapts to the challenges of the organization's business. Windows applications in particular become an administrative problem in an organization, because their maintenance ties up personnel resources and their context of use is limited. For this reason, organizations have begun to look for solutions for replacing Windows applications with web applications. A cost-effective solution is to modernize the user interface of a Windows application into a web application. The goal of this Master's thesis was to produce a reference architecture for Logica Suomi Oy for modernizing the user interface of a Windows application into a web application. The work was carried out in a Proof of Concept project in which Logica's administrator application was modernized. The purpose of the work was to identify widely used architectural patterns and methods that enable the implementation of such a modernization, and to identify methods and software that enable the development and implementation of a cost-effective, high-quality web application. A secondary goal was to produce the enterprise architecture of the administrator application to be modernized. The result of the work was a reference architecture that can be used in software development projects, customer documentation, sales and marketing. The reference architecture presents modern web technologies with which it is possible to implement a web application whose user experience matches a Windows application. In addition, the enterprise architecture of the administrator application was produced, its most important results being the target state of the modernization and the application architecture. The most important follow-up steps are the preparation of a modernization framework based on the reference architecture, and the definition, design and implementation of the metrics used to evaluate a modernization project. With relevant metrics it can be determined whether the modernized application meets the organization's business requirements and strategy.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
This report describes the development of a useful tool that lets the user visualize, in Google Maps, the positioning data captured during a GPS session. In this project, we designed a web application that collects the data entered by the user through a form. Once these data are stored on the server, our tool runs the application in charge of computing the positions. This is a script developed in MATLAB, which interprets the data supplied by the user and computes the coordinates captured by the GPS receiver. Once computed, the software stores them on the server in an .xml file, which Google Maps will later interpret through its API. In this way, the user obtains a visual rendering of whichever GPS session they choose to load, without needing any specific software to interpret and process the captured data.
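As a sketch of the server-side hand-off described above, the following Python snippet (rather than the thesis's MATLAB) writes computed GPS fixes to an .xml file that a Google Maps front end could fetch and plot; the element and attribute names are assumptions, not the project's actual schema.

```python
# Write (lat, lng) fixes to an XML file for a map client to consume.
import xml.etree.ElementTree as ET

fixes = [  # (latitude, longitude) pairs computed from the GPS session
    (41.3851, 2.1734),
    (41.3870, 2.1700),
]

root = ET.Element("markers")
for lat, lng in fixes:
    ET.SubElement(root, "marker", lat=f"{lat:.6f}", lng=f"{lng:.6f}")

ET.ElementTree(root).write("session.xml", encoding="utf-8", xml_declaration=True)
# A JavaScript client can then request session.xml, parse the <marker>
# elements, and add one map marker per fix via the Google Maps API.
```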
Abstract:
With the growth of new technologies, using online tools has become part of everyday life. This has a great impact on researchers, as the data obtained from various experiments need to be analyzed, and knowledge of programming has become mandatory even for pure biologists. Hence, VTT came up with a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, which employs R functions to produce results. REX gives biologists an interactive application in which they can directly enter values and run the required analysis with a single click. The program processes the given data in the background and returns results rapidly. Due to the growth of data and the load on the server, the interface had developed problems concerning time consumption, a poor GUI, data storage issues, security, a minimally interactive user experience, and crashes with large amounts of data. This thesis covers the methods by which these problems were resolved to make REX a better application for the future. The old REX was developed using Python Django; it has now been reimplemented with Vaadin, a Java framework for developing web applications whose programming model is extremely similar to plain Java, with rich new components. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX's functionality was selected, including IST bulk plotting and image segmentation, and implemented using Vaadin. I wrote 662 lines of code, with Vaadin as the front-end handler, while the R language was used for back-end data retrieval, computation and plotting. The application is structured to allow further functionality to be migrated with ease from the old REX. Future development will focus on adding high-throughput screening functions along with gene expression database handling.
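The following is a minimal sketch of the front-end/back-end split described above: the UI layer collects parameters and delegates the computation to R. REX itself uses a Vaadin (Java) front end; this Python version, and the script name rex_plot.R, are illustrative assumptions only.

```python
# Delegate a plotting task to R via Rscript and return the rendered PNG.
import subprocess
import tempfile
from pathlib import Path

def run_bulk_plot(input_csv: str) -> bytes:
    """Invoke an R script that reads a CSV and writes a plot, returning the PNG bytes."""
    out = Path(tempfile.mkdtemp()) / "plot.png"
    subprocess.run(
        ["Rscript", "rex_plot.R", input_csv, str(out)],  # assumed script name
        check=True,           # raise if the R script fails
        capture_output=True,  # keep R's console output off the server log
    )
    return out.read_bytes()
```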
Abstract:
In today's society, direct marketing is becoming more and more important. Some of this direct marketing is done through e-mail, in which companies see an easy way to advertise themselves. I did this thesis work at WebDoc Systems. They have a product that creates web documents directly in the browser, also called a CMS. The CMS has a module for sending mass e-mail, but this module did not function properly, and WebDoc Systems' customers were dissatisfied with that part of the product. The problems with the module were that it sometimes failed to send the e-mail, and that it was not possible to obtain any follow-up information about a mailing. The goal of this work was to develop a web service that could easily send e-mail to many recipients and just as easily show statistics on how a mailing has gone. The first step was a literature review, to get a good picture of the available programming platforms and to be able to design a good application infrastructure. The next step was to implement this design and improve it over time using an iterative development methodology. The result was an application infrastructure consisting of three main parts and a plugin interface. The parts implemented were a web service application, a web application, and a Windows service application. The three elements cooperate with each other, sharing a database and a set of plugins.
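As a sketch of the core task the web service performs, the snippet below sends one message to many recipients while recording per-recipient status for the follow-up statistics; the SMTP host, credentials and the in-memory results store are illustrative assumptions, not the thesis's actual code.

```python
# Send a message to many recipients, recording per-recipient delivery status.
import smtplib
from email.message import EmailMessage

def send_bulk(subject: str, body: str, sender: str, recipients: list[str]) -> dict[str, str]:
    results = {}
    with smtplib.SMTP("smtp.example.com", 587) as smtp:  # assumed host
        smtp.starttls()
        smtp.login("user", "password")  # assumed credentials
        for rcpt in recipients:
            msg = EmailMessage()
            msg["From"], msg["To"], msg["Subject"] = sender, rcpt, subject
            msg.set_content(body)
            try:
                smtp.send_message(msg)
                results[rcpt] = "sent"
            except smtplib.SMTPException as exc:
                results[rcpt] = f"failed: {exc}"
    return results  # persisted to the shared database in the real system
```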
Abstract:
In recent years there has been exponential growth in the offering of web-enabled distance courses and in the number of enrolments in corporate and higher education using this modality. However, the lack of efficient mechanisms that assure user authentication in this sort of environment, at system login as well as throughout the session, has been pointed out as a serious deficiency. Some studies have been conducted on possible biometric applications for web authentication; however, password-based authentication still prevails. With the popularization of biometric-enabled devices and the resulting fall in prices for collecting biometric traits, biometrics is being reconsidered as a secure form of remote authentication for web applications. In this work, the accuracy of face recognition, captured online by a webcam in an Internet environment, is investigated, simulating the natural interaction of a person in the context of a distance course environment. Partial results show that this technique can be successfully applied to confirm the presence of users throughout attendance of a distance course. An efficient client/server architecture is also proposed. © 2009 Springer Berlin Heidelberg.
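The sketch below illustrates only the client-side capture step of such an architecture: periodically grab a webcam frame and check that a face is present before submitting it for recognition. It uses OpenCV's bundled Haar cascade for detection; the server-side matching is left as a stub, since the paper's recognition method is not reproduced here.

```python
# Periodic webcam capture with a face-presence check before upload.
import time
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def send_to_server(frame) -> None:
    """Stub: in the real system, upload the frame for server-side recognition."""
    pass

def capture_face(interval_s: float = 60.0) -> None:
    cam = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cam.read()
            if ok:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
                if len(faces) > 0:       # only submit frames that contain a face
                    send_to_server(frame)
            time.sleep(interval_s)
    finally:
        cam.release()
```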
Abstract:
This research project is situated at the intersection of education science, computer science and school practice, and thus has a strongly interdisciplinary character. From the perspective of education science, it is a research project in the fields of e-learning and multimedia learning, addressing the question of suitable information systems for creating and exchanging digital, multimedia, interactive learning modules. To this end, the methodological and didactic advantages of digital learning content over classical media such as book and paper were first compiled, and possible potentials in connection with new Web 2.0 technologies were identified. Building on this, existing authoring tools for producing digital learning modules and existing exchange platforms were analysed as to how far they already support and use Web 2.0 technologies. From the computer science perspective, the analysis of existing systems yielded a requirements profile for a new authoring tool and a new exchange platform for digital learning modules. Following the Design Science Research approach, the new system was realised in an iterative development process as the web application LearningApps.org and continuously evaluated with teachers from school practice. Current web technologies were used in the development. The result of the research project is a production information system that is already used by thousands of users in various countries, in schools as well as in industry. An empirical study confirmed that the goal pursued with the system development, namely to simplify the creation and exchange of digital learning modules, was achieved. From the perspective of school practice, LearningApps.org contributes to the variety of teaching methods and to the use of ICT in the classroom. The tool's orientation towards mobile devices and 1:1 computing corresponds to the general trend in education. By linking the tool with current software developments for producing digital textbooks, educational publishers are also addressed as a target group.
Abstract:
In his influential article about the evolution of the Web, Berners-Lee [1] envisions a Semantic Web in which humans and computers alike are capable of understanding and processing information. This vision is yet to materialize. The main obstacle is that in today's Web, meaning is most often rooted not in formal semantics but in natural language and, in the sense of semiology, emerges only upon interpretation and processing. Yet an automated form of interpretation and processing can be tackled by precisiating raw natural language. To do so, Web agents extract fuzzy grassroots ontologies through induction from existing Web content. Inductive fuzzy grassroots ontologies thus constitute organically evolved knowledge bases that resemble automated gradual thesauri, which allow precisiating natural language [2]. The Web agents' underlying dynamic, self-organizing, best-effort induction enables a sub-syntactical, bottom-up learning of semiotic associations. Thus, knowledge is induced from the users' natural use of language in mutual Web interactions and stored in a gradual, thesaurus-like lexical world-knowledge database as a top-level ontology, eventually allowing a form of computing with words [3]. Since, when computing with words, the objects of computation are words, phrases and propositions drawn from natural languages, it proves to be a practical notion for yielding emergent semantics for the Semantic Web. In the end, an improved understanding by computers should, on the one hand, upgrade human-computer interaction on the Web and, on the other hand, allow an initial version of human-intelligence amplification through the Web.
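To make the bottom-up induction idea concrete, here is a minimal sketch deriving graded (fuzzy) associations between terms from their co-occurrence in user text, yielding a tiny automated-thesaurus-like structure; the corpus and the normalisation are illustrative assumptions, not the Web agents' actual algorithm.

```python
# Induce fuzzy term associations from co-occurrence counts.
from collections import Counter
from itertools import combinations

docs = [
    "cheap hotel rooms near the beach",
    "beach hotel with cheap rooms",
    "budget hotel close to the beach",
]

pair_counts, term_counts = Counter(), Counter()
for doc in docs:
    terms = set(doc.split())
    term_counts.update(terms)
    pair_counts.update(frozenset(p) for p in combinations(sorted(terms), 2))

def association(a: str, b: str) -> float:
    """Fuzzy degree in [0, 1]: co-occurrences over the rarer term's count."""
    return pair_counts[frozenset((a, b))] / min(term_counts[a], term_counts[b])

print(association("hotel", "beach"))   # 1.0: the terms always co-occur
print(association("cheap", "budget"))  # 0.0: they never co-occur directly
```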
Abstract:
The creation of this enterprise web application arises from the need to optimize the time spent creating an e-mail marketing campaign. The main objective of this work is to automate the validation of the fields of a web form. A web form [6] is a digital document in which users enter personal data such as name, surname, address and identity document number. These data are subsequently processed, stored in a database, and then sent to the advertiser. The validation process refers to programming the web form on the client side, using web technologies such as JavaScript and HTML5, to check that the data entered by the user in the form are correct. Each field of a web form has a specific validation that depends on several factors, such as the country in which the campaign is launched and the field being validated. Depending on the type of validation, a JavaScript file is generated containing all the validations for that form; this file is integrated into the web form by calling the service. One aim of this work is that any user in the company can program a web form without prior programming knowledge, since the programming is carried out transparently to the user. This is a basic summary of the web application; however, a series of requirements and parameters must be taken into account to make it more efficient and customizable depending on the needs of the final product of each advertising campaign. All these aspects are explained in detail in the following sections. This work was carried out at the Media Response Group corporation, for the company Canalmail S.L, located in Alcobendas, supervised by the professional tutors Daniel Paz and Jorge Lázaro Molina and by the academic tutor Rafael Fernández Gallego of the Universidad Politécnica de Madrid.
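The following is a minimal sketch of generating a per-form JavaScript validation file from a declarative description of the fields; the field specifications and the country-specific rules shown are illustrative assumptions, not the company's actual rule set.

```python
# Generate a JavaScript validation file from (country, field) -> regex rules.
RULES = {  # assumed, illustrative rules
    ("ES", "postal_code"): r"/^\d{5}$/",
    ("ES", "phone"): r"/^[679]\d{8}$/",
}

JS_TEMPLATE = """\
function validate_{field}(value) {{
  return {regex}.test(value);
}}
"""

def generate_validation_js(country: str, fields: list[str]) -> str:
    parts = [JS_TEMPLATE.format(field=f, regex=RULES[(country, f)]) for f in fields]
    return "\n".join(parts)

with open("form_validation.js", "w", encoding="utf-8") as fh:
    fh.write(generate_validation_js("ES", ["postal_code", "phone"]))
```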
Abstract:
The Semantic Web relies on carefully structured, well defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contain some uncertainty, often due to incomplete knowledge; meaningful processing of these data requires the uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular, we adopt a Bayesian perspective and focus on the case where the inputs/outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics, such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
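As a sketch of the soft-typed, dictionary-linked encoding described above, the following Python snippet emits a marginal Gaussian distribution as XML with a definition URI; the element names and the URI are illustrative assumptions, not the published UncertML schema.

```python
# Encode a marginal Gaussian as soft-typed XML linked to a dictionary definition.
import xml.etree.ElementTree as ET

def gaussian_xml(mean: float, variance: float) -> str:
    dist = ET.Element(
        "Distribution",
        definition="http://dictionary.uncertml.org/distributions/gaussian",  # assumed URI
    )
    for name, value in (("mean", mean), ("variance", variance)):
        param = ET.SubElement(dist, "Parameter", name=name)
        param.text = str(value)
    return ET.tostring(dist, encoding="unicode")

# e.g. an interpolated value of 12.3 with prediction variance 0.8
print(gaussian_xml(12.3, 0.8))
```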
Abstract:
Master's degree in Electrical and Computer Engineering.