789 results for Android, Java, ASP.NET, ASP.NET Web API 2, ASP.NET Identity 2.1, JWT
Abstract:
The central topic of the thesis is sports centres: the application allows the user to search for a sports centre by name, city, or province. It also lets the user view the availability of each field offered by the facilities and, if desired, make a booking. The sports centre makes available information that would otherwise be hard to find, such as opening hours, phone number, address, etc. The project consists of a front-end and a back-end part. The front end is a native Android application (developed in Java). The back end is an application based on ASP.NET Web API 2, with an Entity Framework Code First database. User management relies on the ASP.NET Identity 2.1 framework.
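The availability check at the heart of such a booking system (is a given field free for a requested time slot?) can be sketched as follows; the data model and field names here are hypothetical illustrations, not taken from the thesis:

```python
from datetime import datetime

# Hypothetical data model: each booking reserves one field for a time range.
bookings = [
    {"field": "tennis-1", "start": datetime(2024, 5, 1, 10), "end": datetime(2024, 5, 1, 11)},
    {"field": "tennis-1", "start": datetime(2024, 5, 1, 14), "end": datetime(2024, 5, 1, 15)},
]

def is_available(field, start, end, bookings):
    """A field is free if the requested slot overlaps no existing booking on it."""
    return all(
        b["field"] != field or end <= b["start"] or start >= b["end"]
        for b in bookings
    )

print(is_available("tennis-1", datetime(2024, 5, 1, 11), datetime(2024, 5, 1, 12), bookings))   # → True
print(is_available("tennis-1", datetime(2024, 5, 1, 10, 30), datetime(2024, 5, 1, 11, 30), bookings))  # → False
```

In a real Web API 2 back end this check would run server-side, against bookings persisted by Entity Framework, before a reservation is accepted.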
Abstract:
This article presents a series of reflections on the comparisons that can be drawn between two software platforms: Java and .NET. To this end, it gives a brief historical account of both, and then presents some of the differences the author has found between them, examining aspects directly related to object-oriented programming or to other features of the language. Finally, it offers a brief clarification, from the author's point of view, on the issue of portability, which both platforms claim as the most relevant difference between them.
Abstract:
Abstract taken from the publication. A table is shown with the differences between Web 1.0 and Web 2.0; logos of web tools appear.
Abstract:
The thesis addresses web APIs implemented according to the constraints of the REST architectural style and offers a concrete example, presenting the design of the APIs of a time-and-attendance (clock-in/clock-out) system developed in a corporate setting.
Abstract:
Web APIs have gained increasing popularity in recent Web service technology development owing to the simplicity of their technology stack and the proliferation of mashups. However, efficiently discovering Web APIs and the relevant documentation on the Web is still a challenging task even with the best resources available on the Web. In this paper we cast the problem of detecting Web API documentation as a text classification problem: classifying a given Web page as Web API associated or not. We propose a supervised generative topic model called feature latent Dirichlet allocation (feaLDA), which offers a generic probabilistic framework for automatic detection of Web APIs. feaLDA not only captures the correspondence between data and the associated class labels, but also provides a mechanism for incorporating side information, such as labelled features automatically learned from the data, that can effectively help improve classification performance. Extensive experiments on our Web API documentation dataset show that the feaLDA model outperforms three strong supervised baselines, namely naive Bayes, support vector machines, and the maximum entropy model, by over 3% in classification accuracy. In addition, feaLDA also gives superior performance when compared against other existing supervised topic models.
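For context, the simplest of the baselines the paper compares against, multinomial naive Bayes, can be sketched in a few lines; the sample documents and labels below are made up for illustration and have nothing to do with the paper's dataset:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Minimal multinomial naive Bayes text classifier with add-one smoothing."""

    def fit(self, documents, labels):
        self.class_counts = Counter(labels)          # documents per class
        self.word_counts = defaultdict(Counter)      # word frequencies per class
        self.vocabulary = set()
        for text, label in zip(documents, labels):
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocabulary.add(word)
        self.total_docs = len(labels)
        return self

    def predict(self, text):
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.class_counts[label] / self.total_docs)
            total = sum(self.word_counts[label].values())
            for word in text.lower().split():
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (total + len(self.vocabulary)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Made-up training data: API-documentation-like pages vs. other pages.
docs = [
    "GET POST endpoint JSON response authentication",
    "REST API key request parameters documentation",
    "my holiday photos from the beach last summer",
    "recipe for chocolate cake with vanilla icing",
]
labels = ["api", "api", "other", "other"]
clf = NaiveBayesTextClassifier().fit(docs, labels)
print(clf.predict("API endpoint returns JSON"))  # → api
```

feaLDA itself is a topic model and considerably more involved; this sketch only illustrates the kind of page-level classification being benchmarked.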
Abstract:
The semantic web represents a current research effort to increase the capability of machines to make sense of content on the web. In this class, Peter Scheir will give a guest lecture on the basic principles underlying the semantic web vision, including RDF, OWL and other standards.
Abstract:
Web Science(2) Participant Information Sheet
Abstract:
People who like to buy video games and books eventually find that they have accumulated many of these items and no longer use them. If they do not intend to build a collection of them, they will probably get rid of them, for example by throwing them away. In this context, we present a mobile application for devices running the Android operating system, called XpressTrades. The application aims to solve the problem described above by making it easier to trade games and books, helping its users reuse their items and employ them as a medium of exchange. Alongside the application, a Web API was developed using the ASP.NET framework, which the application relies on in order to work. Although this master's project focused on developing an application specifically for trading games and books, the application was designed and developed in a modular way and is prepared to be extended to the trading of any type of item. XpressTrades brings together several features that make item trades faster and more efficient, among them: presenting the list of owners sorted by distance from the user, and presenting a list of recommended items based on the user's item-viewing history, i.e., on their interests. Regarding the methodology used to develop this project, since the idea came from the author of this work, surveys were first carried out to determine whether people were actually interested in the project, and the existence of similar applications was also investigated. Next, brainstorming was used to generate ideas, and low-fidelity prototypes were created to test the user interface.
During the implementation phase, the following cycle was followed for each feature: prototyping, user testing, and correction of the errors detected in testing.
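One of the features listed, sorting item owners by distance from the user, is commonly implemented with the haversine great-circle distance; a minimal sketch, with hypothetical owners and coordinates not taken from the project:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical item owners and their locations; the user is in Lisbon.
user = (38.7223, -9.1393)
owners = [
    {"name": "Ana", "lat": 41.1579, "lon": -8.6291},  # Porto, ~270 km away
    {"name": "Rui", "lat": 38.7369, "lon": -9.1427},  # near Lisbon
]
owners.sort(key=lambda o: haversine_km(user[0], user[1], o["lat"], o["lon"]))
print([o["name"] for o in owners])  # → ['Rui', 'Ana']
```

In the actual system this computation would presumably live in the ASP.NET Web API, so the server can return the owner list already ordered.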
Abstract:
We compared daily net radiation (Rn) estimates from 19 methods with the ASCE-EWRI Rn estimates in two climates, Clay Center, Nebraska (sub-humid) and Davis, California (semi-arid), over the calendar year. The performance of all 20 methods, including the ASCE-EWRI Rn method, was then evaluated against Rn data measured over a non-stressed maize canopy during two growing seasons (2005 and 2006) at Clay Center. The methods differ in their inputs, structure, and equation intricacy. Most differ in how they estimate the cloudiness factor and emissivity (ε) and in how they calculate net longwave radiation (Rnl). All methods use an albedo (α) of 0.23 for a reference grass/alfalfa surface. When comparing the performance of all 20 Rn methods with measured Rn, we hypothesized that the α values for grass/alfalfa and a non-stressed maize canopy were similar enough to cause only minor differences in Rn and in grass- and alfalfa-reference evapotranspiration (ETo and ETr) estimates. The measured seasonal average α for the maize canopy was 0.19 in both years. Using α = 0.19 instead of α = 0.23 resulted in a 6% overestimation of Rn. For ETo and ETr estimation, this 6% difference in Rn translated to only 4% and 3% differences in ETo and ETr, respectively, supporting the validity of our hypothesis. Most methods correlated well with the ASCE-EWRI Rn (r² > 0.95). The root mean square difference (RMSD) was less than 2 MJ m⁻² d⁻¹ between 12 methods and the ASCE-EWRI Rn at Clay Center, and between 14 methods and the ASCE-EWRI Rn at Davis. The performance of some methods varied between the two climates. In general, r² values were higher for the semi-arid climate than for the sub-humid one. Methods that use a dynamic ε as a function of mean air temperature performed better in both climates than those that calculate ε from actual vapor pressure.
The ASCE-EWRI-estimated Rn values showed one of the best agreements with the measured Rn (r² = 0.93, RMSD = 1.44 MJ m⁻² d⁻¹), and the estimates were within 7% of the measured Rn. The Rn estimates from six methods, including the ASCE-EWRI, were not significantly different from measured Rn. Most methods underestimated measured Rn by 6% to 23%. Some of the differences between measured and estimated Rn were attributed to poor estimation of Rnl. We conducted sensitivity analyses to evaluate the effect of Rnl on Rn, ETo, and ETr. The effect of Rnl on Rn was linear and strong, but its effect on ETo and ETr was secondary. The results suggest that Rn data measured over green vegetation (e.g., an irrigated maize canopy) can serve as an alternative Rn data source for ET estimation when measured Rn data over the reference surface are not available. In the absence of measured Rn, another alternative is to use one of the Rn models we analyzed when not all the input variables needed to solve the ASCE-EWRI Rn equation are available. Our results provide practical information on which method to select, based on data availability, for reliable estimates of daily Rn in climates similar to those of Clay Center and Davis.
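The agreement statistics used throughout the study (r² and RMSD between estimated and measured Rn) can be computed as follows. The sample values are invented for illustration, and r² is computed here as the coefficient of determination; the study may instead use the squared correlation coefficient, which differs when the estimates are biased:

```python
import math

def rmsd(estimated, measured):
    """Root mean square difference between two aligned series."""
    return math.sqrt(sum((e - m) ** 2 for e, m in zip(estimated, measured)) / len(measured))

def r_squared(estimated, measured):
    """Coefficient of determination of estimated vs. measured values."""
    mean_m = sum(measured) / len(measured)
    ss_res = sum((m - e) ** 2 for e, m in zip(estimated, measured))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return 1 - ss_res / ss_tot

# Made-up daily Rn values (MJ m-2 d-1), for illustration only.
measured = [10.2, 12.5, 14.1, 9.8, 11.3]
estimated = [9.9, 12.9, 13.6, 10.1, 11.0]
print(round(rmsd(estimated, measured), 2))       # → 0.37
print(round(r_squared(estimated, measured), 3))  # → 0.945
```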
Abstract:
This research seeks to analyse the urban location factors that become important components in establishing technology parks in Colombia as a strategy for competitiveness and territorial development. In particular, it focuses on the configuration of the Ruta N Innovation Centre in the city of Medellín, within the framework of the city's competitiveness processes during the period 2007-2011.
Abstract:
In today's business world, efficient communication and information exchange between companies is a prerequisite for doing business. The business world is characterized by change: companies are acquired, companies merge, companies collaborate on projects. The need to integrate one another's information systems keeps pace with these changes. A major problem with system integration is the diversity among information systems with respect to technical platform and programming language. Web services offer methods for easily integrating different information systems with one another. The report describes how web services are implemented and which technical components are involved, as well as the advantages that web service technology provides. The assignment from Sogeti, Borlänge, was to design and implement a prototype in which client applications in Java and VB.NET are integrated with each other through web services written in the respective programming languages. UML was used for analysis and design. The report concludes that Java and VB.NET can communicate with each other through web service technology. However, the integration between the two programming languages is not uncomplicated. This leads to the conclusion that web service technology must be standardized in order to gain real traction as a technology for system integration across programming languages.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
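As a concrete illustration of (i), the errors of several lower-level tools can be reduced by combining their outputs for a common level, for instance by a simple majority vote over aligned tag sequences; the taggers and tags below are hypothetical:

```python
from collections import Counter

def combine_annotations(tag_sequences):
    """Majority-vote combination of POS tag sequences from several taggers.

    tag_sequences: one tag list per tagger, all aligned to the same tokens.
    Returns the most frequent tag at each token position (ties go to the
    tag that appears first in the input order).
    """
    combined = []
    for position_tags in zip(*tag_sequences):
        tag, _count = Counter(position_tags).most_common(1)[0]
        combined.append(tag)
    return combined

# Hypothetical outputs of three taggers for the tokens "Time flies fast":
tagger_a = ["NOUN", "VERB", "ADV"]
tagger_b = ["NOUN", "NOUN", "ADV"]
tagger_c = ["NOUN", "VERB", "ADJ"]

print(combine_annotations([tagger_a, tagger_b, tagger_c]))
# → ['NOUN', 'VERB', 'ADV']
```

Note that this presupposes exactly the interoperation the text argues for: the taggers must share a token alignment and a common tagset, which is what the unification of annotation schemas in (ii) would provide.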
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have so far been successfully applied to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and from Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and, when possible, even solve) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
[ES] One of the most interesting applications of new technologies in teaching is the use of virtual platforms that students can access over the Internet. eKASI is a computing platform for supporting classroom teaching, developed at the University of the Basque Country, which enables the management of the documents and the students of a course while also facilitating student learning. It is a freely distributed, easy-to-use tool. During the 2005/2006 academic year, this platform was used to support the teaching of the course Pharmaceutical Technology I in the 4th year of the Degree in Pharmacy at the University of the Basque Country. The platform is accessible at http://ekasi.ehu.es by entering a password provided by the system administrator. In the virtual classroom section of the platform, students have at their disposal the information and documents related to the course (teaching plan, slides used in class, self-assessment questionnaires, links to web pages of interest, multimedia materials, etc.). The platform also enables online collaboration and discussion of the studied materials, through the forum and e-mail, and allows the instructor to tutor students and monitor their progress by means of tests and the various tasks assigned to the group. As the results of the survey given to students at the end of the academic year show, the eKASI platform has proved a highly useful instrument for supporting classroom teaching.
Abstract:
[ES] The Final Degree Project (TFG) is a 12-credit course consisting of a formative process through which each student demonstrates the acquisition of the competences required by the degree. The TFG involves "the completion by each student, individually, of an original project, report or study under the supervision of one or more supervisors, which integrates and develops the educational content received and the capacities, competences and skills acquired during the teaching period of the degree" (art. 2, sect. 2.1 of decree 2223/2011 of the BOPV). The TFG is carried out in the final phase of the curriculum and is directed by a lecturer belonging to one of the departments involved in the degree. It is a training, innovation or research project, which may take various formats, and it must be defended before a panel. Like the other subjects, the TFG has a public course guide common to all teaching staff, an academic calendar, a public timetable, and seminars and tutorials. The TFG is assessed through continuous assessment of the process, a final assessment grading the written work, and an assessment of the public defence of that work.
Abstract:
Operations Research is a one-semester course devoted mainly to introducing the most elementary deterministic models within operations research. In recent years it has been taught in the third year of the Degree in Business Administration and Management (L.A.D.E.) at the Faculty of Economics and Business of the UPV/EHU. This publication collects the solved problems set in the exams of the various sittings between 2005 and 2010. The official syllabus of the course, broken down by topic, is as follows: 1. Integer linear programming: 1.1 Formulating integer linear programming problems. 1.2 The branch-and-bound method. 1.3 Other solution methods. 2. Multi-objective and goal programming: 2.1 Introduction to multi-objective programming. 2.2 Goal programming. 2.3 Preemptive (priority) goal programming. 3. Network models: 3.1 Basic concepts. 3.2 The minimum spanning tree problem. 3.3 The shortest path problem. 3.4 The longest path problem. 3.5 The maximum flow problem. 3.6 The assignment problem. 3.7 Project planning: the CPM and PERT methods.
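As an illustration of one of the network models listed (3.3, the shortest path problem), Dijkstra's algorithm computes shortest-path distances in a graph with non-negative edge weights; the example graph below is made up:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph maps node -> {neighbour: weight},
    with all weights non-negative."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already relaxed via a shorter path
        for neighbour, weight in graph.get(node, {}).items():
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

graph = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}
print(dijkstra(graph, "A"))  # → {'A': 0, 'B': 3, 'C': 2, 'D': 8}
```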