998 results for LINQ to SQL
Abstract:
The objective is to study the object-oriented features of the SQL:1999 standard and to put them to the test with a commercial product that supports them.
Abstract:
This work analyzes how the SQL:1999 standard has incorporated into its specification a series of features drawn from object-oriented technology. In this way, the standard seeks to provide database management systems with some of the functionality demanded of third-generation systems. Such systems must also incorporate other characteristics, which are collected in the article Third Generation Database System Manifesto.
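As a minimal sketch of the kind of object-relational feature the work refers to, the following SQL:1999-style statements define structured user-defined types and a typed table; the type and table names are illustrative only, and exact syntax varies between products:

CREATE TYPE address_t AS (
    street VARCHAR(40),
    city   VARCHAR(30)
) NOT FINAL;

CREATE TYPE person_t AS (
    name    VARCHAR(40),
    address address_t      -- an attribute whose type is itself a structured type
) NOT FINAL;

-- A typed table: each row is an instance of person_t.
CREATE TABLE person OF person_t;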
Abstract:
The purpose of this work is to analyze the new spatial features offered by SQL Server 2008 and then, as a practical case, to develop a Windows Mobile application in which a set of mobile devices can report their location and, through the Google Maps online service, view the location of the other devices on a map.
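As a hedged illustration of the spatial features in question, a query of the following kind could return the devices within 5 km of a reported position using the geography type introduced in SQL Server 2008; the table and column names (device_positions, location) are assumptions, not taken from the work:

DECLARE @here geography = geography::Point(41.3851, 2.1734, 4326);  -- latitude, longitude, SRID

SELECT device_id,
       location.STDistance(@here) AS meters_away
FROM   device_positions
WHERE  location.STDistance(@here) <= 5000;   -- devices within 5 km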
Abstract:
This work deals with the creation of a free software project, carried out from its inception to its transformation into a project backed by a community of users. Specifically, the purpose of the free software project is to create an application capable of guiding the user, in a graphical, fast and intuitive way, through the process of building SQL queries for the PostgreSQL database.
Abstract:
This thesis studies how to speed up the development of a data warehouse loading process in a Microsoft SQL Server 2008 environment. The theoretical parts of the work are intended to support both its research and its practical parts. Related studies were reviewed in order to identify the approaches that best reduce the time spent developing the loading process. Current research focuses on developing vendor-independent models and on generating a vendor-specific loading process from those models, but there is no general consensus on the best model. Based on the related research, an architecture is proposed that could, in the future, considerably reduce loading-process development time. A simplified version of this architecture, and an application built on it, were created to speed up the development of loading processes with Microsoft's ETL tool.
Abstract:
This thesis introduces heat demand forecasting models generated with data mining algorithms. The forecast spans one full day and can be used to regulate the heat consumption of buildings. To train the data mining models, two years of heat consumption data from a case building and weather measurement data from the Finnish Meteorological Institute are used. The thesis uses the Microsoft SQL Server Analysis Services data mining tools to generate the models and the CRISP-DM process framework to carry out the research. Results show that the built models can predict heat demand with mean average percentage errors of, at best, 3.8% for the 24-hour profile and 5.9% for the full day. A deployment model for integrating the generated data mining models into an existing building energy management system is also discussed.
Abstract:
Nowadays, the use of technology has been essential to the advancement of societies; it has allowed people without computing knowledge, so-called "non-expert" users, to take an interest in using it, which is why researchers have found it necessary to produce studies that allow systems to be adapted to the problems existing in the computing domain. A recurring need of every system user is information management, which can be handled through a database and a specific language such as SQL (Structured Query Language); but this forces users without such knowledge to turn to a specialist for its design and construction, which translates into costs and complex methods. This raises a question: what should be done when projects are small and resources and processes are limited? Building on the research carried out at the University of Washington [39], in which SQL statements are synthesized from input and output examples, this thesis aims to automate the process and apply a different learning technique, using an evolutionary approach in which an adapted genetic algorithm produces valid SQL statements that satisfy the conditions established by the input and output examples given by the user. The result of this approach is a tool called EvoSQL, which was validated in this study. Of the 28 exercises used in [39], 23 produced perfect results and 5 were unsuccessful, which represents 82.1% effectiveness. This effectiveness is 10.7% higher than that of the tool developed in [39], SQLSynthesizer, and 75% higher than the next closest tool, Query by Output (QBO) [31]. The average execution time per exercise was 3 minutes and 11 seconds, which is longer than that of SQLSynthesizer; however, a genetic algorithm involves phases that widen the time ranges, so the time obtained is acceptable for applications of this kind. In conclusion, an automatic tool based on an evolutionary approach was obtained, with good results and a simple process for the "non-expert" user.
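To make the input/output-example setting concrete, a hypothetical pair of examples and the kind of valid SQL statement the genetic algorithm is expected to evolve might look as follows; the table, column and constant names are illustrative and not taken from the study:

-- Input example: a table employees(name, dept, salary) with a handful of rows.
-- Output example: the subset of names the user expects to see.
-- A candidate individual that satisfies the examples on that instance:
SELECT name
FROM   employees
WHERE  dept = 'Sales' AND salary > 50000;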
On Implementing Joins, Aggregates and Universal Quantifier in Temporal Databases using SQL Standards
Abstract:
A feasible way of implementing a temporal database is to map the temporal data model onto the conventional data model supported by a commercial database management system. Although extensions to standard SQL have been proposed to support temporal databases, such proposals have not yet made it through the standardization process. This paper attempts to implement database operators such as aggregates and the universal quantifier for temporal databases built on top of relational database systems, using currently available SQL standards.
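As an illustration of this mapping approach, a valid-time join of two interval-stamped tables can be written in plain standard SQL; the table and column names below are hypothetical, with each row carrying a half-open period [vt_start, vt_end):

SELECT e.emp_id,
       e.dept,
       s.salary,
       CASE WHEN e.vt_start > s.vt_start THEN e.vt_start ELSE s.vt_start END AS vt_start,
       CASE WHEN e.vt_end   < s.vt_end   THEN e.vt_end   ELSE s.vt_end   END AS vt_end
FROM   emp_dept   e
JOIN   emp_salary s
  ON   e.emp_id = s.emp_id
 AND   e.vt_start < s.vt_end      -- the two valid-time periods overlap
 AND   s.vt_start < e.vt_end;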
Abstract:
This file is used to create the examples described in the lectures. It should be run with "sqlite3 < student-3.sql".
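A minimal sketch of what such a file might contain (assumed contents, not the actual student-3.sql), invoked exactly as described above:

-- Run as:  sqlite3 < student-3.sql
CREATE TABLE student (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    year  INTEGER
);

INSERT INTO student (name, year) VALUES ('Alice', 2);
INSERT INTO student (name, year) VALUES ('Bob', 3);

SELECT name FROM student WHERE year > 1;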
Abstract:
This research work is based on the description and analysis of the different existing routes to the internationalization of companies. The concept of internationalization, its advantages and implications, and the various models that apply to it are also examined in greater depth. Likewise, this study shows the strategic differences that may exist between sectors, bearing in mind that Formesan S.A.S. belongs to the construction sector and SQL Software S.A. to the software and information and communications technology sector. From the comparison and analysis of the internationalization processes of these companies, it becomes evident that there is no single route by which they decide to internationalize; rather, in many cases, they choose to do so based on the search for opportunities and on the entrepreneurs' motivation to make it happen. Internationalization is thus not a process tied to a theoretical model, but is instead associated with an experience shaped by the conditions and decisions that the entrepreneur takes when seeking to turn the business into a global company.
Abstract:
Background: The health care literature supports the development of accessible interventions that integrate behavioral economics, wearable devices, principles of evidence-based behavior change, and community support. However, there are limited real-world examples of large-scale, population-based, member-driven reward platforms; subsequently, a paucity of outcome data exists and health economic effects remain largely theoretical. To complicate matters, an emerging area of research is defining the role of superusers, the small percentage of unusually engaged digital health participants who may influence other members. Objective: The objective of this preliminary study is to analyze descriptive data from GOODcoins, a self-guided, free-to-consumer engagement and rewards platform incentivizing walking, running and cycling. Registered members accessed the GOODcoins platform through PCs, tablets or mobile devices, and had the opportunity to sync wearables to track activity. Following registration, members were encouraged to join gamified group challenges and compare their progress with that of others. As members met challenge targets, they were rewarded with GOODcoins, which could be redeemed for planet- or people-friendly products. Methods: Outcome data were obtained from the GOODcoins custom SQL database. The reporting period was December 1, 2014 to May 1, 2015. Descriptive self-report data were analyzed using MySQL and MS Excel. Results: The study period includes data from 1298 users who were connected to an exercise tracking device. Females made up 52.6% (n=683) of the study population, 33.7% (n=438) were between the ages of 20 and 29, and 24.8% (n=322) were between the ages of 30 and 39. Of connected and active members, 77.5% (n=1006) met the daily recommended physical activity guideline of 30 minutes, with a total daily average activity of 107 minutes (95% CI 90, 124). Of all connected and active users, 96.1% (n=1248) listed walking as their primary activity. For members who exchanged GOODcoins, the mean balance was 4,000 (95% CI 3850, 4150) at the time of redemption, and 50.4% (n=61) of exchanges were for fitness or outdoor products, while 4.1% (n=5) were for food-related items. Participants were most likely to complete challenges when rewards were between 201 and 300 GOODcoins. Conclusions: The purpose of this study is to form a baseline for future research. Overall, results indicate that challenges and incentives may be effective for connected and active members and may play a role in achieving daily recommended activity guidelines. Registrants were typically younger, walking was the primary activity, and rewards were mainly exchanged for fitness or outdoor products. It remains to be determined whether members were already physically active at the time of registration and are representative of healthy adherers, or were previously inactive and were incentivized to change their behavior. As challenges are gamified, there is an opportunity to investigate the role of superusers and healthy adherers, impacts on behavioral norms, and how cooperative games and incentives can be leveraged across stratified populations. Study limitations and future research agendas are discussed.
Abstract:
This work presents a new approach to the automatic assessment of SQL queries. The approach proposes a solution to the challenge of encouraging learners to improve their solutions: seeking not only an answer that returns the correct result, but also a query whose complexity is close to that of the optimal solution. The proposal can be used in distance learning environments or in face-to-face teaching for laboratory activities, including assessments. The proposed solution has the following advantages: (1) the learner receives instant feedback during the practical programming activity, which allows the learner to refactor the solution towards an optimal one; (2) full integration of the teaching of programming concepts with examples of program fragments executable online; (3) monitoring of the learner's activities (how many examples were executed; how many execution attempts were made in each exercise; and so on). This work is a first step towards building a fully assisted environment (for example, with automatic assessment) for teaching the SQL programming language, in which the teacher is freed from the arduous work of correcting SQL statements and can carry out more relevant pedagogical tasks. The method, grounded in statistics and software engineering metrics, can be adapted to other languages such as Java and Pascal. In addition, LabSQL serves as a laboratory for experimenting with two new techniques, one for assessment and one for monitoring, which are being investigated in parallel work: (a) automatic assessment of discursive conceptual questions, in addition to the traditional objective questions; and (b) a monitoring method based on the construction of an assessment rubric.
Abstract:
Technical evaluation of analytical data is extremely relevant, since the data can be used for comparison with environmental quality standards and for decision-making related to the management of dredged sediment disposal and to the evaluation of salt and brackish water quality in accordance with CONAMA Resolution 357/05. It is therefore essential that the project manager discuss the environmental agency's technical requirements with the contracted laboratory, both to follow up the analyses under way and with a view to possible re-analysis when anomalous data are identified. The main technical requirements are: (1) method quantitation limits (QLs) should fall below environmental standards; (2) analyses should be carried out in laboratories whose analytical scope is accredited by the National Institute of Metrology (INMETRO) or qualified or accepted by a licensing agency; (3) a chain of custody should be provided in order to ensure sample traceability; (4) control charts should be provided to prove method performance; (5) certified reference material analysis or, if that is not available, matrix spike analysis should be undertaken; and (6) chromatograms should be included in the analytical report. Within this context, and with a view to helping environmental managers evaluate analytical reports, this work aims to discuss the limitations of applying SW 846 US EPA methods to marine samples and the consequences of reporting data based on method detection limits (MDL) rather than sample quantitation limits (SQL), and to present possible modifications to the principal method applied by laboratories in order to comply with environmental quality standards.
Abstract:
Current development platforms for designing spoken dialog services feature different kinds of strategies to help designers build, test, and deploy their applications. In general, these platforms are made up of several assistants that handle the different design stages (e.g., definition of the dialog flow, prompt and grammar definition, database connection, or debugging and testing the running application). In spite of all the advances in this area, designing spoken dialog services remains a time-consuming task that needs to be accelerated. In this paper we describe a complete development platform that reduces the design time by using different types of acceleration strategies based on information from the data model structure and database contents, as well as cumulative information obtained throughout the successive steps of the design. Thanks to these accelerations, interaction with the platform is simplified and the design is reduced, in most cases, to simple confirmations of the "proposals" that the platform automatically provides at each stage. Different kinds of proposals are available to complete the application flow, such as the possibility of selecting which information slots should be requested from the user together, predefined templates for common dialogs, the most probable actions that make up each state defined in the flow, and different solutions to specific speech-modality problems such as the presentation of lists of retrieved results after querying the backend database. The platform also includes accelerations for creating speech grammars and prompts, and the SQL queries for accessing the database at runtime. Finally, we describe the setup and results of a simultaneous summative, subjective and objective evaluation carried out with different designers to test the usability of the proposed accelerations as well as their contribution to reducing design time and interaction.
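The runtime SQL queries mentioned above might, for example, take the shape of a parameterized lookup whose predicate values are filled in from the dialog slots; this is only a sketch, and the table and column names are hypothetical rather than taken from the platform:

-- Slot values collected from the caller become bind parameters at runtime.
SELECT flight_number, departure_time, price
FROM   flights
WHERE  origin         = ?    -- slot: departure city
  AND  destination    = ?    -- slot: arrival city
  AND  departure_date = ?;   -- slot: travel date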
Abstract:
The complex event processing (CEP) paradigm has become the solution for high-volume, real-time data analytics that demand scalability, high throughput, and low latency; example applications include financial processing and road traffic monitoring. A distributed system is used to achieve these performance requirements, and those same requirements force the system not to store the events but to process them on the fly as they are received. Such distributed systems are complex and take a considerable time to learn and use, and most of them lack a declarative query language in which to express the computation to be performed over incoming events. In this work, a new SQL-like declarative language and a compiler have been developed; the compiler translates this language into the native language of the distributed event processing system. Owing to its similarity to SQL, with which a large number of developers are already familiar, learning the language requires little effort. Its use thus reduces execution failures of the query deployed on the distributed system, while abstracting the programmer from the details of that system.
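Purely as an illustration of what such an SQL-like continuous query could look like, the example below follows CQL-style window conventions and is not the actual language developed in the work; the stream and column names are assumptions:

-- Average price per stock symbol over a 5-minute sliding window, emitted every minute.
SELECT symbol,
       AVG(price) AS avg_price
FROM   stock_ticks [RANGE 5 MINUTES SLIDE 1 MINUTE]
GROUP  BY symbol
HAVING AVG(price) > 100;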