Abstract:
This dissertation established a software-hardware integrated design for a multisite data repository in pediatric epilepsy. A total of 16 institutions formed a consortium for this web-based application. This fully operational web application allows users to upload and retrieve information through a single graphical human-computer interface that is remotely accessible to all users of the consortium. A solution based on a Linux platform with MySQL and PHP scripts was selected. Research was conducted to evaluate mechanisms for electronically transferring diverse datasets from different hospitals and for collecting the clinical data in concert with the related functional magnetic resonance imaging (fMRI). What is unique in the approach is that all pertinent clinical information about patients is synthesized, with input from clinical experts, into four data-entry forms: Clinical, fMRI scoring, Image information, and Neuropsychological. A first contribution of this dissertation was the proposal of an integrated processing platform, independent of site and scanner, to uniformly process the varied fMRI datasets and generate comparative brain activation patterns. The data collection from the consortium complied with IRB requirements and provides the safeguards needed to meet security and confidentiality requirements. An fMRI-based software library was used to perform data processing and statistical analysis to obtain the brain activation maps. The Lateralization Index (LI) of healthy control (HC) subjects was evaluated in contrast to that of localization-related epilepsy (LRE) subjects.
Over 110 activation maps were generated, and their respective LIs were computed, yielding the following groups: (a) strong right lateralization (HC=0%, LRE=18%); (b) right lateralization (HC=2%, LRE=10%); (c) bilateral (HC=20%, LRE=15%); (d) left lateralization (HC=42%, LRE=26%); (e) strong left lateralization (HC=36%, LRE=31%). Moreover, nonlinear multidimensional decision functions were used to seek an optimal separation between typical and atypical brain activations on the basis of demographics as well as the extent and intensity of the activations. The intent was not to seek the highest output measures, given the inherent overlap of the data, but rather to assess which of the many dimensions were critical in the overall assessment of typical and atypical language activations, with the freedom to select any number of dimensions and impose any degree of complexity in the nonlinearity of the decision space.
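The five lateralization groups above follow from thresholding an LI value. As a minimal illustrative sketch in Python, assuming the conventional definition LI = (L − R) / (L + R) over left- and right-hemisphere activation counts, and assuming hypothetical cutoffs (±0.2 for lateralized, ±0.5 for strongly lateralized) that are common in the fMRI literature but are not specified in the abstract:

```python
def lateralization_index(left_voxels: int, right_voxels: int) -> float:
    """Conventional LI: +1 means fully left-lateralized, -1 fully right."""
    total = left_voxels + right_voxels
    if total == 0:
        raise ValueError("no activated voxels in either hemisphere")
    return (left_voxels - right_voxels) / total

def classify_li(li: float) -> str:
    """Map an LI value onto the five groups used above.

    The cutoffs (0.2 and 0.5) are illustrative assumptions, not values
    taken from the dissertation.
    """
    if li >= 0.5:
        return "strong left lateralization"
    if li >= 0.2:
        return "left lateralization"
    if li > -0.2:
        return "bilateral"
    if li > -0.5:
        return "right lateralization"
    return "strong right lateralization"
```

For example, a map with 780 left-hemisphere and 220 right-hemisphere activated voxels gives LI = 0.56 and would fall in the "strong left lateralization" group under these assumed cutoffs.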
Abstract:
In his discussion “Database As A Tool For Hospitality Management”, William O’Brien, Assistant Professor, School of Hospitality Management at Florida International University, offers at the outset: “Database systems offer sweeping possibilities for better management of information in the hospitality industry. The author discusses what such systems are capable of accomplishing.” The author opens with a bit of background on database system development, which also lends an impression as to the complexion of the rest of the article; it’s a shade technical. “In early 1981, Ashton-Tate introduced dBase II. It was the first microcomputer database management processor to offer relational capabilities and a user-friendly query system combined with a fast, convenient report writer,” O’Brien informs. “When 16-bit microcomputers such as the IBM PC series were introduced late the following year, more powerful database products followed: dBase III, Friday!, and Framework. The effect on the entire business community, and the hospitality industry in particular, has been remarkable,” he further offers with his informed outlook. Professor O’Brien presents a few anecdotal situations to illustrate how much a comprehensive database system means to a hospitality operation, especially when billing is involved. Although attitudes about computer systems, as well as the systems themselves, have changed since this article was written, there is pertinent, fundamental information to be gleaned.
Regarding the decline of the personal touch when a customer is engaged with a computer system, O’Brien says, “A modern data processing system should not force an employee to treat valued customers as numbers…” He also cautions, “Any computer system that decreases the availability of the personal touch is simply unacceptable.” On a system’s ability to process information, O’Brien suggests that in the past businesses were so enamored with simply having an automated system that they failed to take full advantage of its capabilities. O’Brien says that a lot of savings, in time and money, went unnoticed and/or under-appreciated. Today, everyone has an integrated system, and the wise business manager is the one who takes full advantage of all his resources. O’Brien invokes the 80/20 rule and offers, “…the last 20 percent of results costs 80 percent of the effort. But times have changed. Everyone is automating data management, so that last 20 percent that could be ignored a short time ago represents a significant competitive differential.” The evolution of data systems takes center stage for much of the article; pitfalls also emerge.
Abstract:
The improvement in living standards and the development of telecommunications have led to a large increase in the number of Internet users in China. The China National Network Information Center has reported that the number of Internet users in China reached 33.7 million in 2001, ranking the country third in the world. This figure also shows that more and more Chinese residents have accepted the Internet and use it to obtain information and complete their travel planning. Milne and Ateljevic stated that the integration of computing and telecommunications would create a global information network based mostly on the Internet. The Internet, especially the World Wide Web, has had a great impact on the hospitality and tourism industry in recent years. The WWW plays an important role in mediating between customers and hotel companies as a place to acquire information and transact business.
Abstract:
Over the past five years, XML has been embraced by both the research and industrial communities due to its promising prospects as a new data representation and exchange format on the Internet. The widespread popularity of XML creates an increasing need to store XML data in persistent storage systems and to enable sophisticated XML queries over the data. The currently available approaches to the XML storage and retrieval issue are limited either by immaturity (e.g., native approaches) or by inflexibility, heavy fragmentation and excessive join operations (e.g., non-native approaches such as the relational database approach). In this dissertation, I studied the issue of storing and retrieving XML data using the Semantic Binary Object-Oriented Database System (Sem-ODB), leveraging the advanced Sem-ODB technology with the emerging XML data model. First, a meta-schema based approach was implemented to address the data model mismatch issue that is inherent in the non-native approaches. The meta-schema based approach captures the meta-data of both Document Type Definitions (DTDs) and Sem-ODB Semantic Schemas, thus enabling a dynamic and flexible mapping scheme. Second, a formal framework was presented to ensure precise and concise mappings. In this framework, both the schemas and the conversions between them are formally defined and described. Third, after the major features of an XML query language, XQuery, were analyzed, a high-level XQuery to Semantic SQL (Sem-SQL) query translation scheme was described. This translation scheme takes advantage of the navigation-oriented query paradigm of Sem-SQL, thus avoiding the excessive join problem of relational approaches. Finally, the modeling capability of the Semantic Binary Object-Oriented Data Model (Sem-ODM) was explored from the perspective of conceptually modeling an XML Schema using a Semantic Schema.
It was revealed that the advanced features of the Sem-ODB, such as multi-valued attributes, surrogates and the navigation-oriented query paradigm, among others, are indeed beneficial in coping with the XML storage and retrieval issue using a non-XML approach. Furthermore, extensions to the Sem-ODB to make it work more effectively with XML data were also proposed.
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining & knowledge discovery and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to the ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, of identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i.) providing a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii.) a methodology for semantic heterogeneity resolution by investigating the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete and unambiguous; (iii.) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv.) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v.) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process; this knowledge base acts as the interface between the integration and query processing modules; (vi.) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii.) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
Abstract:
A nuclear waste stream is the complete flow of waste material from origin to treatment facility to final disposal. The objective of this study was to design and develop a Geographic Information Systems (GIS) module using the Google Application Programming Interface (API) for better visualization of nuclear waste streams, one that identifies and displays various nuclear waste stream parameters. A proper display of parameters would enable managers at Department of Energy waste sites to visualize information for proper planning of waste transport. The study also developed an algorithm using quadratic Bézier curves to make the map more understandable and usable. Microsoft Visual Studio 2012 and Microsoft SQL Server 2012 were used for the implementation of the project. The study has shown that the combination of several technologies can successfully provide dynamic mapping functionality. Future work should explore various Google Maps API functionalities to further enhance the visualization of nuclear waste streams.
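A quadratic Bézier curve B(t) = (1−t)²P0 + 2(1−t)tP1 + t²P2 bends a straight origin-to-destination segment through a single control point, which is what lets overlapping waste-stream routes be drawn as distinguishable arcs. As a minimal sketch (the study's exact parameterization and coordinate handling are not given in the abstract; the perpendicular-offset control point below is a common convention for arcing map links, assumed here for illustration):

```python
def quadratic_bezier(p0, p1, p2, steps=32):
    """Sample points along B(t) = (1-t)^2*p0 + 2(1-t)*t*p1 + t^2*p2."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        pts.append((x, y))
    return pts

def arc_control_point(p0, p2, bend=0.2):
    """Place the control point off the segment midpoint, perpendicular to
    p0->p2, so the rendered route arcs instead of overlapping a straight
    line (an assumed heuristic, not the dissertation's algorithm)."""
    mx, my = (p0[0] + p2[0]) / 2, (p0[1] + p2[1]) / 2
    dx, dy = p2[0] - p0[0], p2[1] - p0[1]
    return (mx - dy * bend, my + dx * bend)
```

The sampled points could then be passed to a map polyline (e.g., a Google Maps `Polyline` path) to draw the curved route between two sites.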
Abstract:
This work seeks to understand how trans men build their identities and live the transsexual experience in the relationships they establish daily with the category "man". It was observed that, to this end, they engender a specific gender transition in the midst of male transsexuality. Despite being subject to a complex amalgam of relations of exploitation and disciplinary domination, ways of being a man are negotiated for a living and for entry into spaces from which they are expelled for not conforming to the bodies that gender norms require. Gender transition is understood as a process that is at once one of organic and prosthetic body management and of the assumption of one's own identity. Thus, they build a politics of identity that creatively fixes the category of person as a rights holder. The "transition" is therefore a passage from nonexistence to a place of humanity. This dissertation describes how this process takes place in the experiences of the interviewees, observing the practices that bring out the masculine in relation to class positions, the labor market, access to health care, hormone therapy and identity itself. Thereby, theories that fix them as expressions of female masculinities, or as marginal to hegemony, do not find exactitude in their lives. Methodologically, the research began with "multi-sited ethnographies" that made possible in-depth interviews with 15 participants from the Northeast, Midwest, Southeast and South of Brazil. Between 2014 and 2015, from the application of the network technique to the first dialogues of the research, it was possible to build participant observation of the trans men's everyday life. With this I was able to observe their private activities, as well as their public agency within a trans activist collective in the Northeast, and to follow the actions in which they were involved during the XII Encontro Nacional em Universidades de Diversidade Sexual e de Gênero (ENUDSG) held in Mossoró/RN.
Therefore, the thesis undertakes to describe and understand the different ways of constructing trans male gender transitions in access to transsexuality, and thus a way of explaining their own trajectories as people who exist as such, even amid narratives marked by emotions linked to "not living", to suffering and to dehumanization.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
The objectives of the present study were: (i) to analyze the evolution of anthropometric characteristics and physical capacities over a sports season (comparing under-15, under-17 and under-19 players) and (ii) to develop a software platform to support monitoring and evaluation. In the first study, the sample comprised a total of 50 elite Portuguese players: under-15 (age 14.0 ± 0.1 years; n=16), under-17 (age 15.6 ± 0.5 years; n=14) and under-19 (age 17.2 ± 0.7 years; n=20), assessed at three (3) evaluation points: after the general preparation period (pre-season evaluation), after the 1st phase of the competitive schedule (mid-season evaluation) and after the 2nd phase of the competitive schedule (post-season evaluation). For the anthropometric analysis, height, body mass, body mass index, muscle mass, fat mass and the girths of the biceps, trunk, abdomen, thigh and lower leg were measured. For the physical capacity analysis, aerobic endurance, the work performed by the lower limbs during the vertical jump, lower-limb power during running, agility and flexibility were evaluated. For the anthropometric characteristics, all three (3) age groups showed an increase in body mass and a stabilization of fat mass, explained by the increase in muscle mass. Physical capacities improved in all three (3) age groups, mainly from the pre-season to the mid-season evaluation, with a plateau from mid-season to the end of the season. Given the time spent and the logistics required for the data collection in the first study, a platform was developed to support and simplify the whole process. Microsoft® Visual Studio 2013 was used, with the vb.net programming language backed by an SQL database.
A logical diagram was defined to streamline the interaction of the different exercises with the database, covering the agility, endurance, flexibility, speed, jump-height and anthropometric tests. Understanding the variations that occur over a sports season according to each team's competitive schedule and the players' developmental ages, together with the adoption of new technologies to support the complex training process, can serve as a fundamental tool in the evaluation and monitoring of sports training.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Modern software applications are becoming more dependent on database management systems (DBMSs). DBMSs are usually used as black boxes by software developers. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches that developers use nowadays. Using ORM, objects in Object-Oriented languages are mapped to records in the database, and object manipulations are automatically translated to SQL queries. As a result of such conceptual abstraction, developers do not need deep knowledge of databases; however, all too often this abstraction leads to inefficient and incorrect database access code. Thus, this thesis proposes a series of approaches to improve the performance of database-centric software applications that are implemented using ORM. Our approaches focus on troubleshooting and detecting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems based on their severity. We first conduct an empirical study on the maintenance of ORM code in both open source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and that there is a need for tools that can help improve/tune the performance of ORM-based applications. Thus, we propose approaches along two dimensions to help developers improve the performance of ORM-based applications: 1) helping developers write more performant ORM code; and 2) helping developers configure ORM configurations. To provide tooling support to developers, we first propose static analysis approaches to detect performance anti-patterns in the source code. We automatically rank the detected anti-pattern instances according to their performance impacts. Our study finds that by resolving the detected anti-patterns, application performance can be improved by 34% on average.
We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27--138%. Through our case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
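The best-known family of database access anti-patterns of the kind described above is the "N+1 query": a lazy-loaded relation read inside a loop silently issues one query per object instead of a single batched fetch. The thesis does not spell out its anti-pattern catalogue in this abstract, so the following is only an illustrative sketch of the access pattern itself, using plain Python and sqlite3 with a hypothetical author/book schema rather than any real ORM:

```python
import sqlite3

# Hypothetical two-table schema, standing in for ORM-mapped entities.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book(id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO book VALUES (1, 1, 'A1'), (2, 1, 'A2'), (3, 2, 'B1');
""")

def titles_n_plus_one():
    """Anti-pattern: one query for the authors, then one query per author.
    This is what an ORM emits when a lazy relation is read in a loop."""
    out = []
    for (author_id,) in conn.execute("SELECT id FROM author ORDER BY id"):
        for (title,) in conn.execute(
                "SELECT title FROM book WHERE author_id = ? ORDER BY id",
                (author_id,)):
            out.append(title)
    return out  # issues 1 + N queries

def titles_single_join():
    """Resolved version: a single JOIN fetches the same data in one query,
    which is roughly what eager fetching would generate."""
    return [t for (t,) in conn.execute(
        "SELECT b.title FROM book b JOIN author a ON b.author_id = a.id "
        "ORDER BY b.id")]
```

Both functions return the same titles; the difference a detector would flag is the query count, which grows linearly with the number of authors in the first version and stays constant in the second.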
Abstract:
Accessing databases via the universal query language SQL poses a major challenge for non-specialists. As a user-friendly alternative, various visual query languages (VQLs) for classic PCs have therefore been investigated since the 1970s. The goal of the present work is to develop and evaluate a generic VQL that enables gesture-based exploration of databases at the schema and instance-data level on mobile devices, particularly tablets. To this end, various presentation forms, query strategies and visual hints for foreign-key relationships that support the user in navigating through the data are examined. In a requirements analysis, visualizing the data and relationships by means of a space-saving nested NF2 representation proved particularly advantageous. To control the database exploration, a suitable gesture language consisting of stroke, multi-touch and mid-air gestures is presented. The overall concept of representation and gesture control was examined for its practical feasibility using the GBXT prototype developed in this work, a platform-independent single-page application for various mobile devices built with JavaScript and HTML5/CSS3.
Abstract:
A web application can be described as an application that responds to user requests through a browser. Object-oriented web applications are gaining ever more ground in companies today. Reducing operating costs and managing information depends on each company's vision and mission for web application development, whereas a desktop application would require other means for its maintenance, at higher cost. Ease of access is particularly valuable at ASA, where our servers are spread across the international airports of Cabo Verde: no installation on a computer is needed, and access is made solely through the browser. A major advantage is that updates need only be applied on the server rather than machine by machine, along with the application's scalability. The application chosen for development will help manage the vehicle fleet of the fire brigade of the Cesária Évora International Airport. Named GestFrota, it is programmed in PHP, HTML, CSS, jQuery and SQL. The system supports the daily completion of equipment checklists, vehicle registration and fire-extinguisher management, among other features.
Abstract:
This research aims to understand the factors that influence consumers' intention to purchase online, and to identify, among these factors, those that influence users and non-users of electronic commerce. It is thus an applied, exploratory and descriptive study, developed under a quantitative model. Data collection was done through a questionnaire administered to a sample of 194 graduate students from the Centre for Applied Social Sciences of UFRN, and data analysis was performed using descriptive statistics, confirmatory factor analysis, and simple and multiple linear regression analysis. The descriptive statistics revealed that respondents in general, and users of electronic commerce, have positive perceptions of ease of use, usefulness and social influence regarding buying online, and intend to make purchases on the Internet over the next six months. Non-users of electronic commerce, by contrast, do not trust the Internet to transact business, have negative perceptions of risk and social influence regarding purchasing online, and do not intend to make purchases on the Internet over the next six months. Through confirmatory factor analysis, six factors were established: behavioral intention, perceived ease of use, perceived usefulness, perceived risk, trust and social influence. Through multiple regression analysis, it was observed that all of these factors influence the online purchase intentions of respondents in general; that only social influence does not affect the intention of users of electronic commerce to continue buying on the Internet; and that only trust and social influence affect the intention of non-users of electronic commerce to purchase online. Through simple regression analysis, it was found that trust influences the perceptions of ease of use, usefulness and risk of respondents in general and of users of electronic commerce, and that trust does not influence the risk perceptions of non-users of electronic commerce.
Finally, it was also found that perceived ease of use influences perceived usefulness in all three groups. Given this scenario, it was concluded that it is extremely important for organizations that sell online to know the factors that influence consumers' purchase intentions in order to gain space in their market.
Consumer perceived risk, risk reduction strategies and transaction intentions in online marketplaces
Abstract:
Even though online commerce has garnered vast academic interest in recent years, the theoretical grounds for consumer behavior online remain ambiguous. Despite the globally rapid growth of online commerce, only a fraction of Internet browsers end up purchasing goods online. This is argued to be caused by the intangible and distant nature of the Internet, which creates overwhelming perceived risks for consumers and negatively affects transaction intentions. To combat perceived risks, consumers may actively or passively seek to relieve those risks to a tolerable level. These risk reduction strategies comprise both institutional mechanisms and consumer risk reduction strategies. The objective of this thesis is to provide further understanding of the relationships between consumer perceived risk, risk reduction strategies and transaction intentions in online marketplaces. To serve this objective, a quantitative approach was chosen as the method for conducting the empirical research. The data was collected with an online survey distributed through a discussion board, using a random sampling approach. The proposed research model was examined with a set of hierarchical regression analyses. The results revealed several direct relationships as well as moderating interaction effects. The key finding of this thesis is that institutional risk reduction mechanisms contribute significantly to consumer perceived risks: these mechanisms have the potential to reduce perceived risks and may therefore stimulate transaction intentions. Additionally, it was observed that risk reduction strategies moderate the relationship between intermediary-provided risk relievers, consumer perceived risks and transaction intentions.
Retailer-related risk reduction strategies were also shown to reinforce the effectiveness of payment methods; however, the feedback and monitoring mechanism was shown to have a diminishing effect on perceived risk only when consumers did not rely on product-related risk reduction strategies. The present thesis also illustrates the importance of effective information search, as such consumers are more willing to transact when the perceived risks become less significant. For managerial purposes, the importance of well-functioning institutional mechanisms cannot be emphasized enough.