761 results for Query
Abstract:
In his discussion, "Database As A Tool For Hospitality Management," William O'Brien, Assistant Professor, School of Hospitality Management at Florida International University, offers at the outset: "Database systems offer sweeping possibilities for better management of information in the hospitality industry. The author discusses what such systems are capable of accomplishing." O'Brien opens with a bit of background on database system development, which also lends an impression as to the complexion of the rest of the article; it is a shade technical. "In early 1981, Ashton-Tate introduced dBase II. It was the first microcomputer database management processor to offer relational capabilities and a user-friendly query system combined with a fast, convenient report writer," O'Brien informs. "When 16-bit microcomputers such as the IBM PC series were introduced late the following year, more powerful database products followed: dBase III, Friday!, and Framework. The effect on the entire business community, and the hospitality industry in particular, has been remarkable," he further offers with his informed outlook. Professor O'Brien presents a few anecdotal situations to illustrate how much a comprehensive database system means to a hospitality operation, especially when billing is involved. Although attitudes about computer systems, as well as the systems themselves, have changed since this article was written, there is pertinent, fundamental information to be gleaned. Regarding the erosion of the personal touch when a customer is engaged with a computer system, O'Brien says, "A modern data processing system should not force an employee to treat valued customers as numbers…" He also cautions, "Any computer system that decreases the availability of the personal touch is simply unacceptable." On a system's ability to process information, O'Brien suggests that in the past businesses were so enamored with simply having an automated system that they failed to take full advantage of its capabilities. O'Brien says that a lot of savings, in time and money, went unnoticed and/or under-appreciated. Today, everyone has an integrated system, and the wise business manager is the one who takes full advantage of all his resources. O'Brien invokes the 80/20 rule, and offers, "…the last 20 percent of results costs 80 percent of the effort. But times have changed. Everyone is automating data management, so that last 20 percent that could be ignored a short time ago represents a significant competitive differential." The evolution of data systems takes center stage for much of the article; pitfalls also emerge.
Abstract:
In their discussion, "Database System for Alumni Tracking," Steven Moll, Associate Professor, and William O'Brien, Assistant Professor, School of Hospitality Management at Florida International University, initially state: "The authors describe a unique database program which was created to solve problems associated with tracking hospitality majors subsequent to graduation." "…and please, whatever you do, keep in touch with your school; join an alumni organization. It is a great way to engage the resources of your school to help further your career," says Professor Claudia Castillo in addressing a group of students attending her Life after College seminar on 9/18/2009. This is a very good point, and it is obviously germane to the article at hand. "One of the greatest strengths of a hospitality management school, a strength that grows with each passing year, is its body of alumni," say the authors. "Whether in recruiting new students or placing graduates, whether in fund raising or finding scholarship recipients, whatever the task, the network of loyal alumni stands ready to help." The caveat is that these resources are only available if students and school, faculty and alumni can keep track of each other, say Professors Moll and O'Brien. The authors want you to know that the practice is now considered essential to success, especially in the hospitality industry, where the fluid nature of the business makes networking de rigueur to accomplishment. "When the world was a smaller, slower place, it was fairly easy for graduates to keep track of each other; there weren't that many graduates and they didn't move that often," say the authors. "Now the hospitality graduate enters an international job market and may move five times in the first four years of employment," they expand on that thought. In the contemporary atmosphere, linking human resources from institution to marketplace is relatively easy to do. "How can an association keep track of its graduates? There are many techniques, but all of them depend upon adequate recordkeeping," Moll and O'Brien answer their own query. "A few years ago that would have meant a group of secretaries; today it means a database system," they say. Moll and O'Brien discuss the essentials of compiling and programming such a comprehensive database: the body of information to include, guidelines on the problems encountered, and how to avoid the pitfalls. They use the Florida International University hospitality database as the template for their example.
Abstract:
In the discussion, "Selection Of Students For Hotel Schools: A Comparative Study," William Morgan, Professor, School of Hospitality Management at Florida International University, offers this initial observation: "Standards for the selection of students into schools of hospitality management around the world vary considerably when it comes to measuring attitudes toward the industry. The author discusses current standards and recommends some changes." In addition to intellectual ability, Professor Morgan wants you to know that an intangible element such as attitude is an equally important consideration for students seeking curricula and careers in the hospitality field. "…breaches in behavior or problems in the tourist employee encounter are often caused by attitudinal conditions which pre-exist the training and which were not able to be totally corrected by the unfreezing, movement, and refreezing processes required in attitudinal change," says Morgan. "…other than for some requirements for level or grade completed or marks obtained, 26 of the 54 countries sampled (48.1 percent) had no pre-selection process at all. Of those having some form of a selection process (in addition to grades), 14 schools in 12 countries (22.2 percent) had a formal admissions examination," Professor Morgan empirically provides. "It was impossible, however, to determine the scope of this admissions examination as it might relate to attitude." The attitude intangible is a difficult one to quantify. With an apparent sameness among hotels, restaurants, and their facilities, the significant distinctions are to be found in their employees. This makes the selection process, for both schools and employers, a high priority. Moreover, can a student, or a prospective employee, overcome stereotypes and prejudices to provide a high degree of service in the hospitality industry? This query is an important element of the article. "If utilized in the hotel, technical, or trade school or in the hiring process at the individual facility, this [hiring] process would provide an opportunity to determine if the prospective student or worker is receptive to the training to be received," advises Professor Morgan. "Such a student or worker is realistic in his aims and aspirations, ready in his ability to receive training, and responsive to the needs of the guest, often someone very different from himself in language, dress, or degree of creature comforts desired," he further counsels. Professor Morgan looks to transactional analysis, role playing, languages, and cross-cultural education as playing significant roles in producing well-intentioned and knowledgeable employees; he expands upon these concepts in the article. Professor Morgan holds The International Center of Glion, Switzerland in high regard and cites that program's efforts to maintain relationships and provide graduates with ongoing attitudinal enlightenment programs.
Abstract:
Large read-only or read-write transactions with a large read set and a small write set constitute an important class of transactions used in such applications as data mining, data warehousing, statistical applications, and report generators. Such transactions are best supported with optimistic concurrency, because locking large amounts of data for extended periods of time is not an acceptable solution. The abort rate of regular optimistic concurrency algorithms, however, increases exponentially with the size of the transaction. The algorithm proposed in this dissertation solves this problem with a new transaction scheduling technique that allows a large transaction to commit safely with a probability that can be several orders of magnitude greater than under regular optimistic concurrency algorithms. A performance simulation study and a formal proof of serializability and external consistency of the proposed algorithm are also presented. This dissertation also proposes a new query optimization technique, lazy queries. Lazy Queries is an adaptive query execution scheme that optimizes itself as the query runs. Lazy queries can be used to find an intersection of sub-queries very efficiently, requiring neither full execution of large sub-queries nor any statistical knowledge about the data. Finally, an efficient optimistic concurrency control algorithm used in a massively parallel B-tree with variable-length keys is introduced. B-trees with variable-length keys can be used effectively in a variety of database types. In particular, we show how such a B-tree was used in our implementation of a semantic object-oriented DBMS. The concurrency control algorithm uses semantically safe optimistic virtual "locks" that achieve very fine granularity in conflict detection. This algorithm ensures serializability and external consistency by using logical clocks and backward validation of transactional queries. A formal proof of correctness of the proposed algorithm is also presented.
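The backward validation named in the closing paragraph is a standard optimistic-concurrency building block, so it can be illustrated; the sketch below is a minimal single-node Python rendering under assumed names (Validator, try_commit), not the dissertation's algorithm. It also makes visible why abort rates climb with read-set size: the larger the read set, the more likely it intersects some later committer's write set.

```python
# Minimal sketch of backward validation with a logical commit clock.
# Illustrative only; the dissertation's scheduler is more elaborate.

class Transaction:
    def __init__(self, start_clock):
        self.start_clock = start_clock   # logical clock at begin
        self.read_set = set()
        self.write_set = set()

class Validator:
    def __init__(self):
        self.clock = 0                   # logical commit clock
        self.committed = []              # (commit_clock, write_set) history

    def begin(self):
        return Transaction(self.clock)

    def try_commit(self, txn):
        # Backward validation: txn aborts if any transaction that
        # committed after txn began wrote an item txn has read.
        for commit_clock, write_set in self.committed:
            if commit_clock > txn.start_clock and write_set & txn.read_set:
                return False             # conflict: abort and retry
        self.clock += 1
        self.committed.append((self.clock, frozenset(txn.write_set)))
        return True
```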
Abstract:
Over the past five years, XML has been embraced by both the research and industrial communities due to its promising prospects as a new data representation and exchange format on the Internet. The widespread popularity of XML creates an increasing need to store XML data in persistent storage systems and to enable sophisticated XML queries over the data. The currently available approaches to XML storage and retrieval are limited: native approaches are not yet mature, while non-native approaches such as the relational database approach cause inflexibility, heavy fragmentation, and excessive join operations. In this dissertation, I studied the issue of storing and retrieving XML data using the Semantic Binary Object-Oriented Database System (Sem-ODB), to leverage the advanced Sem-ODB technology with the emerging XML data model. First, a meta-schema based approach was implemented to address the data-model mismatch inherent in non-native approaches. The meta-schema based approach captures the meta-data of both Document Type Definitions (DTDs) and Sem-ODB Semantic Schemas, and thus enables a dynamic and flexible mapping scheme. Second, a formal framework was presented to ensure precise and concise mappings; in this framework, both the schemas and the conversions between them are formally defined and described. Third, after the major features of an XML query language, XQuery, were analyzed, a high-level XQuery to Semantic SQL (Sem-SQL) query translation scheme was described. This translation scheme takes advantage of the navigation-oriented query paradigm of Sem-SQL and thus avoids the excessive-join problem of relational approaches. Finally, the modeling capability of the Semantic Binary Object-Oriented Data Model (Sem-ODM) was explored from the perspective of conceptually modeling an XML Schema using a Semantic Schema. The study revealed that the advanced features of the Sem-ODB, such as multi-valued attributes, surrogates, and the navigation-oriented query paradigm, are indeed beneficial in coping with XML storage and retrieval using a non-XML approach. Furthermore, extensions to the Sem-ODB to make it work more effectively with XML data were also proposed.
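As a toy illustration of the navigation-versus-join contrast credited above, the sketch below maps an XPath-style path to a single navigation expression; the emitted syntax is hypothetical and merely stands in for Sem-SQL, whose actual grammar is not shown in this abstract.

```python
# Illustrative path-to-navigation translation. A relational shredding
# of XML needs roughly one join per path step; a navigation-oriented
# query dereferences each step in place. Output syntax is hypothetical.

def translate_path(path):
    """Map an XPath-style path such as /book/author/name to a single
    navigation expression instead of one join per step."""
    steps = [s for s in path.split("/") if s]
    return f"SELECT {'.'.join(steps)} FROM {steps[0]}"

print(translate_path("/book/author/name"))
# -> SELECT book.author.name FROM book
```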
Abstract:
The deployment of wireless communications, coupled with the popularity of portable devices, has led to significant research in the area of mobile data caching. Prior research has focused on developing solutions that allow applications to run in wireless environments using proxy-based techniques. Most of these approaches are semantic-based and do not provide adequate support for representing the context of a user (i.e., the interpreted human intention). Although context may be treated implicitly, it is still crucial to data management. To address this challenge, this dissertation focuses on predicting two things: (i) the future location of the user and (ii) the locations for which fetched data items remain valid answers to the query. Using this approach, more complete information about the dynamics of an application environment is maintained. The contribution of this dissertation is a novel data caching mechanism for pervasive computing environments that can adapt dynamically to a mobile user's context. We design and develop a conceptual model and context-aware protocols for wireless data caching management. Our replacement policy uses the validity of the data fetched from the server and the neighboring locations to decide which cache entries are least likely to be needed in the future and are therefore good candidates for eviction when cache space is needed. The context-aware prefetching algorithm exploits the query context to effectively guide the prefetching process; the query context is defined using a mobile user's movement pattern and the context of the requested information. Numerical results and simulations show that the proposed prefetching and replacement policies significantly outperform conventional ones. Anticipated applications of these solutions include biomedical engineering, tele-health, medical information systems, and business.
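A minimal sketch of a validity-driven eviction decision of the kind described is given below, assuming circular validity regions and a single predicted next location; the names and geometry are illustrative, not the dissertation's protocol.

```python
# Context-aware eviction sketch: each cache entry carries the region
# (center, radius) where its answer stays valid; evict the entry least
# likely to cover the user's predicted future location.

import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eviction_victim(cache, predicted_location):
    def uselessness(entry):
        key, (center, radius) = entry
        # How far outside the entry's validity region the predicted
        # location lies (0 if still inside, i.e. still useful).
        return max(0.0, distance(center, predicted_location) - radius)
    return max(cache.items(), key=uselessness)[0]

cache = {"hotel_list": ((0, 0), 5.0), "gas_prices": ((40, 3), 2.0)}
print(eviction_victim(cache, (1, 1)))   # -> "gas_prices"
```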
Abstract:
Background: As the use of electronic health records (EHRs) becomes more widespread, so does the need to search them and provide effective information discovery within them. Querying by keyword has emerged as one of the most effective paradigms for searching. Most work in this area is based on traditional Information Retrieval (IR) techniques, where each document is compared individually against the query. We compare the effectiveness of two fundamentally different techniques for keyword search of EHRs. Methods: We built two ranking systems. The traditional BM25 system exploits the EHRs' content without regard to associations among the entities within. The Clinical ObjectRank (CO) system exploits the entities' associations in EHRs, using an authority-flow algorithm to discover the most relevant entities. BM25 and CO were deployed on an EHR dataset of the cardiovascular division of Miami Children's Hospital. Using sequences of keywords as queries, sensitivity and specificity were measured by two physicians for a set of 11 queries related to congenital cardiac disease. Results: Our pilot evaluation showed that CO outperforms BM25 in terms of sensitivity (65% vs. 38%), a 71% average improvement, while maintaining comparable specificity (64% vs. 61%). Conclusions: Authority-flow techniques can greatly improve the detection of relevant information in EHRs and hence deserve further study.
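The BM25 baseline is a published ranking function, so it can be sketched exactly; below is a compact Python version of the Okapi formulation over a toy token corpus, with the usual k1 and b parameters, for contrast with CO's link-aware authority flow (the documents and query are invented).

```python
# Okapi BM25: scores each document independently of any entity links,
# which is precisely the property the CO system improves on.

import math

def bm25_score(query_terms, doc, docs, k1=1.5, b=0.75):
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    score = 0.0
    for term in query_terms:
        n = sum(1 for d in docs if term in d)         # document frequency
        idf = math.log((N - n + 0.5) / (n + 0.5) + 1)
        tf = doc.count(term)                          # term frequency
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = [["aortic", "stenosis", "echo"],
        ["ventricular", "septal", "defect"],
        ["aortic", "valve", "repair", "echo", "echo"]]
query = ["aortic", "echo"]
ranked = sorted(docs, key=lambda d: bm25_score(query, d, docs), reverse=True)
```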
Abstract:
Moving-objects database systems are the most challenging sub-category among spatio-temporal database systems. A database system that updates the location information of GPS-equipped moving vehicles in real time has to meet even stricter requirements. Currently existing data storage models and indexing mechanisms work well only when the number of moving objects in the system is relatively small. This dissertation research aimed at real-time tracking and history retrieval for massive numbers of vehicles moving on road networks. A complete solution is provided for the real-time update of the vehicles' location and motion information, range queries on current and historical data, and prediction of vehicles' movement in the near future. To achieve these goals, a new approach called Segmented Time Associated to Partitioned Space (STAPS) was first proposed for building and manipulating the indexing structures of moving-objects databases. Applying the STAPS approach, an indexing structure that associates a time interval tree with each road segment was developed for real-time database systems of vehicles moving on road networks. The indexing structure uses affordable storage to support real-time data updates and efficient query processing, and the update and query performance it provides is consistent, without restrictions such as a time window or an assumption of linear moving trajectories. An application system design based on a distributed system architecture with centralized organization was developed to maximally support the proposed data and indexing structures; the suggested architecture is highly scalable and flexible. Finally, based on a real-world application model of vehicles moving region-wide, the main implementation issues of such a system were addressed.
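A simplified sketch of the per-segment time-interval indexing idea follows, assuming a flat list of (enter, leave, vehicle) records per segment; a production system would use a balanced interval tree, and the names here are illustrative rather than STAPS's actual structures.

```python
# One time-interval index per road segment: a range query touches only
# the segments of interest, not a global spatial index.

from collections import defaultdict

segments = defaultdict(list)    # segment_id -> [(t_in, t_out, vehicle_id)]

def record_traversal(segment_id, t_in, t_out, vehicle_id):
    segments[segment_id].append((t_in, t_out, vehicle_id))

def vehicles_on_segment(segment_id, t_from, t_to):
    """All vehicles whose stay on the segment overlaps [t_from, t_to];
    answers both current (t_to = now) and historical range queries."""
    return [v for t_in, t_out, v in segments[segment_id]
            if t_in <= t_to and t_out >= t_from]

record_traversal("I95-seg42", 100, 140, "car-7")
record_traversal("I95-seg42", 130, 190, "car-9")
print(vehicles_on_segment("I95-seg42", 135, 150))   # -> ['car-7', 'car-9']
```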
Abstract:
Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed Data Extractor, a data retrieval solution that allows us to define queries against Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system; with its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval, and works as an intermediary between applications and sites. It utilizes a two-fold "custom wrapper" approach to information retrieval: wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that draw on a specially designed library of data retrieval, parsing, and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis, and processing. Data Extractor is designed to act as a data retrieval server as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user; this approach allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites. The approach provides accuracy of data retrieval, together with power and flexibility in handling complex cases.
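A minimal sketch of the scripted-wrapper idea is shown below, assuming a regular-expression row pattern; the URL and pattern are placeholders, and the thesis's actual scripting language and Java library are not reproduced here.

```python
# A "custom wrapper" in miniature: fetch a page and extract tuples so
# a query processor can consume the site like a relational source.

import re
import urllib.request

def wrap(url, row_pattern):
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    return [m.groups() for m in re.finditer(row_pattern, html)]

# e.g. rows of a two-column HTML table -> (name, price) tuples;
# both the site and the pattern below are hypothetical.
rows = wrap("http://example.com/catalog",
            r"<td>(?P<name>[^<]+)</td>\s*<td>(?P<price>[^<]+)</td>")
```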
Abstract:
Today, databases have become an integral part of information systems. Over the past two decades, different database systems have been developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches; however, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access across heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High Performance Database Research Center (HPDRC). A major impediment to the ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of this thesis include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution that investigates the extents of the meta-data constructs of component schemas, shown to be correct, complete, and unambiguous; (iii) a semi-automated technique for identifying semantic relations, the basis of the semantic knowledge needed for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating the meta-data and knowledge acquired during the integration process, which acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of this work.
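Contribution (iii), identifying semantic relations through a shared ontology, can be illustrated with a toy matcher; the ontology, attribute names, and function below are hypothetical stand-ins for the thesis's technique.

```python
# Context mediation in miniature: attributes from two component
# schemas are related when they map to the same ontology concept.

ontology = {                      # term -> shared concept
    "cust_name": "CustomerName", "client": "CustomerName",
    "amt": "Amount", "total": "Amount",
}

def semantic_matches(schema_a, schema_b):
    """Pairs of attributes that denote the same concept and are thus
    candidates for integration in the global view."""
    return [(a, b) for a in schema_a for b in schema_b
            if ontology.get(a) and ontology.get(a) == ontology.get(b)]

print(semantic_matches(["cust_name", "amt"], ["client", "total", "vat"]))
# -> [('cust_name', 'client'), ('amt', 'total')]
```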
Abstract:
Physical rehabilitation (PR) services are of fundamental importance in combating the global epidemic of traffic accidents (TA). Given the numerous physical and social consequences for survivors, quality problems in access to PR are a hazard to victims' recovery. It is necessary to improve the management of service quality, assessing priority dimensions and intervening in their causes, to ensure that rehabilitation is available in time and under suitable conditions. This study aimed to identify barriers to access to rehabilitation from the perspective of TA victims and professionals, and to estimate access to rehabilitation and its associated factors. It is an exploratory qualitative and quantitative study conducted in Natal/RN, with semi-structured interviews of 19 health professionals and a telephone survey of 155 traffic accident victims. To explore barriers to access, the interviews were transcribed and analyzed using the Alceste software (version 4.9). The interviews were guided by the question: "What barriers hinder or prevent access to physical rehabilitation for victims of traffic accidents?". The classes and axes produced by Alceste were named through ad hoc consultation with three external researchers, with subsequent consensus on the most representative names. We conducted a multivariate analysis of the influence of accident-related, sociodemographic, clinical, and care variables on access to rehabilitation. Associations with p < 0.20 in the bivariate analysis were submitted to stepwise logistic regression, with p < 0.05 and a 95% confidence interval (CI). The main barriers identified were "bureaucratic regulation," "long time to start rehabilitation," "no post-surgery referral," and "inefficiency of public services." These barriers were organized into a theoretical model built from a cause-and-effect diagram, in which insufficient access to rehabilitation appears as the product of causes related to organizational structure, work processes, professionals, and patients. Two logistic regression models were constructed: "general access to rehabilitation" and "access to rehabilitation in the public service." Overall, 51.6% of patients had access to rehabilitation: 32.9% in the public and 17.9% in the private sector. The "general access to rehabilitation" model included the variables income (OR: 3.7), informal employment (OR: 0.11), unemployment (OR: 0.15), perceived need for PR (OR: 10), and referral (OR: 27.5). The "access to rehabilitation in the public service" model was represented by referral to the public service (OR: 23.0) and private health plan (OR: 0.07). Despite the known influence of social determinants on access to health services, a situation difficult for public administration to control, this study found that the organizational and bureaucratic procedures established in health care greatly determine access to rehabilitation. The access difficulties show the seriousness of the problem, and the associated factors suggest the need for improvements in comprehensive care for TA survivors, to avoid unnecessarily prolonging the suffering of the victims of this epidemic.
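The two-step modeling protocol (bivariate screening at p < 0.20, then a multivariable logistic model keeping p < 0.05 predictors) is a standard epidemiological workflow; the sketch below reproduces its shape on synthetic data with statsmodels, and none of the numbers correspond to the study's.

```python
# Schematic reproduction of the screening-then-model workflow on
# synthetic data; variable names are placeholders, not study data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 155
X = rng.integers(0, 2, size=(n, 3)).astype(float)        # e.g. referral, income, plan
y = (rng.random(n) < 0.3 + 0.4 * X[:, 0]).astype(float)  # access outcome

# Step 1: bivariate screening, keep predictors with p < 0.20
keep = [j for j in range(X.shape[1])
        if sm.Logit(y, sm.add_constant(X[:, [j]])).fit(disp=0).pvalues[1] < 0.20]

# Step 2: multivariable model on the screened predictors
fit = sm.Logit(y, sm.add_constant(X[:, keep])).fit(disp=0)
print(np.exp(fit.params[1:]))    # odds ratios, as reported in the study
```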
Abstract:
Today, the need to let users formulate queries over graph databases in a simpler and, above all, more intuitive way has led research institutes to propose visual query methods. One of the proposed systems is GraphVista. This system is based on the idea of dividing the query into two macro-phases. The first phase moves toward the requested result by excluding from further analysis data that can certainly not be part of the final answer, while the second phase puts the user in charge: the user can freely manipulate the results of the first phase in order to "search out" the desired information on his or her own. All of this is supported by an intuitive and effective interface that lets the user interactively navigate a graph database. The characteristics of the interface and the ability to formulate visual queries mean that the user does not need to master a specific query language. This thesis describes the GraphVista system and the technology on which it is built. Finally, a comparison is made between the effectiveness and ease of use of the GraphVista system and query formulation in a standard language such as SPARQL.
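For contrast with GraphVista's visual two-phase approach, the baseline the thesis compares against is SPARQL; the snippet below runs a small SPARQL query with the rdflib library over a toy graph (the library choice, data, and names are purely illustrative).

```python
# A textual SPARQL query, the kind of expertise visual querying is
# meant to spare the user: friends-of-friends over a toy graph.

from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
""", format="turtle")

results = g.query("""
PREFIX ex: <http://example.org/>
SELECT ?friend WHERE { ex:alice ex:knows ?who . ?who ex:knows ?friend . }
""")
for row in results:
    print(row.friend)    # -> http://example.org/carol
```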
Abstract:
This research, entitled "Promotion of citizenship by the community radio stations of the ABCD Paulista region, under political challenges and confrontations," studies 11 community radio stations authorized by the Ministry of Communications to operate in the Greater ABCD Paulista region. In the region, five of the seven cities host community radio stations: Diadema (radios "Navegantes" and "Nova Diadema"); Mauá (radios "Mauá" and "Z"); Ribeirão Pires (radio "Pérola da Serra"); Rio Grande da Serra (radio "Esplanada"); and São Bernardo do Campo (radios "Lírio dos Vales," "Nova Riacho," "Paraty," "Princesa," and "Represa"). The other two cities in the territory, Santo André and São Caetano do Sul, have no community stations authorized to operate. The objective of this study is to reveal the profile of these stations; the contribution they make to the processes of promoting citizenship and social inclusion; their structural operational problems of survival; and their reactions to overcome them. The methodology consists of bibliographic research, documentary research, interviews, visits to the stations, and analysis of their programming. The study covers the history of the region; the concepts of citizenship, participation, and community broadcasting; and the trajectory of the stations themselves. Official institutions were then consulted to identify the community radio stations authorized to operate in the ABCD region, followed by field research with several observation visits. Semi-structured interviews were conducted with the broadcasters and with other informants for this work, specialists in the subject. The study concludes that the 11 community stations of the ABCD Paulista face countless difficulties in keeping their radios on the air. These difficulties persist mainly because of the legislation governing this segment of community radio, which prevents it from obtaining commercial support and sponsorships.
Abstract:
OBJECTIVE: The Thrombolysis in Myocardial Infarction (TIMI) score is a validated tool for risk stratification in acute coronary syndrome. We hypothesized that the TIMI risk score would be able to risk stratify patients in an observation unit for acute coronary syndrome. METHODS: Study design: retrospective cohort study of consecutive adult patients placed in the observation unit of an urban academic hospital emergency department with an average annual census of 65,000, between 2004 and 2007. Exclusion criteria included elevated initial cardiac biomarkers, ST segment changes on ECG, unstable vital signs, or unstable arrhythmias. A composite of significant coronary artery disease (CAD) indicators, including diagnosis of myocardial infarction, percutaneous coronary intervention, coronary artery bypass surgery, or death within 30 days and 1 year, was abstracted via chart review and financial record query. The entire cohort was stratified by TIMI risk score (0-7), and composite event rates with 95% confidence intervals were calculated. RESULTS: In total, 2,228 patients were analyzed. Average age was 54.5 years, and 42.0% were male. The overall median TIMI risk score was 1. Eighty patients (3.6%) had 30-day and 119 (5.3%) had 1-year CAD indicators. There was a trend toward an increasing rate of composite CAD indicators at 30 days and 1 year with increasing TIMI score, ranging from event rates of 1.2% at 30 days and 1.9% at 1 year for a TIMI score of 0, to 12.5% at 30 days and 21.4% at 1 year for TIMI ≥ 4. CONCLUSIONS: In an observation unit cohort, the TIMI risk score is able to stratify patients into low-, moderate-, and high-risk groups.
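The TIMI (UA/NSTEMI) score itself is a published seven-item tally, one point each for age ≥ 65, ≥ 3 CAD risk factors, known CAD, aspirin use in the prior 7 days, severe recent angina, ST deviation, and elevated markers, so it can be sketched directly; the low/moderate/high cut points below are illustrative, inferred from the TIMI ≥ 4 grouping above rather than stated by the study.

```python
# TIMI score tally and an illustrative stratification into the kind of
# low/moderate/high groups the study reports.

TIMI_FACTORS = [
    "age_65_or_older", "three_or_more_cad_risk_factors", "known_cad",
    "aspirin_in_last_7_days", "severe_angina_24h",
    "st_deviation", "elevated_markers",
]

def timi_score(patient):
    """One point per factor present; range 0-7."""
    return sum(1 for f in TIMI_FACTORS if patient.get(f))

def risk_group(score):          # cut points assumed, not the study's
    if score <= 1:
        return "low"
    if score <= 3:
        return "moderate"
    return "high"               # TIMI >= 4

patient = {"age_65_or_older": True, "aspirin_in_last_7_days": True}
print(timi_score(patient), risk_group(timi_score(patient)))  # 2 moderate
```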
Abstract:
Big Data Analytics is an emerging field, as massive storage and computing capabilities have been made available by advanced e-infrastructures. The Earth and environmental sciences are likely to benefit from Big Data Analytics techniques that support the processing of the large number of Earth Observation datasets currently acquired and generated through observations and simulations. However, Earth science data and applications present specificities in terms of the relevance of geospatial information, the wide heterogeneity of data models and formats, and the complexity of processing. Therefore, Big Earth Data Analytics requires specifically tailored techniques and tools. The EarthServer Big Earth Data Analytics engine offers a solution for coverage-type datasets, built around a high-performance array database technology and the adoption and enhancement of standards for service interaction (OGC WCS and WCPS). The EarthServer solution, driven by requirements collected from scientific communities and international initiatives, provides a holistic approach that ranges from query languages and scalability up to mobile access and visualization. The result is demonstrated and validated through the development of lighthouse applications in the Marine, Geology, Atmospheric, Planetary, and Cryospheric science domains.
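To show what querying such an engine looks like in practice, here is a sketch of a WCPS request against an EarthServer-style endpoint; the host URL and coverage name are placeholders, and the expression follows the rasdaman/OGC WCPS idiom of slicing a coverage and encoding the result.

```python
# Send a WCPS expression to a WCS processing endpoint and read back
# the encoded result. Endpoint and coverage below are placeholders.

import urllib.parse
import urllib.request

# Slice one lat/long point of a coverage over a year; return monthly
# values as CSV.
wcps = ('for $c in (AvgLandTemp) return encode('
        '$c[Lat(53.08), Long(8.80), ansi("2014-01":"2014-12")], "csv")')

url = ("http://example.org/rasdaman/ows?service=WCS&version=2.0.1"
       "&request=ProcessCoverages&query=" + urllib.parse.quote(wcps))
print(urllib.request.urlopen(url).read().decode())
```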