937 results for Relational Databases
Abstract:
Modern geographical databases, which are at the core of geographic information systems (GIS), store a rich set of aspatial attributes in addition to geographic data. Typically, aspatial information comes in textual and numeric format. Retrieving information constrained on both spatial and aspatial data from geodatabases gives GIS users the ability to perform more interesting spatial analyses, and lets applications support composite location-aware searches; for example, in a real estate database: “Find the homes for sale nearest to my current location that have a backyard and whose prices are between $50,000 and $80,000”. Efficient processing of such queries requires combined indexing strategies over multiple types of data. Existing spatial query engines commonly apply a two-filter approach (a spatial filter followed by a nonspatial filter, or vice versa), which can incur large performance overheads. More recently, the amount of geolocation data in databases has grown rapidly, due in part to advances in geolocation technologies (e.g., GPS-enabled smartphones) that allow users to associate location data with objects or events. This growth poses data-ingestion challenges for practical GIS databases. In this dissertation, we first show how indexing spatial data with R-trees (a typical data pre-processing task) can be scaled with MapReduce, a widely adopted parallel programming model for data-intensive problems. The evaluation of our algorithms on a Hadoop cluster showed close-to-linear scalability in building R-tree indexes. Subsequently, we develop efficient algorithms for processing spatial queries with aspatial conditions. Novel techniques for simultaneously indexing spatial, textual, and numeric data are developed to that end.
Experimental evaluations with real-world, large spatial datasets measured query response times within the sub-second range for most cases, and up to a few seconds for a small number of cases, which is reasonable for interactive applications. Overall, these results show that the MapReduce parallel model is suitable for indexing tasks in spatial databases, and that an adequate combination of spatial and aspatial attribute indexes can attain acceptable response times for interactive spatial queries with constraints on aspatial data.
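A minimal sketch of the two-filter approach the abstract critiques (spatial filter first, then aspatial predicates), over a toy real-estate dataset. This is an illustrative baseline, not the dissertation's combined-index algorithm; all names and data are hypothetical.

```python
# Toy real-estate records with spatial (x, y) and aspatial (price, desc) data.
homes = [
    {"x": 1.0, "y": 2.0, "price": 60000, "desc": "3br home with backyard"},
    {"x": 5.0, "y": 5.0, "price": 75000, "desc": "condo downtown"},
    {"x": 1.5, "y": 2.5, "price": 90000, "desc": "cottage with backyard"},
]

def two_filter_query(points, bbox, price_range, keyword):
    """Spatial filter (bounding box) first, then numeric and textual filters.

    An engine with a combined index could evaluate all three predicates in
    one index traversal instead of materializing the intermediate list.
    """
    xmin, ymin, xmax, ymax = bbox
    # Filter 1: spatial predicate (point-in-rectangle).
    spatial_hits = [p for p in points
                    if xmin <= p["x"] <= xmax and ymin <= p["y"] <= ymax]
    # Filter 2: aspatial predicates (numeric range + keyword match).
    lo, hi = price_range
    return [p for p in spatial_hits
            if lo <= p["price"] <= hi and keyword in p["desc"]]

result = two_filter_query(homes, (0, 0, 3, 3), (50000, 80000), "backyard")
print([p["price"] for p in result])  # [60000]
```

Only the first home passes all three predicates: the second is outside the bounding box, and the third exceeds the price range.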
Abstract:
Large read-only or read-write transactions with a large read set and a small write set constitute an important class of transactions used in applications such as data mining, data warehousing, statistical applications, and report generators. Such transactions are best supported with optimistic concurrency, because locking large amounts of data for extended periods of time is not an acceptable solution. The abort rate of regular optimistic concurrency algorithms increases exponentially with the size of the transaction. The algorithm proposed in this dissertation solves this problem by using a new transaction scheduling technique that allows a large transaction to commit safely with a probability that can be several orders of magnitude higher than under regular optimistic concurrency algorithms. A performance simulation study and a formal proof of serializability and external consistency of the proposed algorithm are also presented.
This dissertation also proposes a new query optimization technique, lazy queries. Lazy Queries is an adaptive query execution scheme that optimizes itself as the query runs. Lazy queries can be used to find an intersection of sub-queries very efficiently, without requiring full execution of large sub-queries or any statistical knowledge about the data.
An efficient optimistic concurrency control algorithm used in a massively parallel B-tree with variable-length keys is introduced. B-trees with variable-length keys can be used effectively in a variety of database types. In particular, we show how such a B-tree was used in our implementation of a semantic object-oriented DBMS. The concurrency control algorithm uses semantically safe optimistic virtual "locks" that achieve very fine granularity in conflict detection. This algorithm ensures serializability and external consistency by using logical clocks and backward validation of transactional queries. A formal proof of correctness of the proposed algorithm is also presented.
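A hedged sketch of the general technique the abstract builds on, backward validation in optimistic concurrency control: a validating transaction must abort if any transaction that committed during its lifetime wrote an item it read. This is the textbook mechanism, not the dissertation's scheduling algorithm; the class and data below are made up for illustration.

```python
class Transaction:
    def __init__(self, start_ts):
        self.start_ts = start_ts   # logical-clock value at transaction start
        self.read_set = set()      # items the transaction has read
        self.write_set = set()     # items the transaction intends to write

def backward_validate(txn, committed):
    """Backward validation against already-committed transactions.

    committed: list of (commit_ts, write_set) pairs for finished transactions.
    Returns True if txn may commit, False if it must abort.
    """
    for commit_ts, write_set in committed:
        # Only transactions that committed after txn started can conflict;
        # a conflict exists if such a transaction wrote something txn read.
        if commit_ts > txn.start_ts and write_set & txn.read_set:
            return False
    return True

t = Transaction(start_ts=10)
t.read_set = {"a", "b"}
t.write_set = {"b"}
history = [(5, {"a"}), (12, {"c"})]   # no overlapping write after start
print(backward_validate(t, history))  # True
history.append((15, {"a"}))           # a committed write to a read item
print(backward_validate(t, history))  # False
```

The abstract's point is visible here: the larger the read set, the more likely some concurrent committed write intersects it, which is why naive optimistic schemes abort large transactions so often.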
Abstract:
Current technology permits connecting local networks via high-bandwidth telephone lines. Central coordinator nodes may use Intelligent Networks to manage data flow over dialed data lines (e.g., ISDN) and to establish connections between LANs. This dissertation focuses on cost minimization and on establishing operational policies for query distribution over heterogeneous, geographically distributed databases. Based on our study of query distribution strategies, public network tariff policies, and database interface standards, we propose methods for communication cost estimation, strategies for reducing bandwidth allocation, and guidelines for central-to-node communication protocols. Our conclusion is that dialed data lines offer a cost-effective alternative for implementing distributed database query systems, and that existing commercial software may be adapted to support query processing in heterogeneous distributed database systems.
Abstract:
Component-based Software Engineering (CBSE) and Service-Oriented Architecture (SOA) have become popular ways to develop software in recent years. During the life cycle of a software system, several components and services can be developed, evolved, and replaced. In production environments, the replacement of core components, such as databases, is often a risky and delicate operation in which several factors and stakeholders should be considered. A Service Level Agreement (SLA), according to the official ITILv3 glossary, is “an agreement between an IT service provider and a customer. The agreement consists of a set of measurable constraints that a service provider must guarantee to its customers.” In practical terms, an SLA is a document that a service provider delivers to its consumers with minimum quality-of-service (QoS) metrics. This work assesses and improves the use of SLAs to guide the transitioning process of databases in production environments. In particular, we propose SLA-based guidelines and a process to support migrations from a relational database management system (RDBMS) to a NoSQL one. The proposal is validated through case studies.
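The basic idea behind an SLA as "a set of measurable constraints" can be sketched as a threshold check: during a migration, measured QoS metrics are compared against the limits the SLA guarantees. The metric names and values below are hypothetical, chosen only to illustrate the mechanism.

```python
# Hypothetical SLA: maximum values the provider guarantees per metric.
sla = {"p99_latency_ms": 200, "error_rate": 0.01}

# Hypothetical measurements taken on the target (e.g., NoSQL) system.
measured = {"p99_latency_ms": 150, "error_rate": 0.02}

def sla_violations(sla, measured):
    """Return the metrics whose measured value exceeds the SLA threshold."""
    return {m: measured[m] for m, limit in sla.items() if measured[m] > limit}

print(sla_violations(sla, measured))  # {'error_rate': 0.02}
```

In a migration process of the kind the abstract describes, a non-empty violation set would signal that the transition to the new database does not yet meet the agreed service levels.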
Abstract:
What would a professional development experience rooted in the philosophy, principles, and practices of restorative justice look and feel like? This article describes how such a professional development project was designed to implement restorative justice principles and practices into schools in a proactive, relational and sustainable manner by using a comprehensive dialogic, democratic peacebuilding pedagogy. The initiative embodied a broad, transformative approach to restorative justice, grounded in participating educators’ identifying, articulating and applying personal core values. This professional development focused on diverse educators, their relationships, and conceptual understandings, rather than on narrow techniques for enhancing student understanding or changing student behaviour. Its core practice involved facilitated critical reflexive dialogue in a circle, organized around recognizing the impact of participants’ interactions on others, using three central, recurring questions: Am I honouring? Am I measuring? What message am I sending? Situated in the context of relational theory (Llewellyn, 2012), this restorative professional development approach addresses some of the challenges in implementing and sustaining transformative citizenship and peacebuilding pedagogies in schools. A pedagogical portrait of the rationale, design, and facilitation experience illustrates the theories, practices, and insights of the initiative, called Relationships First: Implementing Restorative Justice From the Ground Up.
Abstract:
This work is funded by NHMRC grant 1023197. Stacy Carter is funded by an NHMRC Career Development Fellowship 1032963.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Based on close examinations of instant message (IM) interactions, this chapter argues that an interactional sociolinguistic approach to computer-mediated language use could explain phenomena that previously could not be accounted for in computer-mediated discourse analysis (CMDA). Drawing on the theoretical framework of relational work (Locher, 2006), the analysis focuses on non-task-oriented talk and its function in forming and establishing communication norms in the team, as well as on micro-level phenomena such as hesitation, backchannel signals, and emoticons. The conclusions of this preliminary research suggest that the linguistic strategies used to substitute for audio-visual signals serve discursive functions and play an important role in relational work.
Abstract:
During the summer of 2016, Duke University Libraries staff began a project to update the way that research databases are displayed on the library website. The new research databases page is a customized version of the default A-Z list that Springshare provides for its LibGuides content management system. Duke Libraries staff made adjustments to the content and interface of the page. In order to see how Duke users navigated the new interface, usability testing was conducted on August 9th, 2016.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
The purpose of the study was to investigate the significance of the relational component of academic advisor training and development in the learning opportunities of the professional development program and the advisors’ evaluation score at Florida International University.
Abstract:
Modern software applications are becoming more dependent on database management systems (DBMSs). DBMSs are usually used as black boxes by software developers. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches in use today. With ORM, objects in object-oriented languages are mapped to records in the database, and object manipulations are automatically translated into SQL queries. As a result of this conceptual abstraction, developers do not need deep knowledge of databases; all too often, however, the abstraction leads to inefficient and incorrect database access code. This thesis therefore proposes a series of approaches to improve the performance of database-centric software applications implemented using ORM. Our approaches focus on detecting and troubleshooting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems by severity. We first conduct an empirical study on the maintenance of ORM code in both open-source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and that there is a need for tools that can help improve and tune the performance of ORM-based applications. We therefore propose approaches along two dimensions to help developers improve the performance of ORM-based applications: 1) helping developers write more performant ORM code; and 2) helping developers configure ORM configurations. To provide tooling support to developers, we first propose static analysis approaches to detect performance anti-patterns in the source code, automatically ranking the detected anti-pattern instances according to their performance impact. Our study finds that by resolving the detected anti-patterns, application performance can be improved by 34% on average.
We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27–138%. Through our case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
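One classic redundant-data-access anti-pattern of the kind such tools detect is the "N+1 queries" pattern: one query per parent row instead of a single join. The sketch below illustrates the pattern and its fix with raw `sqlite3` so it is self-contained; it is a generic illustration, not the thesis's detection tooling, and the schema is invented.

```python
import sqlite3

# In-memory toy database: authors and their books.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE author(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book(id INTEGER PRIMARY KEY, author_id INT, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bo');
    INSERT INTO book VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

# Anti-pattern: N+1 queries (1 for the authors + 1 per author for books),
# which is what a naive ORM loop over a lazy-loaded relation generates.
queries = 0
authors = db.execute("SELECT id, name FROM author").fetchall()
queries += 1
for author_id, _name in authors:
    db.execute("SELECT title FROM book WHERE author_id = ?",
               (author_id,)).fetchall()
    queries += 1
print("N+1 queries issued:", queries)  # 3

# Fix: a single join retrieves the same data in one round trip.
rows = db.execute("""SELECT a.name, b.title
                     FROM author a JOIN book b ON b.author_id = a.id""").fetchall()
print("joined rows:", len(rows))       # 3
```

With N parent rows the loop issues N+1 round trips while the join issues one, which is why resolving such anti-patterns yields the kind of double-digit performance gains the abstract reports.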
Abstract:
Many defining human characteristics, including theory of mind, culture, and language, relate to our sociality and facilitate the formation and maintenance of cooperative relationships. Therefore, deciphering the context in which our sociality evolved is invaluable in understanding what makes us unique as a species. Much work has emphasised group-level competition, such as warfare, in moulding human cooperation and sociality. However, competition and cooperation also occur within groups, and inter-individual differences in sociality have reported fitness implications in numerous non-human taxa. Here we investigate whether differential access to cooperation (relational wealth) is likely to lead to variation in fitness at the individual level among BaYaka hunter-gatherers. Using economic gift games we find that relational wealth: a) displays individual-level variation; b) provides advantages in buffering food risk, and is positively associated with body mass index (BMI) and female fertility; c) is partially heritable. These results highlight that individual-level processes may have been fundamental in the extension of human cooperation beyond small units of related individuals, and in shaping our sociality. Additionally, the findings offer insight into trends related to human sociality found in research from other fields such as psychology and epidemiology.
Abstract:
Marketing and policy researchers seeking to increase the societal impact of their scholarship should engage directly with relevant stakeholders. For maximum societal effect, this engagement needs to occur both within the research process and throughout the complex process of knowledge transfer. A relational engagement approach to research impact is proposed as complementary to, and building upon, traditional approaches. Traditional approaches to impact employ bibliometric measures and focus on the creation and use of journal articles by scholarly audiences, an important but incomplete part of the academic process. The authors suggest expanding the strategies and measures of impact to include process assessments for specific stakeholders across the entire course of impact: from the creation, awareness, and use of knowledge to societal impact. This relational engagement approach involves the co-creation of research with audiences beyond academia. The authors hope to begin a dialogue on the strategies researchers can adopt to increase the potential societal benefits of their research.
Abstract:
Background: There is a lack of reliable data on the epidemiology and associated burden and costs of asthma. We sought to provide the first UK-wide estimates of the epidemiology, healthcare utilisation and costs of asthma.
Methods: We obtained and analysed asthma-relevant data from 27 datasets: these comprised national health surveys for 2010-11, and routine administrative, health and social care datasets for 2011-12; 2011-12 costs were estimated in pounds sterling using economic modelling.
Results: The prevalence of asthma depended on the definition and data source used. The UK lifetime prevalence of patient-reported symptoms suggestive of asthma was 29.5 % (95 % CI, 27.7-31.3; n = 18.5 million (m) people) and 15.6 % (14.3-16.9, n = 9.8 m) for patient-reported clinician-diagnosed asthma. The annual prevalence of patient-reported clinician-diagnosed-and-treated asthma was 9.6 % (8.9-10.3, n = 6.0 m) and of clinician-reported, diagnosed-and-treated asthma 5.7 % (5.7-5.7; n = 3.6 m). Asthma resulted in at least 6.3 m primary care consultations, 93,000 hospital in-patient episodes, 1800 intensive-care unit episodes and 36,800 disability living allowance claims. The costs of asthma were estimated at no less than £1.1 billion: 74 % of these costs were for provision of primary care services (60 % prescribing, 14 % consultations), 13 % for disability claims, and 12 % for hospital care. There were 1160 asthma deaths.
Conclusions: Asthma is very common and is responsible for considerable morbidity, healthcare utilisation and financial costs to the UK public sector. Greater policy focus on primary care provision is needed to reduce the risk of asthma exacerbations, hospitalisations and deaths, and reduce costs.