430 results for SQL verktyg
Abstract:
Second-sitting exam for the 2011-12 academic year. It consists of 3 parts: a multiple-choice test, theory problems, and SQL.
Abstract:
The study examines and compares how four teachers work deliberately with reading aloud in the classroom, and whether their practice changes between grade 4 and grade 6. The investigation is designed as a multiple case study, in which each individual case is first analysed separately and then cross-analysed. To answer the study's aim and research questions, an empirical investigation was carried out in which the four cases were studied using questionnaires, observation, interviews and participant validation. The cases were then analysed in light of the sociocultural theory of learning, the school's governing documents and previous research in the field. The results give a positive picture of teachers' work with reading aloud in the classroom. The didactic choices behind the participating teachers' read-alouds can be linked to the aims and core content of the curriculum. The teachers appear to work in varied ways, and with the read-aloud as a starting point the pupils are trained in the curriculum's overarching abilities. All of the teachers have noticed a general decline in children's and pupils' interest in reading, but on the whole they feel they see little of this in their own classrooms, which they attribute to actively reading aloud to their pupils. They also believe that pupils' language development and vocabulary benefit from read-alouds, since the pupils get to listen to the written word. The teachers in the study agree that it is important to set aside time for reading aloud in teaching, and they use a dialogic style of reading in their classrooms, where the read-aloud becomes a pedagogical tool. Only one of the teachers explicitly states that they work with specific methods and strategies drawn from subject-didactic research, but during the visits it was observed that the other teachers also intuitively work implicitly with reading strategies in their teaching. Common to the teachers is that they often choose to work across subjects, so that the read-aloud becomes a natural part of thematic work. They also argue that the need for their read-alouds in teaching increases rather than decreases as the pupils grow older, since the pupils encounter increasingly complex texts.
Abstract:
The study examines whether a group of drug sellers on the Swedish site Flugsvamp are driven by motivations other than a basic profit motive. The site is built on the technological tools developed through the libertarian-inspired Cypherpunk movement's work on strong cryptography in the late 1990s. These tools protect both criminals and freedom fighters, but have mainly been associated with those who use anonymity for criminal purposes, and the tools have therefore acquired a bad reputation that risks jeopardising the future of cryptography. Through semi-structured online interviews with five sellers of varying character, I collected information about their experience of the site, their relationships with customers and administrators, how they morally rationalised their activities, and what impact they saw themselves having on society and on cryptography. A picture emerged in which the majority of the sellers displayed a sense of responsibility and thoughtfulness by acting with transparency and professionalism. By outcompeting dishonest actors and through close collegial cooperation with one another, these sellers used their role to challenge the traditional majority attitude towards drugs and to show how responsible digital selling can, in the long run, bring about social change. A minority stated that they only used the site to earn extra money and actually thought it was reasonable for the state to ban the crypto-anarchist tools, since these are mainly considered to be used for criminal purposes. The study reaches the conclusions that society as a whole benefits when drug-related violence disappears from the streets, shows how inconsistent and conservative legislation leaves itself open to strong counter-arguments, and underlines that a future digital humanity will have significant use for the crypto-anarchist tools.
Abstract:
The study examines how female characters are represented in relation to the horror themes of the TV series Penny Dreadful (2014-). The aim has been to study whether the typical hallmarks of horror can be linked to womanhood, femininity and feminism (the latter because a gender-critical conversation can be perceived in the series). Using psychoanalytic theories of abjection, the analysis shows how what is frightening about women is frightening in different ways from what is frightening about men. What is abject about the woman is often defined by her sexuality and biological characteristics, thereby creating a feminine monstrosity that is entirely unlike the masculine one. This has to a large extent grown out of historical myths, religions and art, which have contributed gender-specific monsters based on stereotypical femininity, such as witches, sirens or Medusa. By exploring the series' characters with semiotic and psychoanalytic tools, possible interpretations are revealed that show how these feminine monsters appear to be rooted in male fear and in the woman as a threat. The castration complex as a contributing factor, together with the male gaze, thus seems to quash expressions of female liberation in the series, by sexualising, tormenting and rendering the woman abject and monstrous in direct response to them. Despite its gender-critical discourse, the series therefore appears to be governed by a male gaze and a scopophilic way of seeing, which possibly contributes to womanhood and femininity being coded as abject and, at worst, stigmatises the feminist woman.
Abstract:
The Swedish Agency for Accessible Media (Myndigheten för tillgängliga medier) publishes the newspaper 8 SIDOR, with news in easy-to-read Swedish, aimed at everyone with reading difficulties. The aim of the thesis is to find out whether readers in the target group and the writers of the newspaper's texts share the same view of what makes a text easy to read and comprehensible, and what they think makes a text easier or harder. Ten pupils in grades 7-8, in compulsory school and in compulsory school for pupils with intellectual disabilities, discussed three texts from the newspaper's website in three focus groups. The conversations were analysed using the concept of text movability, a tool for analysing the reader's relationship to the text. Text movability is divided into text-based movability, movability outwards and interactive movability. The two journalists who wrote the texts were interviewed about the concept of easy-to-read, audience adaptation and the texts themselves. In many cases the pupils and the journalists reason alike, but the pupils would like clearer headlines and, to some extent, topics more tailored to themselves. What works best for the pupils' text-based movability is when the text has a logical structure, words of suitable difficulty and a well-chosen picture. Movability outwards works best when the pupils have an interest in and prior knowledge of the topic. Interactive movability does not work particularly well for any of the texts, which may be due to unfamiliarity with reading newspaper articles and/or with reflecting on texts. The conclusion is that clear audience adaptation is important and produces results, but is difficult to achieve when the newspaper covers a wide range of topics and many different target groups.
Abstract:
Online geographic information systems provide the means to extract a subset of desired spatial information from a larger remote repository. Data retrieved representing real-world geographic phenomena are then manipulated to suit the specific needs of an end-user. Often this extraction requires deriving representations of objects at a particular resolution or scale from a single stored original version. Currently, standard spatial data handling techniques cannot support the multi-resolution representation of such features in a database. In this paper a methodology is presented for storing and retrieving versions of spatial objects at different resolutions with respect to scale, using standard database primitives and SQL. The technique involves heavy fragmentation of spatial features, which allows dynamic simplification into scale-specific object representations customised to the display resolution of the end-user's device. Experimental results comparing the new approach to traditional R-tree indexing and external object simplification reveal that the new approach performs notably better for mobile and WWW applications, where client-side resources are limited and retrieved data loads are kept relatively small.
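To make the idea of fragmentation plus plain SQL retrieval concrete, here is a minimal sketch, not the authors' schema: pre-fragmented features carry a per-fragment scale band, so that a scale-specific representation can be assembled with an ordinary SQL range query. The table and column names (feature_frag, min_scale, geom_wkt) and the scale values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE feature_frag (
        feature_id INTEGER,      -- identifier of the original spatial feature
        frag_seq   INTEGER,      -- ordering of the fragment within the feature
        min_scale  REAL,         -- coarsest scale (denominator) at which this fragment is kept
        geom_wkt   TEXT          -- fragment geometry, e.g. a short WKT segment
    )
""")
conn.executemany(
    "INSERT INTO feature_frag VALUES (?, ?, ?, ?)",
    [(1, 0, 1_000_000, "LINESTRING(0 0, 10 10)"),   # kept even when zoomed far out
     (1, 1,    50_000, "LINESTRING(10 10, 12 11)"), # detail for large scales only
     (1, 2,    50_000, "LINESTRING(12 11, 15 13)")],
)

def fragments_at_scale(feature_id, display_scale):
    """Return the fragments needed to draw a feature at the given map scale."""
    cur = conn.execute(
        "SELECT frag_seq, geom_wkt FROM feature_frag "
        "WHERE feature_id = ? AND min_scale >= ? ORDER BY frag_seq",
        (feature_id, display_scale),
    )
    return cur.fetchall()

print(fragments_at_scale(1, 100_000))  # only the coarse fragment
print(fragments_at_scale(1, 25_000))   # all fragments, full detail
```

The point of the sketch is only that the simplification step reduces to a standard indexed range predicate rather than a procedural spatial operation.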
Abstract:
The Environmental Sciences Division within the Queensland Environmental Protection Agency works to monitor, assess and model the condition of the environment. The Division has a legislative responsibility to produce a whole-of-government report every four years dealing with environmental conditions and trends, the "State of the Environment" (SoE) report [1][2][3]. The State of Environment Web Service Reporting System is a supplementary web-service-based SoE reporting tool, which aims to deliver accurate, timely and accessible information on the condition of the environment through web services over the Internet [4][5]. This prototype provides a scientific assessment of environmental conditions for a set of environmental indicators. It contains text descriptions as well as tables, charts and maps with spatiotemporal dimensions to show the impact of certain environmental indicators on the environment. The prototype is a template-based indicator system, to which the administrator may add new SQL queries for new indicator services without changing the architecture or code of the template. The benefits are brought through a service-oriented architecture which provides an online query service with seamless integration. In addition, since it uses a web service architecture, each individual component within the application can be implemented in different programming languages and on different operating systems. Although the services shown in this demo are built on two datasets, the regional ecosystems and protected areas of Queensland, it will also be possible to report on the condition of water, air, land, coastal zones, energy resources, biodiversity, human settlements and natural and cultural heritage on the fly. Figure 1 shows the architecture of the prototype. In the next section, I will discuss the research tasks in the prototype.
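The template-based indicator idea can be illustrated with a minimal sketch, not the deployed EPA system: indicator services are plain data (a registry of named SQL queries), so an administrator adds a new indicator by registering one more query rather than changing the reporting code. The indicator names, table names and columns below are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE regional_ecosystem (region TEXT, year INTEGER, area_km2 REAL)")
conn.execute("CREATE TABLE protected_area (region TEXT, year INTEGER, area_km2 REAL)")

# The registry is plain data: adding an indicator means adding one entry here.
INDICATOR_QUERIES = {
    "ecosystem_extent":
        "SELECT region, year, SUM(area_km2) FROM regional_ecosystem "
        "WHERE year = :year GROUP BY region, year",
    "protected_extent":
        "SELECT region, year, SUM(area_km2) FROM protected_area "
        "WHERE year = :year GROUP BY region, year",
}

def report_indicator(name, **params):
    """Run the registered query for one indicator and return its rows."""
    return conn.execute(INDICATOR_QUERIES[name], params).fetchall()

conn.execute("INSERT INTO protected_area VALUES ('SEQ', 2004, 1234.5)")
print(report_indicator("protected_extent", year=2004))
```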
Abstract:
Many emerging applications benefit from the extraction of geospatial data specified at different resolutions for viewing purposes. Data must also be topologically accurate and up to date, as it often represents changing real-world phenomena. Current multiresolution schemes use complex opaque data types, which limit the capacity for in-database object manipulation. By using z-values and B+-trees to support multiresolution retrieval, objects are fragmented in such a way that updates to objects or object parts are executed using standard SQL (Structured Query Language) statements rather than procedural functions. Our approach is compared to a current model that uses complex data types indexed under a 3D (three-dimensional) R-tree, and shows better performance for retrieval over realistic window sizes and data loads. Updates with the R-tree are slower, making it infeasible for time-critical applications, whereas, predictably, projecting the problem onto a one-dimensional index allows constant updates using z-values to be implemented more efficiently.
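The following is a minimal sketch under assumed names, not the paper's implementation: a z-order (Morton) value is computed by interleaving the bits of grid coordinates, so that fragments can be keyed on a one-dimensional value that an ordinary B+-tree primary index and plain SQL statements can handle.

```python
import sqlite3

def z_value(x, y, bits=16):
    """Interleave the bits of (x, y) into a single Morton code."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
    return z

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE frag (z INTEGER PRIMARY KEY, feature_id INTEGER, geom TEXT)")
conn.execute("INSERT INTO frag VALUES (?, ?, ?)", (z_value(3, 5), 42, "..."))

# An update to a fragment is an ordinary SQL statement keyed on the 1-D value,
# rather than a call into a procedural spatial function.
conn.execute("UPDATE frag SET geom = ? WHERE z = ?", ("updated", z_value(3, 5)))
print(conn.execute("SELECT * FROM frag").fetchall())
```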
Abstract:
Existing semantic search tools have been designed primarily to enhance the performance of traditional search technologies, but with little support for ordinary end users who are not necessarily familiar with domain-specific semantic data, ontologies, or SQL-like query languages. This paper presents SemSearch, a search engine that pays special attention to this issue by providing several means to hide the complexity of semantic search from end users, thus making it easy to use and effective.
Abstract:
A protein's isoelectric point, or pI, corresponds to the solution pH at which its net surface charge is zero. Since the early days of solution biochemistry, the pI has been recorded and reported, and literature reports of pI therefore abound. The Protein Isoelectric Point database (PIP-DB) has collected and collated these data to provide an increasingly comprehensive database for comparison and benchmarking purposes. A web application has been developed to warehouse this database and provide public access to this unique resource. PIP-DB is a web-enabled SQL database with an HTML GUI front-end, fully searchable across a range of properties.
Abstract:
The biggest threat to any business is a lack of timely and accurate information. Without all the facts, businesses are pressured to make critical decisions and assess risks and opportunities based largely on guesswork, sometimes resulting in financial losses and missed opportunities. The meteoric rise of databases (DB) appears to confirm the adage that "information is power", but the stark reality is that information is useless if one has no way to find what one needs to know. It is perhaps more accurate to state that "the ability to find information is power". In this paper we show how the Instantaneous Database Access System (IDAS) can make a crucial difference by pulling data together and allowing users to summarise information quickly from all areas of a business organisation.
Abstract:
Because some Web users are able to design a template to visualize information from scratch, while other users need information visualized automatically by changing a few parameters, providing different levels of customization of the information is a desirable goal. Our system allows the automatic generation of visualizations given the semantics of the data, as well as static or pre-specified visualization through an interface language. We address information visualization with the Web in mind, where the presentation of the retrieved information is a challenge. We provide a model that narrows the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model that is integrated with the HTML language to create a powerful language that facilitates the construction of Web-based database reports. In contrast to other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to easily connect the database to the Web. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front-ends typically attempt to display the database objects in a flat view, making it difficult for users to grasp the contents and structure of their result. Our model narrows the gap between databases and the Web. The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application. This increases the speed of development. In addition, using only a Web browser, the end user can retrieve data from databases remotely and make the necessary modifications and manipulations of the data through Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but the Web browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build a syntactically and semantically valid SQL query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
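A minimal sketch of the general idea of schema-validated query generation, not the dissertation's system: a SELECT is assembled from form selections, and every identifier is checked against the database schema before execution, so an inexperienced user cannot produce an invalid or unsafe statement. The example table 'employees' and its columns are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")

def build_query(table, columns, filters):
    """Build a SELECT from form input, rejecting identifiers not in the schema."""
    if conn.execute("SELECT 1 FROM sqlite_master WHERE type='table' AND name=?",
                    (table,)).fetchone() is None:
        raise ValueError(f"unknown table: {table}")
    known = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    bad = [c for c in list(columns) + list(filters) if c not in known]
    if bad:
        raise ValueError(f"unknown columns: {bad}")
    where = " AND ".join(f"{c} = ?" for c in filters) or "1 = 1"
    return (f"SELECT {', '.join(columns)} FROM {table} WHERE {where}",
            list(filters.values()))

conn.execute("INSERT INTO employees VALUES ('Ada', 'R&D', 90000)")
sql, params = build_query("employees", ["name", "salary"], {"dept": "R&D"})
print(sql)                                   # SELECT name, salary FROM employees WHERE dept = ?
print(conn.execute(sql, params).fetchall())  # [('Ada', 90000.0)]
```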
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access to heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty in resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for resolving semantic heterogeneity by investigating the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process; this knowledge base acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of this work.
Abstract:
Query processing is a commonly performed procedure and a vital and integral part of information processing. It is therefore important and necessary for information processing applications to continuously improve the accessibility of data sources as well as the ability to perform queries on those data sources. It is well known that the relational database model and the Structured Query Language (SQL) are currently the most popular tools for implementing and querying databases. However, a certain level of expertise is needed to use SQL and to access relational databases. This study presents a semantic modeling approach that enables the average user to access and query existing relational databases without concern for the database's structure or technicalities. The method includes an algorithm to represent relational database schemas in a more semantically rich way; the result is a semantic view of the relational database. The user performs queries using an adapted version of SQL, namely Semantic SQL. This method substantially reduces the size and complexity of queries. Additionally, it shortens the database application development cycle and improves maintenance and reliability by reducing the size of application programs. Furthermore, a Semantic Wrapper tool illustrating the semantic wrapping method is presented. I further extend the use of this semantic wrapping method to heterogeneous database management. Relational and object-oriented databases and Internet data sources are considered part of the heterogeneous database environment. Semantic schemas resulting from the algorithm presented in the method were employed to describe the structure of these data sources in a uniform way. Semantic SQL was utilized to query the various data sources. As a result, this method provides users with the ability to access and perform queries on heterogeneous database systems in a more natural way.
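As a rough illustration of the general idea (not the thesis algorithm and not Semantic SQL itself): a simple "semantic view" can be derived from a relational schema by reading its tables and foreign keys, so that relationships are presented by name instead of as join conditions. The example tables (department, employee) and the relationship-naming convention are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employee (
        id INTEGER PRIMARY KEY,
        name TEXT,
        dept_id INTEGER REFERENCES department(id)
    );
""")

def semantic_view():
    """Map each table to its attributes and its named relationships."""
    view = {}
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        attrs = [r[1] for r in conn.execute(f"PRAGMA table_info({t})")]
        rels = [
            {"relation": f"{t}_has_{r[2]}", "to": r[2], "via": r[3]}
            for r in conn.execute(f"PRAGMA foreign_key_list({t})")
        ]
        view[t] = {"attributes": attrs, "relationships": rels}
    return view

print(semantic_view()["employee"])
# {'attributes': ['id', 'name', 'dept_id'],
#  'relationships': [{'relation': 'employee_has_department', 'to': 'department', 'via': 'dept_id'}]}
```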
Abstract:
This dissertation established a software-hardware integrated design for a multisite data repository in pediatric epilepsy. A total of 16 institutions formed a consortium for this web-based application. This innovative, fully operational web application allows users to upload and retrieve information through a unique human-computer graphical interface that is remotely accessible to all users of the consortium. A solution based on a Linux platform with MySQL and Personal Home Page scripts (PHP) was selected. Research was conducted to evaluate mechanisms to electronically transfer diverse datasets from different hospitals and to collect the clinical data in concert with the related functional magnetic resonance imaging (fMRI). What was unique in the approach considered is that all pertinent clinical information about patients is synthesized, with input from clinical experts, into 4 different forms: Clinical, fMRI scoring, Image information, and Neuropsychological data entry forms. A first contribution of this dissertation was in proposing an integrated processing platform that was site and scanner independent, in order to uniformly process the varied fMRI datasets and to generate comparative brain activation patterns. The data collection from the consortium complied with IRB requirements and provides all the safeguards for security and confidentiality. An fMRI-based software library was used to perform data processing and statistical analysis to obtain the brain activation maps. Lateralization Indices (LI) of healthy control (HC) subjects were evaluated in contrast to those of localization-related epilepsy (LRE) subjects. Over 110 activation maps were generated, and their respective LIs were computed, yielding the following groups: (a) strong right lateralization: (HC=0%, LRE=18%), (b) right lateralization: (HC=2%, LRE=10%), (c) bilateral: (HC=20%, LRE=15%), (d) left lateralization: (HC=42%, LRE=26%), (e) strong left lateralization: (HC=36%, LRE=31%). Moreover, nonlinear multidimensional decision functions were used to seek an optimal separation between typical and atypical brain activations on the basis of demographics as well as the extent and intensity of these brain activations. The intent was not to seek the highest output measures, given the inherent overlap of the data, but rather to assess which of the many dimensions were critical in the overall assessment of typical and atypical language activations, with the freedom to select any number of dimensions and impose any degree of complexity in the nonlinearity of the decision space.
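An illustrative sketch only: the standard lateralization-index formula LI = (L - R) / (L + R), applied here to hypothetical left- and right-hemisphere activation counts and binned into the five categories named in the abstract. The numeric cut-offs are common illustrative values, not those used in the dissertation.

```python
def lateralization_index(left, right):
    """LI in [-1, 1]; positive values indicate left lateralization."""
    return (left - right) / (left + right)

def classify(li, strong=0.5, weak=0.2):
    """Bin an LI into the five groups used when comparing HC and LRE subjects."""
    if li <= -strong:
        return "strong right lateralization"
    if li <= -weak:
        return "right lateralization"
    if li < weak:
        return "bilateral"
    if li < strong:
        return "left lateralization"
    return "strong left lateralization"

li = lateralization_index(left=420, right=180)   # hypothetical activation counts
print(round(li, 2), classify(li))                # 0.4 left lateralization
```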