925 results for Online Systems
Abstract:
Otto-von-Guericke-Universität Magdeburg, Faculty of Mathematics, dissertation, 2016
Abstract:
Otto-von-Guericke-Universität Magdeburg, Faculty of Mathematics, dissertation, 2016
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
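As a rough illustration of the neighborhood-centric programming style described above, the following sketch extracts one-hop ego networks with NetworkX and runs a per-subgraph computation on each. It is a single-machine toy written under that assumption, not NSCALE's actual API; the function name ego_clustering is hypothetical.

```python
# Hypothetical sketch of a neighborhood-centric analysis task, written at the
# level of one-hop ego networks rather than single vertices (a single-machine
# toy in the spirit of the model described above, not NSCALE's API).
import networkx as nx

def ego_clustering(graph: nx.Graph, node) -> float:
    """User program over the 1-hop neighborhood (ego network) of `node`."""
    ego = nx.ego_graph(graph, node, radius=1)   # extract the subgraph of interest
    return nx.transitivity(ego)                 # any per-subgraph computation

if __name__ == "__main__":
    g = nx.karate_club_graph()
    # "Declaratively" pick the subgraphs of interest: every node's ego network.
    results = {n: ego_clustering(g, n) for n in g.nodes}
    print(sorted(results.items(), key=lambda kv: -kv[1])[:5])
```

A real deployment would partition and distribute these subgraphs across machines; the sketch only shows the programming abstraction.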
Abstract:
The Southern Ischia canyon system has been investigated in detail through Multibeam bathymetry and Sparker seismic data and has been placed in the geological framework of the deep-sea depositional systems off the Campania region. The geological and geomorphological characteristics of the canyon system have also been compared with those of the Mediterranean submarine canyons and with the deep-sea depositional systems of the Tyrrhenian Sea. The Southern Ischia canyon system incises a narrow continental shelf from Punta Imperatore to Punta San Pancrazio, being bounded to the southwest by the relict volcanic edifice of the Ischia Bank. It consists of twenty-two drainage axes, whose planimetric trends have been reconstructed in a sketch morphological map produced through geological interpretation of the Multibeam bathymetry. While the eastern boundary of the canyon system is controlled by extensional tectonics, being limited by a NE-SW trending (anti-Apenninic) normal fault, its western boundary is controlled by volcanism, due to the growth of the Ischia volcanic bank. Submarine gravitational instabilities have also acted in relation to the canyon system, allowing for the identification of large-scale creep at the sea bottom and of hummocky deposits previously interpreted as debris-avalanche deposits. Quaternary marine seismic sequences have been reconstructed through a densely spaced seismic grid recorded with a Sparker multitip seismic source, allowing for a detailed observation of the steep erosional slopes occurring on the southern flank of the island and of the related deep-sea depositional systems. Important implications of this study concern the coastal monitoring and beach nourishment of the southern flank of the island, which is affected by strong erosion of its marine and coastal systems.
Abstract:
This is an abstract of a talk presented at the European Biotechnology Conference held in Latvia on 05–07 May 2016.
Abstract:
In economics of information theory, credence products are those whose quality is difficult or impossible for consumers to assess, even after they have consumed the product (Darby & Karni, 1973). This dissertation is focused on the content, consumer perception, and power of online reviews for credence services. Economics of information theory has long assumed, without empirical confirmation, that consumers will discount the credibility of claims about credence quality attributes. The same theories predict that because credence services are by definition obscure to the consumer, reviews of credence services are incapable of signaling quality. Our research aims to question these assumptions. In the first essay we examine how the content and structure of online reviews of credence services systematically differ from the content and structure of reviews of experience services and how consumers judge these differences. We have found that online reviews of credence services have either less important or less credible content than reviews of experience services and that consumers do discount the credibility of credence claims. However, while consumers rationally discount the credibility of simple credence claims in a review, more complex argument structure and the inclusion of evidence attenuate this effect. In the second essay we ask, “Can online reviews predict the worst doctors?” We examine the power of online reviews to detect low quality, as measured by state medical board sanctions. We find that online reviews are somewhat predictive of a doctor’s suitability to practice medicine; however, not all the data are useful. Numerical or star ratings provide the strongest quality signal; user-submitted text provides some signal but is subsumed almost completely by ratings. Of the ratings variables in our dataset, we find that punctuality, rather than knowledge, is the strongest predictor of medical board sanctions. These results challenge the definition of credence products, which is a long-standing construct in economics of information theory. Our results also have implications for online review users, review platforms, and for the use of predictive modeling in the context of information systems research.
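As an illustration of the kind of predictive modeling mentioned above, the sketch below compares a ratings-only model with a text-only model for predicting sanctions. The column names and the toy data are invented for illustration; this is not the authors' dataset or pipeline.

```python
# Hypothetical sketch comparing numerical ratings vs. review text as predictors
# of a sanction outcome, in the spirit of the analysis described above; the
# column names and the data below are invented, not the study's dataset.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy stand-in for a real review dataset ("sanctioned" = state board sanction).
df = pd.DataFrame({
    "punctuality": [1, 2, 5, 4, 1, 5, 2, 4, 1, 5, 3, 4],
    "knowledge":   [3, 2, 5, 4, 2, 5, 3, 4, 2, 5, 3, 4],
    "stars":       [2, 1, 5, 4, 1, 5, 2, 4, 1, 5, 3, 4],
    "review_text": ["kept me waiting", "rude and late", "great visit", "helpful",
                    "always late", "wonderful doctor", "long wait", "very kind",
                    "never on time", "excellent care", "okay visit", "listened well"],
    "sanctioned":  [1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0],
})
y = df["sanctioned"]

# Ratings-only model (ratings were the strongest quality signal in the study).
ratings = df[["punctuality", "knowledge", "stars"]]
print("ratings AUC:", cross_val_score(LogisticRegression(max_iter=1000),
                                      ratings, y, cv=3, scoring="roc_auc").mean())

# Text-only model on the free-text review.
text = TfidfVectorizer().fit_transform(df["review_text"])
print("text AUC:", cross_val_score(LogisticRegression(max_iter=1000),
                                   text, y, cv=3, scoring="roc_auc").mean())
```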
Deploying learning management systems with a concept – supporting instructors and students in online learning
Abstract:
To facilitate the use of digital media in teaching, the Hochschule Ostwestfalen-Lippe has developed a concept in which instructors are supported by academic and student "eTutors" and students by student "eMentors" in using digital media in the teaching and learning process. A central component of the model is the use of the learning management system ILIAS. In the following contribution, starting from some fundamental considerations on how digital media are changing higher-education teaching (1), the concepts of eTutoring and eMentoring are first briefly introduced (2); it is then explained how Gilly Salmon's five-stage model for online courses (3) was adapted to the specific conditions at the Hochschule OWL and is used by the eTutors and eMentors to support instructors and students (4). The contribution closes with a summary of the experiences gained so far (5).
Abstract:
Participation Space Studies explore eParticipation in the day-to-day activities of local, citizen-led groups, working to improve their communities. The focus is the relationship between activities and contexts. The concept of a participation space is introduced in order to reify online and offline contexts where people participate in democracy. Participation spaces include websites, blogs, email, social media presences, paper media, and physical spaces. They are understood as sociotechnical systems: assemblages of heterogeneous elements, with relevant histories and trajectories of development and use. This approach enables the parallel study of diverse spaces, on and offline. Participation spaces are investigated within three case studies, centred on interviews and participant observation. Each case concerns a community or activist group, in Scotland. The participation spaces are then modelled using a Socio-Technical Interaction Network (STIN) framework (Kling, McKim and King, 2003). The participation space concept effectively supports the parallel investigation of the diverse social and technical contexts of grassroots democracy and the relationship between the case-study groups and the technologies they use to support their work. Participants’ democratic participation is supported by online technologies, especially email, and they create online communities and networks around their goals. The studies illustrate the mutual shaping relationship between technology and democracy. Participants’ choice of technologies can be understood in spatial terms: boundaries, inhabitants, access, ownership, and cost. Participation spaces and infrastructures are used together and shared with other groups. Non-public online spaces, such as Facebook groups, are vital contexts for eParticipation; further, the majority of participants’ work is non-public, on and offline. It is informational, potentially invisible, work that supports public outputs. The groups involve people and influence events through emotional and symbolic impact, as well as rational argument. Images are powerful vehicles for this and digital images become an increasingly evident and important feature of participation spaces throughout the consecutively conducted case studies. Collaboration of diverse people via social media indicates that these spaces could be understood as boundary objects (Star and Griesemer, 1989). The Participation Space Studies draw from and contribute to eParticipation, social informatics, mediation, social shaping studies, and ethnographic studies of Internet use.
Abstract:
International research on both the intended and the unintended outcomes and effects of high-stakes testing shows that the impact of high-stakes tests has important consequences for the participants involved in the respective educational systems. The purpose of this special issue is to examine the implementation of high-stakes testing in different national school systems and to consider its effects in view of the concept of Educational Governance. (DIPF/Orig.)
Abstract:
Master's dissertation, Hotel Management and Administration, Escola Superior de Gestão, Hotelaria e Turismo, Universidade do Algarve, 2016
Abstract:
CuO supported on CeO2 and Ce0.9X0.1O2, where X is Zr, La, Tb or Pr, were synthesized using nitrate precursors, giving rise to ceria-based materials with a small particle size that interact with CuO species, generating a high amount of interfacial sites. The incorporation of cations into the ceria framework modifies the CeO2 lattice parameter, improving the redox behavior of the catalytic system. The catalysts were characterized by X-ray fluorescence spectrometry (XRFS), X-ray diffraction (XRD), high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, temperature-programmed reduction with H2 (H2-TPR) and X-ray photoelectron spectroscopy (XPS). The catalysts were tested in the preferential oxidation of CO in a H2-rich stream (CO-PROX), reaching conversion values higher than 95% between 115 and 140 °C, with the catalyst containing 6 wt.% of Cu supported on Ce0.9Zr0.1O2 (sample 6CUZRCE) being the most active. The influence of the presence of CO2 and H2O was also studied by simulating a PROX unit; a decrease in catalytic activity took place due to the inhibiting effect of both CO2 and H2O.
Abstract:
The evolution of CRISPR–cas loci, which encode adaptive immune systems in archaea and bacteria, involves rapid changes, in particular numerous rearrangements of the locus architecture and horizontal transfer of complete loci or individual modules. These dynamics complicate straightforward phylogenetic classification, but here we present an approach combining the analysis of signature protein families and features of the architecture of cas loci that unambiguously partitions most CRISPR–cas loci into distinct classes, types and subtypes. The new classification retains the overall structure of the previous version but is expanded to now encompass two classes, five types and 16 subtypes. The relative stability of the classification suggests that the most prevalent variants of CRISPR–Cas systems are already known. However, the existence of rare, currently unclassifiable variants implies that additional types and subtypes remain to be characterized.
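As a deliberately simplified illustration of classification by signature protein families, the sketch below maps the signature genes commonly associated with the five types in this classification to classes and types. The real procedure described above also weighs cas locus architecture and resolves 16 subtypes; the helper classify_locus and the rule itself are illustrative assumptions, not the authors' method.

```python
# Deliberately simplified sketch of typing a CRISPR-cas locus by its signature
# gene, assuming the commonly cited signature genes of the five types
# (cas3, cas9, cas10, csf1, cpf1); the real procedure also uses locus
# architecture and distinguishes 16 subtypes, which this toy rule ignores.
SIGNATURE_GENES = {
    "cas3":  ("class 1", "type I"),
    "cas10": ("class 1", "type III"),
    "csf1":  ("class 1", "type IV"),
    "cas9":  ("class 2", "type II"),
    "cpf1":  ("class 2", "type V"),
}

def classify_locus(genes):
    """Return (class, type) for the first signature gene found, else None."""
    for gene in genes:
        hit = SIGNATURE_GENES.get(gene.lower())
        if hit:
            return hit
    return None  # rare or novel variants stay unclassified by this crude rule

print(classify_locus(["cas1", "cas2", "cas9"]))   # ('class 2', 'type II')
```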
Abstract:
Ecological models written in a mathematical language L(M), or model language, with a given style or methodology, can be considered as a text. It is possible to apply statistical linguistic laws, and the experimental results demonstrate that the behaviour of a mathematical model is the same as that of any literary text in any natural language. A text has the following characteristics: (a) the variables, their transformed functions and parameters are the lexical units (LUN) of ecological models; (b) the syllables are constituted by a LUN, or a chain of them, separated by operating or ordering LUNs; (c) the flow equations are words; and (d) the distribution of words (LUN and CLUN) according to their lengths follows a Poisson distribution, Chebanov's law. It is founded on Vakar's formula, which is calculated in the same way as the linguistic entropy for L(M). We apply these ideas to practical examples using the MARIOLA model. In this paper the problem of the lengths of the simple lexical units, composed lexical units and words of text models is studied, expressing these lengths in numbers of primitive symbols and syllables. The use of these linguistic laws makes it possible to indicate the degree of information given by an ecological model.
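As a hedged sketch of the distribution invoked above, Chebanov's law is usually stated as a shifted Poisson distribution of word lengths measured in syllables, with mean length denoted here by x-bar; the paper's exact formulation of Vakar's formula may differ.

```latex
% Hedged sketch of Chebanov's law: lengths i (in syllables) of lexical units
% are assumed to follow a shifted Poisson distribution with mean length \bar{x};
% the paper's exact formulation may differ.
p(i) = \frac{e^{-(\bar{x}-1)}\,(\bar{x}-1)^{\,i-1}}{(i-1)!}, \qquad i = 1, 2, 3, \dots
```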
Abstract:
The goal of this article is to build an abstract mathematical theory, rather than a computational one, of the process of transmission of ideology. The basis of much of the argument is Patten's Environment Theory, which characterizes a system by its double environment (input or stimulus, and output or response) and the interactions existing among them. Ideological processes are semiotic processes, and whereas in Patten's theory the two environments are physical, in this theory ideological processes are both physical and semiotic, as are stimulus and response.