483 results for Streaming


Relevance:

10.00%

Publisher:

Abstract:

As faculty needs evolve and become increasingly digital, libraries are feeling the pressure to provide relevant new services. At the same time, faculty members are struggling to create and maintain their professional reputations online. We at bepress are happy to announce the new SelectedWorks, the fully hosted, library-curated faculty profile platform that positions the library to better support faculty as well as the institution at large. Beverly Lysobey, Digital Commons and Resource Management Librarian at Sacred Heart University, says: “Both faculty and administration have been impressed with the services we provide through SelectedWorks; we’re able to show how much our faculty really publishes, and it’s great for professors to get that recognition. We’ve had several faculty members approach us for help making sure their record was complete when they were up for tenure, and we’ve even found articles that authors themselves no longer had access to.” With consistent, organized, institution-branded profiles, SelectedWorks increases campus-wide exposure and supports the research mission of the university. As the only profile platform integrated with the fully hosted Digital Commons suite of publishing and repository services, it also ensures that the institution retains management of its content. Powerful integration with the Digital Commons platform lets the home institution more fully capture the range of scholarship produced on campus, and hosted services facilitate resource consolidation and reduce strain on IT. The new SelectedWorks features a modern, streamlined design that provides compelling display options for the full range of faculty work. It beautifully showcases streaming media, images, data, teaching materials, books – any type of content that researchers now produce as part of their scholarship. Detailed analytics tools let authors and librarians measure global readership and track impact for a variety of campus stakeholders: authors can see the universities, agencies, and businesses that are reading their work, and can easily export reports to use in tenure and promotion dossiers. Janelle Wertzberger, Assistant Dean and Director of Scholarly Communications at Gettysburg College’s Musselman Library, says, “The new author dashboard maps and enhanced readership are SO GOOD. Every professor up for promotion & tenure should use them!” And of course, SelectedWorks is fully backed by the continual efforts of the bepress development team to provide maximum discoverability to search engines, increasing impact for faculty and institutions alike. Reverend Edward R. Udovic, Vice President for Teaching and Learning Resources at DePaul University, says, “In the last several months downloads of my scholarship from my [SelectedWorks] site have far surpassed the total distribution of all my work in the previous twenty-five years.”

Relevance:

10.00%

Publisher:

Abstract:

In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication, in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs can be specified using both graph structure and activity conditions on the nodes. Queries in my system are specified using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
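
As a minimal sketch of the read/write-frequency-driven replication decision described in the first part (in Python, with invented class names, thresholds, and eager/lazy rule; not the dissertation's actual policy):

```python
# Hypothetical sketch of a hybrid replication policy for a distributed
# graph store: per-node read/write counters decide whether to replicate
# a node's data at all, and whether to push updates eagerly or let
# readers pull them lazily. Thresholds are illustrative assumptions.
from collections import defaultdict

class HybridReplicationPolicy:
    def __init__(self, replicate_ratio=2.0, eager_ratio=10.0):
        self.reads = defaultdict(int)    # remote reads of each node
        self.writes = defaultdict(int)   # local updates to each node
        self.replicate_ratio = replicate_ratio  # read/write ratio to replicate at all
        self.eager_ratio = eager_ratio          # ratio above which eager push pays off

    def record_read(self, node):
        self.reads[node] += 1

    def record_write(self, node):
        self.writes[node] += 1

    def decision(self, node):
        """Return 'none', 'lazy', or 'eager' for this node's replicas."""
        ratio = self.reads[node] / max(self.writes[node], 1)
        if ratio < self.replicate_ratio:
            return "none"    # written more than read: replication wastes network
        if ratio < self.eager_ratio:
            return "lazy"    # replicate, but let readers pull copies on demand
        return "eager"       # read-heavy: push every update to replicas at once

policy = HybridReplicationPolicy()
for _ in range(50):
    policy.record_read("user42")
policy.record_write("user42")
print(policy.decision("user42"))  # -> 'eager'
```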

Relevance:

10.00%

Publisher:

Abstract:

In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, knowledge graph construction faces two significant obstacles: the unreliability of the extracted information, due to noise and ambiguity in the underlying data or errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows inference over large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied on a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
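
A toy example of the convex inference that makes this tractable: hinge-loss potentials over [0,1]-valued truth assignments, here for a single invented rule and prior rather than the actual KGI model.

```python
# Toy hinge-loss potential: continuous truth values in [0,1], weighted
# penalties w * max(0, linear(y)). MAP inference is a convex minimization;
# this 1-D case is just scanned on a grid. Rules and weights are invented.
import numpy as np

# Observed (soft) evidence from the extractor:
rel_ab, same_bc = 0.9, 0.8            # Rel(A,B), SameEntity(B,C)
w_rule, w_prior = 2.0, 0.5            # illustrative weights

def objective(rel_ac):
    # rule: Rel(A,B) & SameEntity(B,C) -> Rel(A,C), Lukasiewicz hinge
    rule = max(0.0, rel_ab + same_bc - 1.0 - rel_ac)
    prior = rel_ac                    # weak prior: unseen relations are false
    return w_rule * rule + w_prior * prior

grid = np.linspace(0.0, 1.0, 1001)    # convex piecewise-linear: grid scan
best = grid[np.argmin([objective(v) for v in grid])]
print(best)  # ~0.7 = 0.9 + 0.8 - 1.0: the rule's hinge is driven to zero
```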

Relevance:

10.00%

Publisher:

Abstract:

Field lab: Entrepreneurial and innovative ventures

Relevance:

10.00%

Publisher:

Abstract:

This thesis aims to understand how cells coordinate their motion during collective migration. As previously shown, the motion of individually migrating cells is governed by wave-like cell shape dynamics. The mechanisms that regulate these dynamic behaviors in response to the extracellular environment remain largely unclear. I applied shape dynamics analysis to Dictyostelium cells migrating in pairs and in multicellular streams and found that wave-like membrane protrusions are highly coupled between touching cells. I further characterized cell motion by using principal component analysis (PCA) to decompose complex cell shape changes into a series of shape-change modes, from which I found that streaming cells exhibit a localized anterior protrusion, termed front narrowing, to facilitate cell-cell coupling. I next explored cytoskeleton-based mechanisms of cell-cell coupling by measuring the dynamics of actin polymerization. Actin polymerization waves observed in individual cells were significantly suppressed in multicellular streams. Streaming cells exclusively produced F-actin at cell-cell contact regions, especially at cell fronts. I demonstrated that such restricted actin polymerization is associated with cell-cell coupling, as reducing actin polymerization with Latrunculin A leads to the assembly of F-actin at the side of streams, the decrease of front narrowing, and the decoupling of protrusion waves. My studies also suggest that collective migration is guided by cell-surface interactions. I examined the aggregation of Dictyostelium cells under distinct conditions and found that both the chemical composition of surfaces and surface-adhesion defects in cells result in altered collective migration patterns. I also investigated the shape dynamics of cells suspended on PEG-coated surfaces, which showed that the coupling of protrusion waves disappears between touching suspended cells. These observations indicate that collective migration requires a balance between cell-cell and cell-surface adhesions. I hypothesized that such a balance is reached via regulation of the cytoskeleton. Indeed, I found that cells actively regulate the cytoskeleton to retain optimal cell-surface adhesions on varying surfaces, and that cells lacking the link between actin and surfaces (talin A) could not retain optimal adhesions. On the other hand, suspended cells exhibited enhanced actin filament assembly on the periphery of cell groups instead of in cell-cell contact regions, which facilitates their aggregation in a clumping fashion.
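
A schematic of the PCA shape-mode step, assuming each cell outline has been segmented and resampled to a fixed number of boundary points per frame; the data here is synthetic and the dimensions are illustrative.

```python
# Schematic PCA decomposition of a cell-shape time series: each frame's
# outline is resampled to K boundary points (x, y), flattened to a 2K
# vector, and PCA extracts the dominant shape-change modes. The rows
# here are random stand-ins for aligned outlines from microscopy frames.
import numpy as np

rng = np.random.default_rng(0)
T, K = 200, 100                        # frames, boundary points per outline
shapes = rng.normal(size=(T, 2 * K))   # stand-in for aligned outline vectors

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# SVD of the centered data gives the principal shape modes (rows of Vt)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)

modes = Vt[:3]                  # first three shape-change modes
coeffs = centered @ modes.T     # per-frame amplitude of each mode over time
print(explained[:3])            # fraction of shape variance per mode
```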

Relevance:

10.00%

Publisher:

Abstract:

This ruling provides that charges paid by a customer for streaming television programs, movies, music, and other similar content are charges for communication services and are therefore subject to South Carolina sales and use tax, whether paid for as part of a subscription service, per item, or per event.

Relevance:

10.00%

Publisher:

Abstract:

Master's dissertation: Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2016.

Relevance:

10.00%

Publisher:

Abstract:

Technologies for Big Data and Data Science are receiving increasing research interest nowadays. This paper introduces the prototype architecture of a tool aimed at solving Big Data optimization problems. Our tool combines the jMetal framework for multi-objective optimization with Apache Spark, a technology that is gaining momentum. In particular, we make use of the streaming facilities of Spark to feed an optimization problem with data from different sources. We demonstrate the use of our tool by solving a dynamic bi-objective instance of the Traveling Salesman Problem (TSP) based on near real-time traffic data from New York City, which is updated several times per minute. Our experiment shows that jMetal and Spark can be integrated to provide a software platform for dealing with dynamic multi-objective optimization problems.
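
A language-agnostic sketch of the streaming pattern the paper describes: traffic updates are folded into the cost matrix while the optimizer keeps improving the current tour. The real tool pairs jMetal with Spark Streaming; this stand-in uses a plain queue and a 2-opt sweep for brevity.

```python
# Dynamic TSP fed by a stream of edge-cost updates (synthetic stand-in
# for the New York City traffic feed). Each update mutates the cost
# matrix; a 2-opt sweep then repairs the current tour incrementally.
import itertools
import queue
import random

N = 20
cost = [[abs(i - j) for j in range(N)] for i in range(N)]  # synthetic distances
updates = queue.Queue()
for _ in range(100):                          # fake traffic feed: (i, j, new_cost)
    updates.put((random.randrange(N), random.randrange(N), random.uniform(1, 50)))

def tour_cost(tour):
    return sum(cost[tour[k]][tour[(k + 1) % N]] for k in range(N))

tour = list(range(N))
while not updates.empty():
    i, j, c = updates.get()                   # fold one traffic update in
    cost[i][j] = cost[j][i] = c
    for a, b in itertools.combinations(range(1, N), 2):  # one 2-opt sweep
        cand = tour[:a] + tour[a:b][::-1] + tour[b:]
        if tour_cost(cand) < tour_cost(tour):
            tour = cand
print(tour_cost(tour))
```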

Relevance:

10.00%

Publisher:

Abstract:

This thesis develops machine learning techniques to discover activities and contexts from pervasive sensor data. These techniques are especially suitable for streaming sensor data, as they can infer the context space automatically. They are applicable in many real-world applications, such as activity monitoring or organization management.

Relevance:

10.00%

Publisher:

Abstract:

Despite significant advancements in wireless sensor networks (WSNs), energy conservation remains one of the most important research challenges. Recently, the problem has been addressed by employing a mobile sink, an effective technique for improving the energy efficiency of such networks. In this paper, the energy conservation problem is first formulated to maximize the lifetime of the WSN subject to delay and node energy constraints. Then, to solve the defined problem, a data collection scheduling scheme with a mobile sink is proposed. In the proposed approach, the sink movement is governed by a type-2 fuzzy controller so that the sink is positioned at the best location and time to collect sensory data. We conducted extensive experiments to study the effectiveness of the proposed protocol and compared it against the streaming data delivery (SDD) and virtual circle combined straight routing (VCCS) protocols. We observed that the proposed protocol outperforms both SDD and VCCS by reducing energy consumption, minimizing delays, and enhancing data collection quality.
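
The paper's controller is interval type-2; as a simplified illustration only, here is a type-1 fuzzy scoring of candidate sink stops from residual energy and buffered data, with invented membership functions and thresholds.

```python
# Toy sketch of fuzzy-guided sink relocation: candidate stop points are
# scored from the residual energy and buffered data of nearby nodes
# using triangular memberships and a min t-norm. A simplified type-1
# stand-in for the paper's interval type-2 controller.
def tri(x, a, b, c):
    """Triangular membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def score(residual_energy, buffered_kb):
    low_energy = tri(residual_energy, 0.0, 0.0, 0.5)   # favor energy-starved areas
    much_data = tri(buffered_kb, 0.0, 512.0, 512.0)    # favor full buffers
    return min(low_energy, much_data)                   # fuzzy AND (min t-norm)

# (residual energy fraction, buffered data in KB) near each candidate stop
candidates = {"A": (0.2, 400.0), "B": (0.8, 500.0), "C": (0.3, 100.0)}
best = max(candidates, key=lambda k: score(*candidates[k]))
print(best)  # the sink moves toward low-energy, data-heavy regions -> 'A'
```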

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Cultural Respect Encompassing Simulation Training (CREST) is a learning program that uses simulation to provide health professional students and practitioners with strategies to communicate sensitively with culturally and linguistically diverse (CALD) patients. It consists of training modules with a cultural competency evaluation framework and CALD simulated patients who interact with trainees in immersive simulation scenarios. The aim of this study was to test the feasibility of expanding the delivery of CREST to rural Australia using live video streaming, and to investigate the fidelity of cultural sensitivity - defined within the process of cultural competency, which includes awareness, knowledge, skills, encounters and desire - of the streamed simulations. DESIGN AND METHODS: In this mixed-methods evaluative study, health professional trainees were recruited at three rural academic campuses and one rural hospital to pilot CREST sessions delivered via live video streaming and simulation from the city campus in 2014. Cultural competency, teaching and learning evaluations were conducted. RESULTS: Forty-five participants rated 26 reliable items before and after each session and reported statistically significant improvement in 4 of 5 cultural competency domains, particularly in cultural skills (P<0.05). Qualitative data indicated an overall acknowledgement amongst participants of the importance of communication training and of the quality of the simulation training provided remotely by CREST. CONCLUSIONS: Cultural sensitivity education using live video streaming and simulation can contribute to health professionals' learning and is effective in improving cultural competency. CREST has the potential to be embedded within health professional curricula across Australian universities to address issues of health inequalities arising from a lack of cultural sensitivity training. SIGNIFICANCE FOR PUBLIC HEALTH: There are significant health inequalities for migrant populations. They commonly have poorer access to health services and poorer health outcomes than the Australian-born population. The factors are multiple and complex, and include language and cultural barriers. To address these disparities, culturally competent patient-centred care is increasingly recognised as critical to improving care quality, patient satisfaction, patient compliance and patient outcomes. Yet there is a lack of quality in the teaching and learning of cultural competence in healthcare education curricula, particularly in rural settings where qualified trainers and resources can be limited. The Cultural Respect Encompassing Simulation Training (CREST) program offers health professional students and practitioners opportunities to learn and develop communication skills with professionally trained, culturally and linguistically diverse simulated patients who contribute their experiences and health perspectives. It has already been shown to contribute to health professionals' learning and to be effective in improving cultural competency in urban settings. This study demonstrates that CREST, when delivered via live video streaming and simulation, can achieve similar results in rural settings.

Relevance:

10.00%

Publisher:

Abstract:

Virtually every sector of business and industry that uses computing, including financial analysis, search engines, and electronic commerce, incorporates Big Data analysis into its business model. Sophisticated clustering algorithms are popular for deducing the nature of data by assigning labels to unlabeled data. We address two main challenges in Big Data. First, by definition, the volume of Big Data is too large to be loaded into a computer’s memory (this volume changes based on the computer used or available, but there is always a data set that is too large for any computer). Second, in real-time applications, the velocity of new incoming data prevents historical data from being stored and future data from being accessed. Therefore, we propose our Streaming Kernel Fuzzy c-Means (stKFCM) algorithm, which significantly reduces both computational complexity and space complexity. The proposed stKFCM requires only O(n²) memory, where n is the (predetermined) size of a data subset (or data chunk) at each time step, which makes the algorithm truly scalable (as n can be chosen based on the available memory). Furthermore, only 2n² elements of the full N × N kernel matrix (where N ≫ n) need to be calculated at each time step, reducing both the time spent producing kernel elements and the complexity of the FCM algorithm. Empirical results show that stKFCM, even with relatively small n, can provide clustering performance as accurate as kernel fuzzy c-means run on the entire data set, while achieving a significant speedup.
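
A toy, in-memory illustration of the per-chunk kernel fuzzy c-means update that stKFCM applies at each time step, so that only an n × n kernel block is ever materialized; the RBF kernel, chunk size, and fuzzifier are assumptions, and the chunk-to-chunk carryover that defines stKFCM proper is omitted.

```python
# Kernel fuzzy c-means on a single data chunk: memberships U (c x n) are
# iterated against feature-space distances computed from the n x n kernel
# matrix alone, never the full N x N matrix. Data and parameters invented.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
n, c, m = len(X), 2, 2.0                    # chunk size, clusters, fuzzifier

sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                       # RBF kernel, n x n only

U = rng.random((c, n)); U /= U.sum(0)       # memberships, columns sum to 1
for _ in range(30):
    W = U ** m
    s = W.sum(1, keepdims=True)
    # squared feature-space distance from each point to each implicit center:
    # K_ii - 2*(W K)_ji / s_j + (W K W^T)_jj / s_j^2
    d2 = (np.diag(K)[None, :]
          - 2 * (W @ K) / s
          + ((W @ K) * W).sum(1, keepdims=True) / s ** 2)
    d2 = np.maximum(d2, 1e-12)
    inv = 1.0 / d2 ** (1.0 / (m - 1))
    U = inv / inv.sum(0)                    # standard FCM membership update

print(U.argmax(0)[:5], U.argmax(0)[-5:])    # the two clusters are recovered
```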

Relevance:

10.00%

Publisher:

Abstract:

Today, most of the data traveling across the network belongs to multimedia content. More specifically, video streaming dominates Internet traffic; given the growth this service has seen in recent years, several studies have pursued techniques and methodologies to improve it. One of these is certainly Adaptive Video Streaming, a technique used to guarantee the user a good Quality of Experience (QoE) through so-called "rate adaptation algorithms". The work carried out in these studies focuses on two distinct but comparable strands: the first part of the thesis concerns the development and analysis of several rate adaptation algorithms for DASH, while the second covers the implementation of a new algorithm to complement them, improving QoE by monitoring the state of the connection. To this end, an Android application for video streaming was implemented that conforms to the MPEG-DASH standard and provides the test data used for the analyses. The thesis is divided into four chapters: the first introduces the topic and defines the terminology needed to understand the studies; the second describes related work on adaptive streaming and introduces the two main strands of the thesis, namely the rate adaptation algorithms and the proposed algorithm for dynamic segment selection; the third presents the SSDash app, used as the vehicle for the experimental analyses; finally, the fourth and last chapter presents the results of the analyses and the corresponding evaluations.
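
As a minimal illustration of the rate-adaptation family the thesis analyzes, a throughput- and buffer-driven selection rule in Python; the bitrate ladder, safety margin, and thresholds are invented, not the thesis's algorithms.

```python
# Sketch of a throughput- and buffer-based DASH rate adaptation rule:
# pick the highest representation sustainable at a safety margin, and
# drop to the lowest one when the playout buffer runs low.
BITRATES_KBPS = [235, 750, 1750, 4300]   # hypothetical representation ladder

def next_bitrate(throughput_kbps, buffer_s, safety=0.8, low_buffer_s=10.0):
    if buffer_s < low_buffer_s:          # rebuffering imminent: be conservative
        return BITRATES_KBPS[0]
    budget = throughput_kbps * safety    # headroom for throughput variance
    feasible = [b for b in BITRATES_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

# After each downloaded segment, re-estimate throughput and choose again:
print(next_bitrate(throughput_kbps=3000, buffer_s=25))  # -> 1750
print(next_bitrate(throughput_kbps=3000, buffer_s=5))   # -> 235
```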

Relevance:

10.00%

Publisher:

Abstract:

The popularity of cloud computing has led to a dramatic increase in the number of data centers in the world. Ever-increasing computational demands, along with the slowdown in technology scaling, have ushered in an era of power-limited servers. Techniques such as near-threshold computing (NTC) can be used to improve energy efficiency in the post-Dennard scaling era. This paper describes an architecture based on the FD-SOI process technology for near-threshold operation in servers. Our work explores the trade-offs in energy and performance when running a wide range of applications found in private and public clouds, ranging from traditional scale-out applications, such as web search or media streaming, to virtualized banking applications. Our study demonstrates the benefits of near-threshold operation and proposes several directions to synergistically increase the energy proportionality of a near-threshold server.
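
The energy argument behind near-threshold operation can be made concrete with a textbook first-order CMOS model (an assumption of this note, not a model from the paper): dynamic energy per operation scales roughly with V², while achievable frequency falls only about linearly as the supply approaches the threshold voltage.

```python
# First-order CMOS scaling model (textbook approximation): dynamic energy
# per op ~ C * V^2, relative frequency ~ (V - Vth) / V (alpha-power model
# with alpha ~ 1). All numbers are illustrative, not from the paper.
VTH = 0.3  # assumed threshold voltage, volts

def energy_per_op(v, c=1.0):
    return c * v ** 2                 # relative dynamic energy

def rel_frequency(v):
    return max(v - VTH, 0.0) / v      # relative achievable clock

for v in (1.0, 0.7, 0.5, 0.4):        # nominal supply down to near-threshold
    print(f"V={v:.1f}  energy/op={energy_per_op(v):.2f}  "
          f"freq={rel_frequency(v):.2f}")
# Dropping 1.0 V -> 0.5 V cuts energy/op 4x while frequency falls ~1.8x,
# which is why NTC helps throughput-oriented, power-limited servers.
```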

Relevance:

10.00%

Publisher:

Abstract:

Despite the existence of a multitude of studies on sentiment analysis, few works address its practical, real-world deployment and its integration with business intelligence and big data, such that the sentiment analyses are embedded in an architecture (supporting the whole process from data acquisition to exploitation with BI tools) applied to crisis management. This work investigates how to join the worlds of analysis (sentiment and crisis) and technology (everything related to business intelligence, data mining, and Big Data), and to create a Business Intelligence solution that encompasses data mining and sentiment analysis (based on large volumes of data) and helps companies and/or governments with crisis management. The author set out to study ways of working with large volumes of data, currently known as Big Data Science, or data science applied to large volumes of data (Big Data), and to combine this technology with sentiment analysis of a real situation (in this work, the chosen situation was the impeachment process of the President of Brazil, Dilma Rousseff). This combination used business intelligence techniques to build dashboards and ETL (extract, transform, load) routines for the data, as well as text mining and sentiment analysis techniques. The work was developed in several parts and with several data sources (datasets), owing to the various technology trials over the course of the project. One of the most important datasets in the project is the set of tweets collected between December 2015 and January 2016. The collected messages contained the word "Dilma", and all tweets were gathered with the Twitter Streaming API. It is important to understand that what is published on the social network Twitter cannot be manipulated and represents the opinion of the person or entity posting the message; for this reason, mining Twitter data can be very efficient and truthful. On December 3, 2015, the petition to open the impeachment process against President Dilma Rousseff of Brazil was accepted by the president of the Chamber of Deputies, Mr. Eduardo Cunha (PMDB-RJ), creating expectation about the population's sentiment and the future of Brazil. Google search data for the word "Dilma" was also collected; based on these data, the goal is to reach a global sentiment analysis (not based only on the collected tweets). Using just two sources (Twitter and Google searches), a great deal of data was extracted, but there are many other sources from which information about people's opinions on a particular topic can be obtained. Thus, a tool that can collect, extract, and store so much data, and present the information effectively in support of decision-making, contributes to crisis management.
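
As an illustration of the collection step, a sketch using the tweepy v3-style wrapper around the Twitter Streaming API; the abstract only says the Streaming API was used, so the library choice, placeholder credentials, and JSON-lines storage are assumptions.

```python
# Keyword-filtered tweet collection of the kind the thesis describes,
# written against the tweepy v3 StreamListener interface. Credentials
# are placeholders; each matching tweet is appended as one JSON line
# for later ETL into the BI pipeline.
import json
import tweepy

class KeywordListener(tweepy.StreamListener):
    def on_status(self, status):
        with open("dilma_tweets.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(status._json, ensure_ascii=False) + "\n")

    def on_error(self, status_code):
        return status_code != 420   # stop on rate limiting, else keep streaming

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # placeholders
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
stream = tweepy.Stream(auth=auth, listener=KeywordListener())
stream.filter(track=["Dilma"])      # same keyword filter used in the thesis
```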