435 results for analytics


Relevance: 10.00%

Abstract:

In contrast to Muslim traditions and customs, the US government and society seem to invest in the media to forge discourses on the Western way of life. In addition, they create idealized images of the woman, the hero, the father and the family, and an everyday speech invoking repeated and widespread moral values, including “justice” and “freedom”, in opposition to “terror”. In this research we analysed the TV series Homeland, drawing on Cultural Studies as theoretical support, particularly Denise Jodelet's concept of Social Representation, the analytical tools created by Michel Foucault concerning apparatuses of power, and the feminist studies of Teresa de Lauretis. I sought to examine how correlations of forces operate, and how representations of womanhood, sexuality and nationality are built and reiterated in discourses, creating patterns of behaviour for men and women. By spreading images of the “good” man, the “good” wife and the “hero”, the audio-visual product creates and reproduces the family, the society and the nation considered exemplary.

Relevance: 10.00%

Abstract:

The organisational decision-making environment is complex, and decision makers must deal with uncertainty and ambiguity on a continuous basis. Managing decision problems and implementing a solution requires an understanding of the complexity of the decision domain, to the point where the problem and its complexity, as well as the requirements for supporting decision makers, can be described. Research in the Decision Support Systems domain has been extensive over the last thirty years, with an emphasis on developing further technology and better applications on the one hand, and, on the other, a social approach focused on understanding what decision making is about and how developers and users should interact. This research project takes a combined approach that endeavours to understand the thinking behind managers' decision making, as well as their informational and decisional guidance and decision support requirements. It utilises a cognitive framework, developed in 1985 by Humphreys and Berkeley, that juxtaposes the mental processes and ideas of decision problem definition and problem solution, which are developed in tandem through cognitive refinement of the problem, based on the analysis and judgement of the decision maker. The framework separates what is essentially a continuous process into five distinct levels of abstraction of managers' thinking, and suggests a structure for the underlying cognitive activities. Alter (2004) argues that decision support provides a richer basis than decision support systems, in both practice and research. The literature on decision support, especially regarding modern high-profile systems such as Business Intelligence and Business Analytics, can give the impression that all 'smart' organisations utilise decision support and data analytics capabilities for all of their key decision-making activities. However, this empirical investigation indicates a very different reality.

Relevance: 10.00%

Abstract:

Discussion tools in existing LEs have few or no integrated tools to analyse student learning. This paper not only proposes tools for integrating social network analytics, but also argues why we need to semantically tag and track key concepts within posts in order to make student learning in discussions visible. Using screenshots of existing LEs and UI mockups of semantically aware discussion tools, the paper makes the case for semantic markup as an element of next-generation LEs.
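As a rough illustration of the kind of semantic tagging the paper argues for, the following sketch marks occurrences of course concepts in a discussion post against a concept vocabulary. The vocabulary, concept IDs and post text are hypothetical placeholders, not something defined in the paper.

```python
# Minimal sketch: tagging key course concepts in a discussion post so that
# student engagement with those concepts can be tracked and visualised.
# The concept vocabulary and post text are illustrative placeholders.

import re

CONCEPT_VOCABULARY = {
    "recursion": "course:cs101/concepts/recursion",
    "base case": "course:cs101/concepts/base-case",
}

def tag_concepts(post_text: str) -> list[dict]:
    """Return semantic tags for every vocabulary concept found in a post."""
    tags = []
    for label, concept_id in CONCEPT_VOCABULARY.items():
        for match in re.finditer(re.escape(label), post_text, re.IGNORECASE):
            tags.append({"concept": concept_id, "span": match.span()})
    return tags

post = "I think the bug is a missing base case, so the recursion never stops."
for tag in tag_concepts(post):
    print(tag)
```

Tags of this kind, aggregated per student over a discussion thread, are what would make concept-level learning visible in the dashboards the paper envisages.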

Relevance: 10.00%

Abstract:

Secure Access For Everyone (SAFE) is an integrated system for managing trust using a logic-based declarative language. Logical trust systems authorize each request by constructing a proof from a context, a set of authenticated logic statements representing credentials and policies issued by various principals in a networked system. A key barrier to practical use of logical trust systems is the problem of managing proof contexts: identifying, validating, and assembling the credentials and policies that are relevant to each trust decision.

SAFE addresses this challenge by (i) proposing a distributed authenticated data repository for storing the credentials and policies, and (ii) introducing a programmable credential discovery and assembly layer that generates the appropriate tailored context for a given request. The authenticated data repository is built upon a scalable key-value store, with its contents named by secure identifiers and certified by the issuing principal. The SAFE language provides scripting primitives to generate and organize logic sets representing credentials and policies, materialize the logic sets as certificates, and link them to reflect delegation patterns in the application. The authorizer fetches the logic sets on demand, then validates and caches them locally for further use. Upon each request, the authorizer constructs the tailored proof context and provides it to the SAFE inference engine for certified validation.

Delegation-driven credential linking with certified data distribution provides flexible and dynamic policy control, enabling security and trust infrastructure to be agile while addressing the perennial problems of today's certificate infrastructure: automated credential discovery, scalable revocation, and issuing credentials without relying on a centralized authority.

We envision SAFE as a new foundation for building secure network systems. We used SAFE to build secure services based on case studies drawn from practice: (i) a secure name service resolver, similar to DNS, that resolves a name across multi-domain federated systems; (ii) a secure proxy shim that delegates access control decisions in a key-value store; (iii) an authorization module for a networked infrastructure-as-a-service system with a federated trust structure (the NSF GENI initiative); and (iv) a secure cooperative data analytics service that adheres to individual secrecy constraints while disclosing the data. We present an empirical evaluation based on these case studies and demonstrate that SAFE supports a wide range of applications with low overhead.
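The abstract does not show the SAFE language itself, but the discovery-and-assembly idea can be sketched: starting from a root logic set, follow delegation links, validate each fetched set, and merge the statements into a tailored proof context. The store contents, identifiers and validation hook below are illustrative stand-ins, not SAFE's actual API.

```python
# Illustrative sketch (not the actual SAFE language): assembling a tailored
# proof context by following credential links, in the spirit of SAFE's
# delegation-driven discovery. Store, identifiers, and the validation
# routine are hypothetical stand-ins.

from typing import Callable

# A "logic set" pairs certified statements with links to further sets.
store: dict[str, dict] = {
    "hash(alice/policy)": {
        "statements": ["mayAccess(?P, repo) :- delegatesTo(alice, ?P)"],
        "links": ["hash(bob/credential)"],
    },
    "hash(bob/credential)": {
        "statements": ["delegatesTo(alice, bob)"],
        "links": [],
    },
}

def assemble_context(root: str, validate: Callable[[dict], bool]) -> list[str]:
    """Fetch, validate, and merge linked logic sets into one proof context."""
    context, frontier, seen = [], [root], set()
    while frontier:
        ref = frontier.pop()
        if ref in seen:
            continue
        seen.add(ref)
        logic_set = store[ref]          # fetch from the key-value store
        if not validate(logic_set):     # check issuer signature / freshness
            continue
        context.extend(logic_set["statements"])
        frontier.extend(logic_set["links"])
    return context

print(assemble_context("hash(alice/policy)", validate=lambda s: True))
```

The resulting context is exactly what the authorizer would hand to the inference engine; caching validated sets amortizes the fetch cost across requests.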

Relevance: 10.00%

Abstract:

This paper explores the effect of credit rating agency (CRA) reputation on the discretionary disclosures of corporate bond issuers. Academics, practitioners, and regulators disagree on the informational role played by major CRAs and the usefulness of credit ratings in shaping investors' perception of the credit risk of bond issuers. Using management earnings forecasts as a measure of discretionary disclosure, I find that investors demand more (less) disclosure from bond issuers when ratings become less (more) credible. In addition, using content analytics, I find that bond issuers disclose more qualitative information during periods of low CRA reputation to help investors better assess credit risk. That corporate managers alter their voluntary disclosure in response to CRA reputation shocks is consistent with credit ratings providing incremental information to investors and reducing adverse selection in lending markets. Overall, my findings suggest that managers rely on voluntary disclosure as a credible mechanism to reduce information asymmetry in bond markets.

Relevance: 10.00%

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.

In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons that can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist, with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples, such as 1) fluorescent taggants and 2) stochastic computing.
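As a hedged illustration of the phase-type sampling idea, the sketch below simulates a small CTMC with an absorbing "photon emitted" state and draws emission-time samples, in the spirit of an exciton hopping between chromophores until fluorescence. The rate matrix is arbitrary, not a fitted RET network.

```python
# Minimal sketch: drawing samples from a phase-type distribution by
# simulating a small continuous-time Markov chain, analogous to an exciton
# hopping between chromophores until fluorescence. The rate matrix is an
# arbitrary illustrative example, not a fitted RET network.

import numpy as np

rng = np.random.default_rng(0)

# Transition rates (1/ns) among 3 transient states plus 1 absorbing
# "photon emitted" state; rates[i, j] is the rate from state i to j.
rates = np.array([
    [0.0, 2.0, 0.5, 0.1],
    [1.0, 0.0, 1.5, 0.3],
    [0.2, 0.8, 0.0, 2.5],
])

def sample_emission_time(start: int = 0) -> float:
    """Simulate the CTMC from `start` until absorption; return elapsed time."""
    state, t = start, 0.0
    while state < 3:                       # state 3 is absorbing (emission)
        out_rates = rates[state]
        total = out_rates.sum()
        t += rng.exponential(1.0 / total)  # exponential holding time
        state = rng.choice(4, p=out_rates / total)
    return t

samples = [sample_emission_time() for _ in range(10_000)]
print(f"mean emission time: {np.mean(samples):.3f} ns")
```

The empirical distribution of such emission times is exactly the phase-type distribution that, in the physical system, is programmed by the geometry of the chromophore network.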

When RET networks between chromophores are used to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime-coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.

Furthermore, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms for a wide range of applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware that traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor or GPU as a specialized functional unit, or organized as a discrete accelerator, to bring substantial speedups and power savings.

Relevance: 10.00%

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming, so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices.

The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. The contributions fall under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models, based on the MEV and mixed logit models, that allow paths to be correlated. The resulting route choice models are expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models.

The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process.

The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
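To make the dynamic programming idea concrete, here is a minimal sketch (our own toy example, not the thesis's code) of the value function computation underlying recursive-logit-style route choice models, V(k) = log sum_a exp(v(a|k) + V(head(a))), on a small directed network.

```python
# Minimal sketch of the dynamic programming step behind recursive-logit-style
# route choice models: expected maximum utility ("value") of reaching the
# destination from each node. The toy network and utilities are illustrative.

import math

# Directed arcs with deterministic utilities v(a|k) (negative travel costs).
arcs = {
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.0)],
    "C": [("D", -0.5)],
    "D": [],  # destination
}

def value_functions(dest: str, iters: int = 100) -> dict[str, float]:
    """Fixed-point iteration for the logit value functions."""
    V = {k: 0.0 if k == dest else -50.0 for k in arcs}
    for _ in range(iters):
        for k, out in arcs.items():
            if k == dest or not out:
                continue
            V[k] = math.log(sum(math.exp(u + V[j]) for j, u in out))
    return V

V = value_functions("D")
# Link choice probability at A: P(B|A) is proportional to exp(v + V[B]).
num = math.exp(-1.0 + V["B"])
den = num + math.exp(-1.5 + V["C"])
print(V, f"P(A->B) = {num / den:.3f}")
```

On a real network this fixed point is what makes estimation expensive, since it must be recomputed for every trial parameter vector, which is precisely the cost the thesis's decomposition and reformulation methods target.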

Relevance: 10.00%

Abstract:

University libraries routinely compile statistics on the use of their print collections and on in-person activity. In parallel, they have steadily incorporated electronic resources and services, which has motivated the development of international standards defining indicators to measure their use; nevertheless, having a standard software tool for this remains an open issue. On the other hand, several free and open-source programs exist for measuring the activity of a website. The aim of this work is to determine whether the free web analytics tools AWStats, Google Analytics and Piwik can be used to evaluate the use of electronic resources and services according to the indicators proposed by the standards ANSI/NISO Z39.7-2013, ISO 2789:2003, ISO 20983:2003, BS ISO 11620:2008, EMIS, COUNTER and ICOLC. To this end, the website and the online catalogue of the Biblioteca Florentino Ameghino, the central library of the Facultad de Ciencias Naturales y Museo of the Universidad Nacional de La Plata, Argentina, were used for the analysis in this research. The results reflect the characteristics of the indicators, the software and the case study. These characteristics are addressed in the conclusions in order to give context and perspective to the answer to the question of whether it is feasible to measure the use of a university library's electronic resources and services by means of web analytics software.

Relevance: 10.00%

Abstract:

Visual cluster analysis provides valuable tools that help analysts understand large data sets in terms of representative clusters and the relationships between them. Often, the clusters found are to be understood in the context of associated categorical, numerical or textual metadata given for the data elements. While often not part of the clustering process itself, such metadata play an important role and need to be considered during interactive cluster exploration. Traditionally, linked views allow analysts to relate (or, loosely speaking, correlate) clusters with metadata or other properties of the underlying cluster data. Manually inspecting the distribution of metadata for each cluster in a linked-view approach is tedious, especially for large data sets, where a large search problem arises. Fully interactive search for potentially useful or interesting cluster-to-metadata relationships can be a cumbersome and lengthy process.

To remedy this problem, we propose a novel approach for guiding users in discovering interesting relationships between clusters and associated metadata; its goal is to guide the analyst through the potentially huge search space. In this work we focus on metadata of categorical type, which can be summarized for a cluster in the form of a histogram. We start from a given visual cluster representation and compute certain measures of interestingness defined on the distribution of metadata categories for the clusters. These measures are used to automatically score and rank the clusters for potential interestingness regarding the distribution of categorical metadata. Identified interesting relationships are highlighted in the visual cluster representation for easy inspection by the user. We present a system implementing an encompassing, yet extensible, set of interestingness scores for categorical metadata, which can also be extended to numerical metadata. Appropriate visual representations are provided for showing the visual correlations as well as the calculated ranking scores. Focusing on clusters of time series data, we test our approach on a large real-world data set of time-oriented scientific research data, demonstrating how specific interesting views are automatically identified and how the approach supports the analyst in discovering interesting and visually understandable relationships.
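One plausible instance of such an interestingness measure, sketched below under our own assumptions (the paper defines a whole set of scores), rates a cluster's category histogram by its deviation from uniformity via normalised entropy: a strongly skewed histogram scores high, a uniform one scores zero.

```python
# Minimal sketch of one plausible interestingness score for categorical
# metadata per cluster: distance of the cluster's category histogram from
# the uniform distribution (low normalised entropy = more "interesting").
# The measure and data are illustrative, not the paper's exact scores.

import math
from collections import Counter

def interestingness(categories: list[str]) -> float:
    """1 - normalised entropy of the cluster's category histogram."""
    counts = Counter(categories)
    n, k = len(categories), len(counts)
    if k <= 1:
        return 1.0  # a single category is maximally skewed
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return 1.0 - entropy / math.log(k)

clusters = {
    "cluster_1": ["ocean", "ocean", "ocean", "ice"],
    "cluster_2": ["ocean", "ice", "land", "air"],
}
ranked = sorted(clusters, key=lambda c: interestingness(clusters[c]), reverse=True)
for c in ranked:
    print(c, round(interestingness(clusters[c]), 3))
```

Scores like this can be computed for every cluster-metadata pair and used to rank the views offered to the analyst, which is the guidance mechanism the paper describes.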

Relevance: 10.00%

Abstract:

This essay deals with the written conversations that young people carry on today in their chats. The different linguistic varieties involved are discussed. Chat writing amounts to a conversation: written texts become oral texts, conversations are transcribed, and linguistic norms are broken. This does not mean that young people do not know what those norms are; rather, in their chats, the norms simply do not interest them.

Relevance: 10.00%

Abstract:

This article analyses the figure of the prosumer from the perspective of visual studies, combining speech act theory and new media theory. The aim is to assess whether Michel de Certeau's distinction between producers and consumers, and between strategies and tactics, remains operative in the graphical interfaces of Scott Lash's global information culture. To this end, it distinguishes two types of performativity of speech acts: the top-down performativity of software, and the bottom-up performativity of language games and forms of life. These types are applied to a discourse analysis of the slogans appearing on the websites of "open" initiatives and of the collaborative economy, since the former are devoted to the production of immaterial goods and the latter to the production of material goods. The analysis shows how the two types of performativity transform the textual analysis of literary and film studies into a methodology capable of investigating material actions, both human and non-human. The conclusions describe the emergence of new narrative conventions of power and control, outside fiction, that point towards a "DIY society".

Relevance: 10.00%

Abstract:

This document describes the first bundle of core WP2 (user data analytics) client-side components, including their specifications, use cases, and working prototypes. The included assets contain a description of their current status, together with links to their full designs and downloadable versions. This deliverable only describes operational software assets (even if in beta) that are tested and documented. It should be noted, however, that various additional software assets (2.2d Cognitive Capacity Measurement and 2.3a Realtime Emotion Detection) are near completion for inclusion in games during the first pilot round. Those assets are still scheduled for inclusion in the final bundle deliverable D2.2.

Relevance: 10.00%

Abstract:

This document describes the first bundle of core WP2 (user data analytics) server-side components, including their specifications, use cases, and working prototypes. The included assets contain a description of their current status, together with links to their full designs and downloadable versions.

Relevance: 10.00%

Abstract:

This document describes the core components used to create customizable game analytics and dashboards: their present status, links to their full designs and downloadable versions, and how to configure them and take advantage of the analytics visualizations and the underlying architecture of the platform. All the dashboard components work with data collected in the xAPI data format that the RAGE project has developed in collaboration with ADL Co-Lab.
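For orientation, a statement in the xAPI format has an actor/verb/object structure with an optional result. The sketch below builds one such statement; the verb and activity IDs are hypothetical placeholders, not the RAGE profile's actual vocabulary.

```python
# Minimal sketch of an xAPI statement as the dashboards would consume it.
# The structure (actor / verb / object / result) follows the xAPI spec;
# the verb and activity IDs below are placeholders.

import json

statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Player 42",
        "account": {"homePage": "https://example.org", "name": "player42"},
    },
    "verb": {
        "id": "https://example.org/verbs/completed",   # placeholder verb
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/games/demo/level-3",  # placeholder activity
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

print(json.dumps(statement, indent=2))
```

Streams of statements like this, stored in a learning record store, are what the dashboard components aggregate into the analytics visualizations.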

Relevance: 10.00%

Abstract:

Clustering algorithms, pattern mining techniques and associated quality metrics have emerged as reliable methods for modeling learners' performance, comprehension and interaction in given educational scenarios. The specificity of the available data, such as missing values, extreme values or outliers, makes it challenging to extract user models that are significant from an educational perspective. In this paper we introduce a pattern detection mechanism within our data analytics tool, based on k-means clustering and on the SSE, silhouette, Dunn index and Xie-Beni index quality metrics. Experiments performed on a dataset obtained from our online e-learning platform show that the extracted interaction patterns were representative for classifying learners. Furthermore, the monitoring activities performed created a strong basis for generating automatic feedback to learners on their course participation, relying on their previous performance. In addition, our analysis introduces automatic triggers that highlight learners at risk of failing the course, enabling tutors to take timely action.
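As a rough sketch of the clustering-and-metrics pipeline described here (not the authors' tool), the following code runs k-means on synthetic learner features and reports SSE, silhouette and a hand-computed Xie-Beni index for several values of k; the feature names and data are invented for illustration.

```python
# Minimal sketch of a clustering-plus-quality-metrics loop: k-means over
# learner interaction features, scored with SSE (inertia), silhouette, and
# a hand-rolled Xie-Beni index. Synthetic data stands in for platform logs.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy learner features: [logins per week, avg quiz score].
X = np.vstack([rng.normal((2, 0.4), 0.3, (50, 2)),
               rng.normal((8, 0.8), 0.3, (50, 2))])

for k in (2, 3, 4):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sse = km.inertia_  # within-cluster sum of squared errors
    sil = silhouette_score(X, km.labels_)
    # Xie-Beni: compactness (SSE/n) over minimal squared centroid separation.
    d2 = [np.sum((a - b) ** 2) for i, a in enumerate(km.cluster_centers_)
          for b in km.cluster_centers_[i + 1:]]
    xb = sse / (len(X) * min(d2))
    print(f"k={k}: SSE={sse:.1f} silhouette={sil:.2f} Xie-Beni={xb:.3f}")
```

Comparing the metrics across k is the usual way to pick a cluster count whose patterns are stable enough to drive automatic feedback and at-risk triggers of the kind the paper reports.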