941 results for Content Analytics
Abstract:
A Table of Contents can be tweaked so that it picks up content from only part of a file (such as an Appendix). This video shows you how to make such a change to a Table of Contents that is based upon Heading Styles. For best viewing, download the video.
Abstract:
Given the importance of the funeral sector in Colombia, the work presented here describes the sector in terms of three topics of interest. The first chapter covers the description of the industry and its services. The second chapter analyses concentration and financial indicators for the years 2000 to 2010. Finally, the third section presents international aspects of the sector, such as the industry's institutions and associations, its regulation, and innovation at the global level.
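The abstract does not name the specific concentration indicators used; a standard choice for this kind of industry analysis is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. A minimal sketch with invented market-share figures:

```python
def hhi(market_shares):
    """Herfindahl-Hirschman Index: the sum of squared percentage market
    shares. Roughly, below 1500 reads as unconcentrated, 1500-2500 as
    moderately concentrated, and above 2500 as highly concentrated."""
    return sum(share ** 2 for share in market_shares)

# Hypothetical revenue shares (in percent) for firms in a funeral-services
# market; these figures are illustrative, not taken from the study.
shares = [30.0, 25.0, 20.0, 15.0, 10.0]
assert abs(sum(shares) - 100.0) < 1e-9
print(f"HHI = {hhi(shares):.0f}")  # 2250 -> moderately concentrated
```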
Abstract:
Description of student-generated content that you can choose to submit.
Abstract:
Resources from the Singapore Summer School 2014 hosted by NUS. ws-summerschool.comp.nus.edu.sg
Abstract:
Getting content from server to client can be more complicated than we have discussed so far. This lecture discusses how caching and content delivery networks help to make the Web work.
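As a minimal illustration of the revalidation machinery behind Web caching, the sketch below issues a conditional HTTP request so that the origin server (or a CDN edge) can answer 304 Not Modified instead of resending the body. The URL is a placeholder, not one from the lecture:

```python
import urllib.error
import urllib.request

URL = "https://example.com/"  # placeholder resource

# First request: fetch the resource and note its cache validators.
with urllib.request.urlopen(URL) as resp:
    print("Cache-Control:", resp.headers.get("Cache-Control"))
    etag = resp.headers.get("ETag")

# Revalidation: If-None-Match lets the server answer "304 Not Modified",
# so the cached body is reused instead of being downloaded again.
if etag:
    req = urllib.request.Request(URL, headers={"If-None-Match": etag})
    try:
        with urllib.request.urlopen(req) as resp:
            print("Status:", resp.status)  # 200: full body sent again
    except urllib.error.HTTPError as err:
        if err.code == 304:
            print("304 Not Modified: serve the cached copy")
        else:
            raise
```

The same exchange is what lets a CDN edge node answer most requests without going back to the origin at all.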
Abstract:
Real-time geoparsing of social media streams (e.g. Twitter, YouTube, Instagram, Flickr, FourSquare) is providing a new 'virtual sensor' capability to end users such as emergency response agencies (e.g. tsunami early warning centres, civil protection authorities) and news agencies (e.g. Deutsche Welle, BBC News). Challenges in this area include scaling up natural language processing (NLP) and information retrieval (IR) approaches to handle real-time traffic volumes, reducing false positives, creating real-time infographic displays useful for effective decision support, and providing support for trust and credibility analysis using geosemantics. In this seminar I will present ongoing work by the IT Innovation Centre over the last 4 years (TRIDEC and REVEAL FP7 projects) in building such systems, and highlight our research towards improving the trustworthiness and credibility of crisis-map displays and real-time analytics for trending topics and influential social networks during major newsworthy events.
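As a toy sketch of the core gazetteer-lookup step in geoparsing (the systems described above add NLP disambiguation, streaming infrastructure and credibility analysis on top of this), with an invented gazetteer and messages:

```python
import re

# Toy gazetteer: place name -> (latitude, longitude). A real system would
# use a large resource such as GeoNames and disambiguate ambiguous names.
GAZETTEER = {
    "padang": (-0.95, 100.35),
    "lisbon": (38.72, -9.14),
    "santorini": (36.39, 25.46),
}

def geoparse(message):
    """Return (place, coords) pairs for gazetteer entries found in a message."""
    tokens = re.findall(r"[a-z']+", message.lower())
    return [(t, GAZETTEER[t]) for t in tokens if t in GAZETTEER]

# Hypothetical stream items, e.g. posts mentioning shaking near Padang.
stream = [
    "Strong shaking felt in Padang just now",
    "Anyone else feel that?",
]
for msg in stream:
    print(msg, "->", geoparse(msg))
```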
Abstract:
Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters. The potential for their use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information more effectively into disaster-management processes. The type and value of information shared should be assessed, determining the benefits and issues, with credibility and reliability as known concerns. Mapping tweets onto the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK Storms and Floods of 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classed into B. Faulkner's Disaster Management Lifecycle framework. Context is provided when the analysed tweets are observed against the timeline. This illustrates a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted in each month with the timeline suggests that users tweet more as an event heightens and persists. Furthermore, users generally express greater emotion and urgency in their tweets. This paper concludes that thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just the response phase, potentially improving future policies and activities.

Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy. This demonstrates the need for more informative policies. However, SNS lack the incentives required to improve policies, a problem exacerbated by the difficulty of creating a policy that is both concise and compliant. Standardization addresses many of these issues, providing benefits for users and SNS, although it is only possible if policies share attributes which can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS and the coverage of forty recommendations made by the UK Information Commissioner's Office. Analysis showed that whilst similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses but convey similar information. The analysis also showed that the low similarity in clauses was largely due to differences in semantics, elaboration and functionality between SNS. This paper therefore proposes that the policies of SNS already share attributes, indicating the feasibility of standardization, and five recommendations are made to begin facilitating this, based on the findings of the investigation.
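The Jaccard similarity coefficient used in the second study is J(A, B) = |A ∩ B| / |A ∪ B|: the proportion of attributes two policies share out of all attributes either uses. A minimal sketch, with invented clause labels standing in for the measured attributes:

```python
def jaccard(a, b):
    """Jaccard similarity coefficient: |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets count as identical
    return len(a & b) / len(a | b)

# Hypothetical clause sets for two social networking sites' privacy
# policies; the labels are illustrative, not taken from the paper.
sns_a = {"data-collection", "cookies", "third-party-sharing", "retention"}
sns_b = {"data-collection", "cookies", "advertising"}
print(f"J = {jaccard(sns_a, sns_b):.2f}")  # 2 shared / 5 total = 0.40
```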
Abstract:
After producing reviews of A-level Chemistry content in 2007 and 2010, we have updated the document to reflect the changes introduced for first teaching in September 2015. We will be working with our network of teachers locally to monitor the impact of the changes on teaching and the student experience, with a view to releasing an updated version in the summer of 2017. This will aim to provide insights for university staff regarding the experiences of incoming students from the first cohort to have studied the new specifications. We are grateful to the Royal Society of Chemistry for its support in the final stages of compiling this report. If you spot any errors or omissions, please don't hesitate to contact us.
Abstract:
An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system. This is a system in which higher-order regions are continuously attempting to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reason, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse at what a unified (computationally tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state-of-the-art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine is one that establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not, and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour. The vision that emerges is one of 'homomimetic deep learning systems': systems that situate a hierarchically-organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.
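As a toy illustration of the error-driven updating at the heart of predictive processing (a one-level simplification, not the hierarchical architecture the talk describes), with an invented noisy sensory stream:

```python
import random

def predictive_coding_step(mu, x, lr=0.1):
    """One update: nudge the latent estimate mu to shrink the prediction
    error e = x - g(mu), here with the identity prediction g(mu) = mu."""
    error = x - mu          # prediction error signal
    return mu + lr * error  # error-driven revision of the estimate

# Noisy sensory stream around a hidden value of 5.0 (invented numbers).
random.seed(0)
mu = 0.0
for _ in range(200):
    x = 5.0 + random.gauss(0.0, 0.5)
    mu = predictive_coding_step(mu, x)
print(f"inferred latent value is approximately {mu:.2f}")  # settles near 5.0
```

In the hierarchical case this error-minimizing loop is stacked, with each layer predicting the activity of the one below it.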
Abstract:
The rise of the Internet has created new forms of information and communication. As a result, today's generation is culturally socialized under the influence of information and communication technologies in their various forms. This has generated a set of social and cultural behaviours derived from didactic, academic or recreational use. Nevertheless, the use of the Internet from an early age is not only a useful educational tool; it can also constitute a great danger when it is used to access content unsuitable for children's adaptive development. Accordingly, it is necessary to study the legal regulation of Internet content and to evaluate how such regulation may affect rights. Further, it is also important to study the impact and use of this technological tool at the level of the family unit, in order to better understand what social mechanisms are appropriate for the constructive use of the Internet. The present investigation addresses these two aspects with the purpose of uniting the legal and social perspectives in a joint analysis that allows a more complete view of this problem of great interest at the global level.
Abstract:
Smoking-related pictures and matched controls are useful tools in experimental tasks of attentional bias. Notably, the procedures used to produce and validate these pairs of pictures are poorly reported. This study aimed to describe the production, and evidence of validity, of a set of smoking-related pictures and their matched controls. Two studies were conducted to assess validity. An online Internet-based survey was used to assess the face validity of 12 pictures related to smoking behavior and 12 matched controls. All pictures were colored and measured 95 mm x 130 mm. Participants were asked whether the pictures were related to smoking behavior and also rated how strongly each picture was related to smoking behavior. The second study investigated attentional bias in smokers (n = 47) and non-smokers (n = 50), and examined how they assessed all pictures in terms of pleasantness and the 12 smoking-related pictures in terms of relevance to their own smoking behavior. Craving was assessed before and after the experiment. Results indicate that this set of pictures is valid, since the smoking-related pictures were considered more related to smoking behavior than their matched controls. Moreover, smokers showed greater attentional bias for smoking-related pictures than non-smokers. Craving and the rated relevance of the smoking-related pictures were higher in smokers than in non-smokers. Smokers also considered the smoking-related pictures less unpleasant than non-smokers did. These findings provide evidence of the face and content validity of this set of pictures, which will be made available to researchers, contributing to the standardization of future investigations.
Abstract:
Abstract taken from the publication.