Abstract:
The Constitution Act, 1867 contains no express provision concerning any power of the federal and provincial governments to conclude international treaties, that power being reserved, at the time the Constitution Act, 1867 was adopted, to the British imperial authority. Moreover, a single provision set out how imperial treaties were to be implemented within the Canadian federation, and that provision is now spent. Since Canada's growing autonomy from the British Empire was not accompanied by a thorough revision of the text of the Canadian constitution, nothing was expressly provided regarding the law of treaties within the Canadian federation. The constitutional law governing international treaties is therefore the product of the Canadian tradition of "organic constitutionalism". This thesis examines that type of constitutionalism through the particular case of Canadian constitutional law relating to international treaties. It examines this subject while developing the legal consequences of the constitutional principle of federalism recognized by the Supreme Court of Canada in Reference re Secession of Quebec, [1998] 2 S.C.R. 217. More specifically, this thesis analyses in detail Canada (A.G.) v. Ontario (A.G.), [1937] A.C. 326 (the Labour Conventions case), in which the Privy Council held that while the federal executive may sign and ratify treaties on behalf of the Canadian state, implementing those treaties, where a legislative amendment is required to that end, falls to the legislative level with jurisdiction over the subject matter of the international obligation. The Privy Council did not, however, specify in that decision who has the power to conclude treaties relating to matters within provincial jurisdiction. This thesis tackles that question. It defends the position that no principle or rule of Canadian constitutional law or of international law requires that the federal executive hold a plenary and exclusive power over the conclusion of treaties. It further stresses that very important public policy reasons, grounded notably in the imperatives of expertise, institutional functionality and democracy, weigh against such a plenary and exclusive federal power. The institutional arrangement of the different existential communities present in Canada demands such decentralization. This thesis further demonstrates that the Canadian provinces alone possess a constitutional power to conclude treaties in areas falling within their fields of jurisdiction, a power whose exercise they may nonetheless delegate to the federal government. Finally, this thesis systematically and thoroughly analyses the arguments advanced in support of overturning the principles established by the Labour Conventions case with respect to the legislative implementation of treaties relating to provincial matters, and it demonstrates their lack of legal foundation.
It further demonstrates that, given the whole set of constitutional rules and principles that underlie and complete the meaning of that decision, overturning the Labour Conventions case would in practice transform the entire Canadian federation into a quasi-unitary state, since Parliament could then permanently and exclusively occupy all fields of provincial jurisdiction. That consequence is clearly prohibited by the constitutionally entrenched principle of federalism.
Abstract:
Objectives: An email information literacy program has been effective for over a decade at Université de Montréal's Health Library. Students periodically receive messages highlighting the content of guides on the library's website. We wish to evaluate, using Google Analytics, the effects of the program on specific webpage statistics. Using the data collected, we may pinpoint popular guides as well as others that need improvement. Methods: In the program, first- and second-year medical (MD) or dental (DMD) students receive eight bi-monthly email messages. The DMD mailing list also includes graduate students and professors. Enrollment in the program is optional for MDs, but mandatory for DMDs. Google Analytics (GA) profiles have been configured for the library's websites to collect visitor statistics since June 2009. The GA Links Builder was used to design unique links specifically associated with the originating emails. This approach allowed us to gather information on guide usage, such as the visitor's program of study, duration of page viewing and number of pages viewed per visit, as well as browsing data. We also followed the evolution of clicks on GA unique links over time, as we believed that users may keep the library's emails and refer to them to access specific information. Results: The proportion of students who actually clicked the email links was, on average, less than 5%. MD and DMD students behaved differently regarding guide views, number of pages visited and length of time on the site. The CINAHL guide was the most visited for DMD students, whereas MD students consulted the Pharmaceutical information guide most often. We noted that some students visited referred guides several weeks after receiving messages, thus keeping them for future reference; browsing to additional pages on the library website was also frequent. Conclusion: The mixed success of the program prompted us to directly survey students on the format, frequency and usefulness of messages. The information gathered from GA links as well as from the survey will allow us to redesign our web content and modify our email information literacy program so that messages are more attractive, timely and useful for students.
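The unique links mentioned above work by appending campaign parameters to each guide URL, which Google Analytics records when the visitor arrives. A minimal sketch of how such a link could be assembled follows; the helper name, URL and parameter values are illustrative, not taken from the actual program.

```python
from urllib.parse import urlencode

def build_campaign_link(base_url, source, medium, campaign, content):
    """Append Google Analytics UTM campaign parameters to a guide URL."""
    params = urlencode({
        "utm_source": source,      # e.g. which mailing list
        "utm_medium": medium,      # e.g. "email"
        "utm_campaign": campaign,  # e.g. the program cohort
        "utm_content": content,    # e.g. which message in the series
    })
    return f"{base_url}?{params}"

# Hypothetical example: tag a CINAHL guide link for the third DMD message.
print(build_campaign_link("https://example.org/guides/cinahl",
                          source="dmd-list", medium="email",
                          campaign="info-literacy", content="message-3"))
```

Because each message/guide pair gets its own tagged URL, clicks arriving weeks after a message was sent can still be traced back to the originating email.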
Abstract:
With the present research, we investigated effects of existential threat on veracity judgments. According to several meta-analyses, people tend to judge potentially deceptive messages of other people as true rather than as false (the so-called truth bias). This judgmental bias has been shown to depend on how people weigh the error of judging a true message as a lie (error 1) and the error of judging a lie as a true message (error 2). The weight of these errors has further been shown to be affected by situational variables. Given that research on terror management theory has found evidence that mortality salience (MS) increases sensitivity to compliance with cultural norms, especially when they are the focus of attention, we assumed that when the honesty norm is activated, MS affects judgmental error weighing and, consequently, judgmental biases. Specifically, activating the norm of honesty should decrease the weight of error 1 (the error of judging a true message as a lie) and increase the weight of error 2 (the error of judging a lie as a true message) when mortality is salient. In a first study, we found initial evidence for this assumption. Furthermore, the change in error weighing should reduce the truth bias, automatically resulting in better detection accuracy for actual lies and worse accuracy for actual true statements. In two further studies, we manipulated MS and honesty norm activation before participants judged several videos containing actual truths or lies. Results revealed evidence for our prediction. Moreover, in Study 3, the truth bias was increased after MS when group solidarity was previously emphasized.
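The error-weighing logic can be restated in signal detection terms: if "lie" is treated as the to-be-detected signal, the relative weights of the two errors set the optimal decision criterion. The sketch below uses the standard optimal-criterion formula from signal detection theory purely as an illustration; it is not a model reported in the abstract.

```python
def optimal_criterion(p_lie, cost_error1, cost_error2):
    """Likelihood-ratio criterion (beta) for judging a message a lie.

    error 1: judging a true message as a lie (a false alarm)
    error 2: judging a lie as a true message (a miss)
    Higher beta = more evidence needed to say "lie" = stronger truth bias.
    """
    p_true = 1.0 - p_lie
    return (p_true * cost_error1) / (p_lie * cost_error2)

# Equal error weights, half the messages deceptive: neutral criterion.
print(optimal_criterion(p_lie=0.5, cost_error1=1.0, cost_error2=1.0))  # 1.0
# Predicted MS effect with the honesty norm active: error 1 weighs less,
# error 2 weighs more, so beta drops and the truth bias shrinks.
print(optimal_criterion(p_lie=0.5, cost_error1=0.5, cost_error2=2.0))  # 0.25
```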
Abstract:
In this lecture for a second-year interdisciplinary course (part of the curriculum innovation programme), we explore the scope of social media analytics and look at two aspects in depth: analysing for influence (looking at factors such as network structure, propagation of content and interaction), and analysing for trust (looking at different methods including policy, provenance and reputation, both local and global). The lecture notes include a number of short videos, which cannot be included here for copyright reasons.
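As a concrete taste of analysing for influence via network structure (the toy graph and the choice of measures below are illustrative, not lecture material), simple centrality scores over an interaction graph already separate widely-referenced accounts from peripheral ones:

```python
import networkx as nx

# Toy interaction network: an edge u -> v means u shared or replied to v's content.
g = nx.DiGraph([
    ("alice", "bob"), ("carol", "bob"), ("dave", "bob"),
    ("bob", "erin"), ("erin", "alice"),
])

in_degree = dict(g.in_degree())   # raw attention received
pagerank = nx.pagerank(g)         # global influence under a random-surfer model

for user in g:
    print(f"{user}: in-degree={in_degree[user]}, pagerank={pagerank[user]:.3f}")
```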
Abstract:
Wednesday 26th March 2014 Speaker(s): Dr Trung Dong Huynh Organiser: Dr Tim Chown Time: 26/03/2014 11:00-11:50 Location: B32/3077 File size: 349 MB Abstract: Understanding the dynamics of a crowdsourcing application and controlling the quality of the data it generates is challenging, partly due to the lack of tools to do so. Provenance is a domain-independent means to represent what happened in an application, which can help verify data and infer their quality. It can also reveal the processes that led to a data item and the interactions of contributors with it. Provenance patterns can manifest real-world phenomena such as a significant interest in a piece of content, providing an indication of its quality, or even issues such as undesirable interactions within a group of contributors. In this talk, I will present an application-independent methodology for analysing provenance graphs, constructed from provenance records, to learn about such patterns and to use them for assessing some key properties of crowdsourced data, such as their quality, in an automated manner. I will also talk about CollabMap (www.collabmap.org), an online crowdsourcing mapping application, and show how we applied the approach above to the trust classification of data generated by the crowd, achieving an accuracy of over 95%.
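One way to picture the methodology: application-independent features are computed over each provenance graph and can then feed a standard classifier. The sketch below assumes PROV-style node types; the specific feature set and toy graph are guesses for illustration, not the ones used for CollabMap.

```python
import networkx as nx

def provenance_features(graph):
    """Simple, application-independent features of a provenance graph."""
    node_types = [data.get("type") for _, data in graph.nodes(data=True)]
    return {
        "n_entities": node_types.count("entity"),
        "n_activities": node_types.count("activity"),
        "n_agents": node_types.count("agent"),
        "n_edges": graph.number_of_edges(),
        "max_path_len": nx.dag_longest_path_length(graph),  # derivation depth
    }

# Toy provenance record: activity a1, run by agent u1, derives e2 from e1.
g = nx.DiGraph()
g.add_node("e1", type="entity")
g.add_node("e2", type="entity")
g.add_node("a1", type="activity")
g.add_node("u1", type="agent")
g.add_edges_from([("e2", "a1"), ("a1", "e1"), ("a1", "u1")])

print(provenance_features(g))  # such feature vectors could feed a trust classifier
```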
Abstract:
Resources from the Singapore Summer School 2014 hosted by NUS. ws-summerschool.comp.nus.edu.sg
Abstract:
Real-time geoparsing of social media streams (e.g. Twitter, YouTube, Instagram, Flickr, FourSquare) is providing a new 'virtual sensor' capability to end users such as emergency response agencies (e.g. Tsunami early warning centres, Civil protection authorities) and news agencies (e.g. Deutsche Welle, BBC News). Challenges in this area include scaling up natural language processing (NLP) and information retrieval (IR) approaches to handle real-time traffic volumes, reducing false positives, creating real-time infographic displays useful for effective decision support, and providing support for trust and credibility analysis using geosemantics. In this seminar I will present on-going work by the IT Innovation Centre over the last 4 years (TRIDEC and REVEAL FP7 projects) in building such systems, and highlight our research towards improving the trustworthiness and credibility of crisis map displays and real-time analytics for trending topics and influential social networks during major newsworthy events.
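To give a flavour of the geoparsing step itself (this toy sketch is mine, not TRIDEC/REVEAL code), the simplest baseline is a gazetteer lookup over the message text; the challenges named above, such as scale, ambiguity and false positives, are exactly where this naive version breaks down:

```python
import re

# Tiny illustrative gazetteer; a real system would use GeoNames or similar.
GAZETTEER = {
    "southampton": (50.9097, -1.4044),
    "jakarta": (-6.2088, 106.8456),
    "lisbon": (38.7223, -9.1393),
}

def geoparse(post):
    """Naive geoparser: match known place names in a social media post."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return [(t, GAZETTEER[t]) for t in tokens if t in GAZETTEER]

print(geoparse("Strong tremor felt in Jakarta, water receding near the coast"))
# [('jakarta', (-6.2088, 106.8456))]
```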
Abstract:
An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system. This is a system in which higher-order regions are continuously attempting to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reason, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse at what a unified (computationally-tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state-of-the-art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine is one that establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems, too, often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour. The vision that emerges is one of 'homomimetic deep learning systems', systems that situate a hierarchically-organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.
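The core computational motif is compact enough to show directly: a higher level holds an estimate of hidden causes, predicts the lower level's activity, and updates itself on the residual prediction error. The sketch below is a generic, Rao-and-Ballard-style illustration of that loop, offered for exposition rather than drawn from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(8, 3))   # generative weights: hidden causes -> sensory data
x = rng.normal(size=8)              # "sensory" input
r = np.zeros(3)                     # higher-level estimate of the hidden causes

for _ in range(200):
    prediction = W @ r              # top-down prediction of lower-level activity
    error = x - prediction          # bottom-up prediction error
    r += 0.1 * (W.T @ error)        # revise the estimate to reduce the error

print("remaining prediction error:", np.linalg.norm(x - W @ r))
```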
Abstract:
Abstract taken from the publication.
Abstract:
The current state of the art and direction of research in computer vision aimed at automating the analysis of CCTV images is presented. This includes low-level identification of objects within the field of view of cameras, following those objects over time and between cameras, and the interpretation of those objects' appearance and movements with respect to models of behaviour (and therefore the intentions inferred). The potential ethical problems (and some potential opportunities) such developments may pose if and when deployed in the real world are presented, and suggestions are made as to the new regulations that will be needed if such systems are not to further enhance the power of the surveillers against the surveilled.
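For readers unfamiliar with the low-level end of this pipeline, the sketch below shows its simplest form with OpenCV: background subtraction to find moving objects, followed by per-frame detections that a real system would link into tracks across frames and cameras. The video path is a placeholder, and this is a toy baseline rather than the state of the art surveyed above.

```python
import cv2

cap = cv2.VideoCapture("surveillance.mp4")         # placeholder input file
subtractor = cv2.createBackgroundSubtractorMOG2()  # learns the static background

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                 # foreground (moving) pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:               # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            # A real system would associate these boxes across frames
            # (and cameras) before attempting any behaviour interpretation.
            print(f"object at ({x}, {y}), size {w}x{h}")

cap.release()
```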
Abstract:
The concept of being 'patient-centric' is a challenge to many existing healthcare service provision practices. This paper focuses on the issue of referrals, where multiple stakeholders, i.e. general practitioners and patients, are encouraged to make a consensual decision based on patient needs. In this paper, we present an ontology-enabled healthcare service provision approach, which facilitates both patients and GPs in jointly deciding upon the referral decision. In the healthcare service provision model, we define three types of profile, which represent different stakeholders' requirements. This model also comprises a set of healthcare service discovery processes: articulating a service need, matching the need with the healthcare service offerings, and deciding on a best-fit service for acceptance. As a result, the healthcare service provision can carry out coherent analysis using personalised information and iterative processes that deal with requirements change over time.
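To make the matching step concrete, the sketch below scores each service offering against an articulated need and surfaces the best fit for the joint GP/patient decision. The profile fields, scoring rule and clinic names are hypothetical, and a flat numeric score stands in for the paper's ontology-based reasoning.

```python
from dataclasses import dataclass

@dataclass
class NeedProfile:            # the articulated service need
    specialty: str
    max_wait_weeks: int
    languages: set

@dataclass
class ServiceOffering:        # a healthcare service description
    name: str
    specialty: str
    wait_weeks: int
    languages: set

def match_score(need, offer):
    """Score an offering against a need; -1 rules it out entirely."""
    if offer.specialty != need.specialty:
        return -1.0
    if not (offer.languages & need.languages):
        return -1.0
    return max(0.0, need.max_wait_weeks - offer.wait_weeks)  # prefer short waits

need = NeedProfile("cardiology", max_wait_weeks=6, languages={"en"})
offers = [
    ServiceOffering("Clinic A", "cardiology", 8, {"en"}),
    ServiceOffering("Clinic B", "cardiology", 2, {"en", "fr"}),
]
print(max(offers, key=lambda o: match_score(need, o)).name)  # Clinic B
```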
Abstract:
This paper discusses how global financial institutions are using big data analytics within their compliance operations. Much previous research has focused on the strategic implications of big data, but little has considered how such tools are entwined with regulatory breaches and investigations in financial services. Our work covers two in-depth qualitative case studies, each addressing a distinct type of analytics. The first case focuses on analytics which manage everyday compliance breaches and so are expected by managers. The second case focuses on analytics which facilitate investigation and litigation where serious unexpected breaches may have occurred. In doing so, the study focuses on the micro-level of data practices to understand how these tools are influencing operational risks and practices. The paper draws on two bodies of literature, the social studies of information systems and of finance, to guide our analysis and practitioner recommendations. The cases illustrate how technologies are implicated in multijurisdictional challenges and regulatory conflicts at each end of the operational risk spectrum. We find that compliance analytics are both shaping and reporting regulatory matters, yet firms often have difficulty recruiting individuals with the relevant but diverse skill sets. The cases also underscore the increasing need for financial organizations to adopt robust information governance policies and processes to ease future remediation efforts.
Abstract:
This presentation was offered as part of the CUNY Library Assessment Conference, Reinventing Libraries: Reinventing Assessment, held at the City University of New York in June 2014.