72 results for Personalisation
Abstract:
Personalised video can be achieved by inserting objects into a video play-out according to the viewer's profile. Content which has been authored and produced for general broadcast can take on additional commercial service features when personalised either for individual viewers or for groups of viewers participating in entertainment, training, gaming or informational activities. Although several scenarios and use-cases can be envisaged, we are focussed on the application of personalised product placement. Targeted advertising and product placement are currently garnering intense interest in the commercial networked media industries. Personalisation of product placement is a relevant and timely service for next generation online marketing and advertising and for many other revenue generating interactive services. This paper discusses the acquisition and insertion of media objects into a TV video play-out stream where the objects are determined by the profile of the viewer. The technology is based on MPEG-4 standards using object based video and MPEG-7 for metadata. No proprietary technology or protocol is proposed. To trade the objects into the video play-out, a Software-as-a-Service brokerage platform based on intelligent agent technology is adopted. Agencies, libraries and service providers are represented in a commercial negotiation to facilitate the contractual selection and usage of objects to be inserted into the video play-out.
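The profile-driven selection described above can be sketched in miniature. The snippet below is an illustrative toy, not the paper's MPEG-4/MPEG-7 pipeline or brokerage protocol: it simply scores candidate placement objects against a viewer's interest profile and picks the best match; all field names and data are invented for the example.

```python
# Hypothetical data model: a viewer profile maps interest tags to weights,
# and each candidate placement object carries a list of descriptive tags.

def select_object(viewer_profile, candidates):
    """Return the candidate whose tags best overlap the viewer's interests."""
    def score(obj):
        return sum(viewer_profile.get(tag, 0.0) for tag in obj["tags"])
    return max(candidates, key=score)

profile = {"sport": 0.9, "cars": 0.6, "cooking": 0.1}
candidates = [
    {"id": "obj-soda", "tags": ["cooking"]},
    {"id": "obj-suv", "tags": ["cars", "sport"]},
]
chosen = select_object(profile, candidates)  # the SUV object scores highest
```

In the paper's setting this selection would be the outcome of a commercial negotiation among agents rather than a local score, but the input (viewer profile) and output (object to insert into the play-out) are the same.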
Resumo:
Media content personalisation is a major challenge involving viewers as well as media content producer and distributor businesses. The goal is to provide viewers with media items aligned with their interests. Producers and distributors engage in item negotiations to establish the corresponding service level agreements (SLA). In order to address automated partner lookup and item SLA negotiation, this paper proposes the MultiMedia Brokerage (MMB) platform, which is a multiagent system that negotiates SLA regarding media items on behalf of media content producer and distributor businesses. The MMB platform is structured in four service layers: interface, agreement management, business modelling and market. In this context, there are: (i) brokerage SLA (bSLA), which are established between individual businesses and the platform regarding the provision of brokerage services; and (ii) item SLA (iSLA), which are established between producer and distributor businesses about the provision of media items. In particular, this paper describes the negotiation, establishment and enforcement of bSLA and iSLA, which occurs at the agreement and negotiation layers, respectively. The platform adopts a pay-per-use business model where the bSLA define the general conditions that apply to the related iSLA. To illustrate this process, we present a case study describing the negotiation of a bSLA instance and several related iSLA instances. The latter correspond to the negotiation of the Electronic Program Guide (EPG) for a specific end viewer.
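The relationship between the two SLA types can be sketched as follows. This is a hedged simplification with invented class and field names, not the MMB platform's actual API: a brokerage SLA (bSLA) fixes the general conditions, and an item SLA (iSLA) is only established if the offer respects them.

```python
# Illustrative sketch: the bSLA defines general conditions (here, a price
# ceiling and a pay-per-use fee) that govern every related iSLA negotiation.

class BrokerageSLA:
    def __init__(self, business, max_price, pay_per_use_fee):
        self.business = business
        self.max_price = max_price          # ceiling for any negotiated item
        self.pay_per_use_fee = pay_per_use_fee

    def permits(self, item_price):
        """An iSLA is valid only under the conditions of the governing bSLA."""
        return item_price <= self.max_price

def negotiate_item_sla(bsla, item, offered_price):
    """Return an iSLA record if the offer respects the bSLA, else None."""
    if not bsla.permits(offered_price):
        return None
    return {"item": item, "price": offered_price,
            "brokerage_fee": bsla.pay_per_use_fee,
            "governed_by": bsla.business}

bsla = BrokerageSLA("producer-A", max_price=100.0, pay_per_use_fee=2.5)
isla = negotiate_item_sla(bsla, "EPG-slot-1", 80.0)      # accepted
rejected = negotiate_item_sla(bsla, "EPG-slot-2", 150.0)  # exceeds ceiling
```

The pay-per-use business model shows up here as the brokerage fee carried into each established iSLA.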
Abstract:
This research focuses on the act of wedding invitation and aims, first, to reveal the characteristics of this act in invitation cards in French and in Vietnamese, and then to identify culturally determined similarities and differences. To this end, we described the formulation and pragmatic functioning of wedding invitation cards in the two languages from the standpoint of linguistic politeness. The results of the analysis then allowed us to identify the particularities in the formulation of the cards as well as in the politeness strategies favoured in the French and Vietnamese communities. This study leads us to conclude that, unlike the individuality that characterises Western culture, including that of France, the Vietnamese, with their rice-farming tradition, place much greater importance on the respect of honour, and fear losing face, making others lose face, being different from others, and standing out from the crowd. Consequently, they very often opt for traditional models of invitation cards without many elements of personalisation, and favour positive politeness.
Abstract:
This thesis studies the phenomenon of wedge politics from a communicational angle, proposing to identify and describe the main rhetorical practices associated with the deployment of a wedge politics strategy by the many actors in the public debate on Bill C-391, entitled Loi modifiant le Code criminel et la Loi sur les armes à feu (abrogation du registre des armes d'épaule). The rhetorical stance we adopt translates into a methodological approach and analyses nested in four stages: 1) the elaboration of a relatively broad historical perspective on the public debate surrounding Bill C-391; 2) a survey of the main actors and the discourses they produced at a high point of this debate, between May 2009 and May 2011; 3) a first analysis and general description of the rhetorical dynamics between the actors of the debate during this period; and finally, 4) a systematic analysis of the discourses exchanged between 1 August 2010 and 22 September 2010, allowing us to identify and describe the main rhetorical practices employed by the actors. The ten practices we identified are: the call to action, scapegoating, targeting, personalisation of the debate, blame, derision, the attribution of malevolent intentions, the threat of reprisals from voters, the exploitation of cleavages, and contrast. In conclusion, we discuss how these rhetorical practices can contribute to achieving the objectives of a wedge politics strategy.
Abstract:
Privacy policies define how online services collect, use and share users' data. Although they are the main means of informing users about the use of their private data, privacy policies are generally ignored by users, who find them too long and too vague: they often use difficult vocabulary and have no standard format. Privacy policies also confront users with a dilemma: either accept all of the content in order to use the service, or refuse it and be denied access. No other option is given to the user. The data collected from users allows online services to provide them with a service, but also to exploit that data for economic purposes (targeted advertising, resale, etc.). According to various studies, allowing users to benefit from this privacy economy could restore their trust and facilitate the continuity of exchanges on the Internet. In this thesis, we propose a privacy policy model, inspired by P3P (a recommendation of the W3C, the World Wide Web Consortium), extending its functionality and reducing its complexity. This model follows a well-defined format allowing users and online services to express their preferences and needs. Users can decide on the specific use and the sharing conditions of each piece of their private data. A negotiation phase analyses the needs of the online service and the preferences of the user in order to establish a privacy contract. The value of personal data is an important aspect of our study. While companies have means of estimating this value, in this thesis we apply a hierarchical multicriteria method. This method also allows each user to assign a value to their personal data according to the importance they attach to it. In this model, we also integrate a regulatory authority in charge of conducting the negotiations between users and online services, and of generating recommendations to users according to their profile and current trends.
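The hierarchical multicriteria valuation mentioned above can be sketched with a standard AHP-style priority computation. This is a hedged illustration, not the thesis's actual method: the user supplies pairwise importance judgements between data categories, and approximate priority weights are derived by normalising the columns of the judgement matrix and averaging the rows. All categories and numbers are invented for the example.

```python
# Approximate AHP priority vector: normalise each column of the pairwise
# comparison matrix, then average across each row.

def ahp_weights(matrix):
    n = len(matrix)
    col_sums = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
    return [sum(matrix[r][c] / col_sums[c] for c in range(n)) / n
            for r in range(n)]

# Pairwise judgements over three hypothetical data categories.
# matrix[i][j] = how much more important category i is than category j
# (reciprocal entries below the diagonal, as in AHP).
pairwise = [
    [1.0, 3.0, 5.0],    # location
    [1/3, 1.0, 3.0],    # contacts
    [1/5, 1/3, 1.0],    # browsing history
]
weights = ahp_weights(pairwise)  # higher weight = more valuable to the user
```

The resulting weights sum to one, so each weight can be read directly as the relative value the user places on that category of personal data.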
Abstract:
In recent years, progress in the area of mobile telecommunications has changed our way of life, in the private as well as the business domain. Mobile and wireless networks have ever increasing bit rates, mobile network operators provide more and more services, and at the same time costs for the usage of mobile services and bit rates are decreasing. However, mobile services today still lack functions that seamlessly integrate into users’ everyday life. That is, service attributes such as context-awareness and personalisation are often either proprietary, limited or not available at all. In order to overcome this deficiency, telecommunications companies are heavily engaged in the research and development of service platforms for networks beyond 3G for the provisioning of innovative mobile services. These service platforms are to support such service attributes. Service platforms are to provide basic service-independent functions such as billing, identity management, context management, user profile management, etc. Instead of developing own solutions, developers of end-user services such as innovative messaging services or location-based services can utilise the platform-side functions for their own purposes. In doing so, the platform-side support for such functions takes away complexity, development time and development costs from service developers. Context-awareness and personalisation are two of the most important aspects of service platforms in telecommunications environments. The combination of context-awareness and personalisation features can also be described as situation-dependent personalisation of services. The support for this feature requires several processing steps. The focus of this doctoral thesis is on the processing step, in which the user’s current context is matched against situation-dependent user preferences to find the matching user preferences for the current user’s situation. 
However, to achieve this, a user profile management system and corresponding functionality is required. These parts are also covered by this thesis. Altogether, this thesis provides the following contributions: The first part of the contribution is mainly architecture-oriented. First and foremost, we provide a user profile management system that addresses the specific requirements of service platforms in telecommunications environments. In particular, the user profile management system has to deal with situation-specific user preferences and with user information for various services. In order to structure the user information, we also propose a user profile structure and the corresponding user profile ontology as part of an ontology infrastructure in a service platform. The second part of the contribution is the selection mechanism for finding matching situation-dependent user preferences for the personalisation of services. This functionality is provided as a sub-module of the user profile management system. Contrary to existing solutions, our selection mechanism is based on ontology reasoning. This mechanism is evaluated in terms of runtime performance and in terms of supported functionality compared to other approaches. The results of the evaluation show the benefits and the drawbacks of ontology modelling and ontology reasoning in practical applications.
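The central processing step, matching the current context against situation-dependent user preferences, can be sketched in a much simpler form than the thesis's ontology-reasoning mechanism. The snippet below is a plain dictionary-based stand-in with invented names, useful only to show the shape of the input and output of that step.

```python
# Simplified stand-in for ontology-based preference selection: a preference
# applies when every condition in its "situation" part holds in the current
# context. The thesis performs this matching via ontology reasoning instead.

def matching_preferences(context, preferences):
    """Return the preferences whose situation conditions all hold in context."""
    return [p for p in preferences
            if all(context.get(k) == v for k, v in p["situation"].items())]

context = {"location": "office", "time_of_day": "morning"}
preferences = [
    {"situation": {"location": "office"}, "ringtone": "silent"},
    {"situation": {"location": "home", "time_of_day": "evening"},
     "ringtone": "loud"},
]
active = matching_preferences(context, preferences)  # office preference only
```

What ontology reasoning adds over this literal matching is subsumption: a context value such as "meeting room" could satisfy a condition on "office" if the ontology models one as a kind of the other.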
Abstract:
A self-study course for learning to program using the C programming language has been developed. A Learning Object approach was used in the design of the course. One of the benefits of the Learning Object approach is that the learning material can be reused for different purposes. The course developed is designed so that learners can choose the pedagogical approach most suited to their personal learning requirements. For all learning approaches, a set of common Assessment Learning Objects (ALOs, or tests) has been created. The design of formative assessments with ALOs can be carried out by the Instructional Designer, who groups ALOs to correspond to a specific assessment intention. The course is non-credit-earning, so there is no summative assessment; all assessment is formative. In this paper, examples of ALOs are presented together with their uses as decided by the Instructional Designer and learner. Personalisation of the formative assessment of skills can be decided by the Instructional Designer or the learner using a repository of pre-designed ALOs. The process of combining ALOs can be carried out manually or in a semi-automated way using metadata that describes the ALO and the skill it is designed to assess.
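The semi-automated combination of ALOs via metadata can be sketched as a simple repository query. This is an illustrative toy with hypothetical metadata fields (the paper does not specify a concrete schema): given the skills an assessment is intended to cover, select the ALOs whose metadata names each skill.

```python
# Hypothetical ALO metadata: each ALO records an id and the skill it assesses.
# Assembling a formative assessment = grouping ALOs by the target skills
# chosen by the Instructional Designer or the learner.

def assemble_assessment(repository, target_skills):
    """Select, per target skill, all ALOs whose metadata names that skill."""
    return {skill: [alo["id"] for alo in repository if alo["skill"] == skill]
            for skill in target_skills}

repository = [
    {"id": "alo-01", "skill": "pointers"},
    {"id": "alo-02", "skill": "loops"},
    {"id": "alo-03", "skill": "pointers"},
]
assessment = assemble_assessment(repository, ["pointers"])
```

In practice a Learning Object repository would use a richer metadata standard and matching criteria, but the grouping step itself has this shape.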
Abstract:
The quality of information provision considerably influences knowledge construction driven by individual users' needs. In the design of information systems for e-learning, personal information requirements should be incorporated to determine a selection of suitable learning content, instructive sequencing for learning content, and effective presentation of learning content. This is considered an important part of instructional design for a personalised information package. The current research reveals a lack of means by which individual users' information requirements can be effectively incorporated to support personal knowledge construction. This paper presents a method which enables an articulation of users' requirements based on rooted learning theories and requirements engineering paradigms. The user's information requirements can be systematically encapsulated in a user profile (i.e. user requirements space) and further transformed into instructional design specifications (i.e. information space). These two spaces allow the discovery of information requirement patterns for self-maintaining and self-adapting personalisation that enhances experience in the knowledge construction process.
Abstract:
This article surveys and analyses democratic electoral reform in Europe since 1945 in order to pursue three issues. First, it seeks understanding of the processes through which electoral systems change. Second, it asks how the incidence of these processes varies over context and time. Third, it investigates whether there are relationships between the nature of the processes through which electoral system change occurs and the electoral reforms that are thereby adopted. The analysis suggests, most importantly, that electoral system changes occur via multiple contrasting processes, that there is a tendency towards increasing impact of mass opinion upon these changes, and that this is beginning to generate a trend towards greater personalisation in the electoral systems adopted. These findings are, however, preliminary; the article is intended to encourage further discussion and research.
Abstract:
The advancement of e-learning technologies has made it viable for developments in education and technology to be combined in order to fulfil educational needs worldwide. E-learning consists of informal learning approaches and emerging technologies to support the delivery of learning skills, materials, collaboration and knowledge sharing. E-learning is a holistic approach that covers a wide range of courses, technologies and infrastructures to provide an effective learning environment. The Learning Management System (LMS) is the core of the entire e-learning process along with technology, content, and services. This paper investigates the role of model-driven personalisation support modalities in providing enhanced levels of learning and trusted assimilation in an e-learning delivery context. We present an analysis of the impact of an integrated learning path that an e-learning system may employ to track activities and evaluate the performance of learners.
Abstract:
Background: Personalised nutrition (PN) may provide major health benefits to consumers. A potential barrier to the uptake of PN is consumers' reluctance to disclose the sensitive information upon which PN is based. This study adopts the privacy calculus to explore how PN service attributes contribute to consumers' privacy risk and personalisation benefit perceptions. Methods: Sixteen focus groups (n = 124) were held in 8 EU countries and discussed 9 PN services that differed in terms of personal information, communication channel, service provider, advice justification, scope, frequency, and customer lock-in. Transcripts were content-analysed. Results: The personal information that underpinned PN contributed to both privacy risk perception and personalisation benefit perception. Disclosing information face-to-face mitigated the perception of privacy risk and amplified the perception of personalisation benefit. PN provided by a qualified expert and justified by scientific evidence increased participants' value perception. Enhancing convenience, offering regular face-to-face support, and employing customer lock-in strategies were perceived as beneficial. Conclusion: This study suggests that to encourage consumer adoption, PN has to account for face-to-face communication, expert advice providers, support, a lifestyle-change focus, and customised offers. The results provide an initial insight into service attributes that influence consumer adoption of PN.
Abstract:
MyGrid is an e-Science Grid project that aims to help biologists and bioinformaticians to perform workflow-based in silico experiments, and to help them automate the management of such workflows through personalisation, notification of change and publication of experiments. In this paper, we describe the architecture of myGrid and how it will be used by the scientist. We then show how myGrid can benefit from agent technologies. We have identified three key uses of agent technologies in myGrid: user agents, able to customise and personalise data; agent communication languages, offering a generic and portable communication medium; and negotiation, allowing multiple distributed entities to reach service level agreements.
Abstract:
During recent years, a considerable number of central nervous system (CNS) drugs have been approved and introduced on the market for the treatment of many psychiatric and neurological disorders, including psychosis, depression, Parkinson disease and epilepsy. Despite the great advancements obtained in the treatment of CNS diseases and disorders, partial response to therapy or treatment failure is frequent, at least in part due to poor compliance, but also to genetic variability in the metabolism of psychotropic agents or to polypharmacy, which may lead to sub-therapeutic or toxic plasma levels of the drugs, and finally to inefficacy of the treatment or adverse/toxic effects. With the aim of improving treatment and reducing toxic/side effects and patient hospitalisation, Therapeutic Drug Monitoring (TDM) is certainly useful, allowing for a personalisation of the therapy. Reliable analytical methods are required to determine the plasma levels of psychotropic drugs, which are often present at low concentrations (tens or hundreds of nanograms per millilitre). The present PhD Thesis has focused on the development of analytical methods for the determination of CNS drugs in biological fluids, including antidepressants (sertraline and duloxetine), antipsychotics (aripiprazole), antiepileptics (vigabatrin and topiramate) and antiparkinsonian agents (pramipexole). Innovative methods based on liquid chromatography or capillary electrophoresis coupled to diode-array or laser-induced fluorescence detectors have been developed, together with suitable sample pre-treatment for interference removal and fluorescent labelling in the case of LIF detection. All methods have been validated according to official guidelines and applied to the analysis of real samples obtained from patients, proving suitable for the TDM of psychotropic drugs.
Abstract:
Great strides have been made in the last few years in the pharmacological treatment of neuropsychiatric disorders, with the introduction into therapy of several new and more efficient agents, which have improved the quality of life of many patients. Despite these advances, a large percentage of patients is still considered "non-responder" to the therapy, drawing no benefit from it. Moreover, these patients have a peculiar therapeutic profile, due to the very frequent application of polypharmacy in the attempt to obtain satisfactory remission of the multiple aspects of psychiatric syndromes. Therapy is heavily individualised, and switching from one therapeutic agent to another is quite frequent. One of the main problems of this situation is the possibility of unwanted or unexpected pharmacological interactions, which can occur both during polypharmacy and during switching. Simultaneous administration of psychiatric drugs can easily lead to interactions if one of the administered compounds influences the metabolism of the others. Impaired CYP450 function due to inhibition of the enzyme is frequent, and other metabolic pathways, such as glucuronidation, can also be influenced. The Therapeutic Drug Monitoring (TDM) of psychotropic drugs is an important tool for treatment personalisation and optimisation. It deals with the determination of the plasma levels of parent drugs and metabolites, in order to monitor them over time and to compare these findings with clinical data. This allows establishing chemical-clinical correlations (such as those between administered dose and therapeutic and side effects), which are essential to obtain the maximum therapeutic efficacy while minimising side and toxic effects. The importance of developing sensitive and selective analytical methods for the determination of the administered drugs and their main metabolites is therefore evident, in order to obtain reliable data that can correctly support clinical decisions. During the three years of the Ph.D. program, analytical methods based on HPLC have been developed, validated and successfully applied to the TDM of psychiatric patients undergoing treatment with drugs belonging to the following classes: antipsychotics, antidepressants and anxiolytics-hypnotics. The biological matrices processed were blood, plasma, serum, saliva, urine, hair and rat brain. Among antipsychotics, both atypical and classical agents have been considered, such as haloperidol, chlorpromazine, clotiapine, loxapine, risperidone (and 9-hydroxyrisperidone), clozapine (as well as N-desmethylclozapine and clozapine N-oxide) and quetiapine. While the need for accurate TDM of schizophrenic patients is being increasingly recognised by psychiatrists, only in the last few years has the same attention been paid to the TDM of depressed patients. This is leading to the acknowledgement that depression pharmacotherapy can greatly benefit from the accurate application of TDM. For this reason, the research activity has also focused on first- and second-generation antidepressant agents, such as tricyclic antidepressants, trazodone and m-chlorophenylpiperazine (m-CPP), paroxetine and its three main metabolites, venlafaxine and its active metabolite, and the most recent antidepressant introduced onto the market, duloxetine. Among anxiolytics-hypnotics, benzodiazepines are very often involved in the pharmacotherapy of depression for the relief of anxious components; for this reason, it is useful to monitor these drugs, especially in cases of polypharmacy. The results obtained during these three years of the Ph.D. program are reliable, and the developed HPLC methods are suitable for the qualitative and quantitative determination of CNS drugs in biological fluids for TDM purposes.
Abstract:
Innovations in hardware and network technologies have led to an exploding number of non-interrelated parallel media streams. In itself, this offers no additional value to consumers. The broadcasting and advertisement industries have not yet found new formats to reach the individual user with their content. In this work we propose and describe a novel digital broadcasting framework, which allows for the live staging of (mass) media events and improved consumer personalisation. In addition, we describe new professions that will emerge in future TV production workflows, namely the 'video composer' and the 'live video conductor'.