874 results for "Problematic internet use"
Abstract:
In recent years, the field of consumer marketing has evolved considerably. Marketers have begun using the Internet to influence consumers, employing original and imaginative tactics that have made it possible to reach a level of interpersonal communication that was previously out of reach. Their technology-mediated interactions with consumers take many different forms, each accompanied by its own assortment of legal issues. First, it is not unusual for marketers to use tools that let them track consumers' actions in the virtual world as well as in the physical world. Personal information gathered in this way is often used for online behavioural advertising, a use that does not always respect the limits of privacy law. It has also become quite common for marketers to use social media to converse with consumers. These forums have likewise served for the commission of anticompetitive acts and for the dissemination of false and misleading advertising, two practices prohibited by both competition law and consumer protection law. Finally, marketers employ various tactics to reach consumers more effectively by making themselves more visible in Internet search engines, some of which are considered dishonest and could raise issues in the areas of competition law and trademark law. This thesis offers a detailed description of the tools used for marketing purposes on the Internet and of the ways they are used. It also illustrates the legal problems that can arise from their use and sets out the legislative framework governing marketers' use of these tools, before finally demonstrating that the laws that come into play in such circumstances can in fact prove economically beneficial to marketers.
Abstract:
This thesis was realised through a scholarship offered by the Government of Canada to the Government of the Republic of Mauritius under the Programme Canadien de Bourses de la Francophonie.
Abstract:
This thesis examines the strategies organizations use to manage rumours on the Internet and on social networks. It is a so-called "ventriloquial" study of the figures of authority staged by organizations and Internet users through their interactions. The objective of this research is thus to study the strategies organizations employ to manage rumours on the Internet and to observe the interactions between an organization and its consumers, in order to understand the relationship organizations have with their consumers, actual or potential, through the figures staged and invoked in their strategies. As our analyses show, organizations stage a multitude of figures of authority to convince their consumers. At the same time, they position themselves as subject to agencies that are contextual to the rumours they face. Likewise, Internet users stage the concerns that drive them. The dialogues between organizations and Internet users reflect different relationships between these two parties. In particular, we show that organizations do not all interact with Internet users in the same way. This analysis is based on data collected from the websites of the organizations studied and from the Facebook and Twitter networks.
Abstract:
The Indian economy has witnessed stellar growth over the last few years, with rapid development on the infrastructure and business fronts. Internet adoption among Indians has been increasing over the last decade, and Indian banks have risen to the occasion by offering new delivery channels to their customers. Internet banking is one such channel now available to Indian customers, and customer acceptance has so far been good. In this study the researcher conducted a qualitative and quantitative investigation of internet banking acceptance among Indians and sought to identify the important factors that affect customers' behavioural intention to use internet banking. The researcher also proposes a research model, extended from the Technology Acceptance Model, for predicting internet banking acceptance. The findings should be useful to Indian banks in planning and upgrading their internet banking services. Banks could increase adoption by making customers aware of the usefulness of the service: the study shows that perceived usefulness has a positive influence on internet banking use, so acceptance should rise when customers find the service more useful, and banks should plan their marketing campaigns with this factor in mind. Marketing communications that raise consumer awareness should therefore result in better acceptance of internet banking. Perceived ease of use also had a positive influence on internet banking use, meaning that customers use internet banking more when they find it easier to use. Banks should accordingly make their internet banking sites and interfaces easier to use, and could consider offering practical training sessions at their branches on using the internet banking interface.
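To make the Technology Acceptance Model relationship concrete, the sketch below regresses behavioural intention on perceived usefulness and perceived ease of use. It is a generic illustration, not the author's model: the sample size, Likert scores, and coefficients are all synthetic.

```python
# Hypothetical sketch of a TAM-style analysis: regress behavioural
# intention (BI) on perceived usefulness (PU) and perceived ease of
# use (PEOU). The survey data here is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                # hypothetical number of respondents
pu = rng.uniform(1, 7, n)              # 7-point Likert scores
peou = rng.uniform(1, 7, n)
bi = 0.5 * pu + 0.3 * peou + rng.normal(0, 0.5, n)  # assumed positive effects

X = np.column_stack([np.ones(n), pu, peou])         # intercept + predictors
beta, *_ = np.linalg.lstsq(X, bi, rcond=None)
print(f"intercept={beta[0]:.2f}, PU={beta[1]:.2f}, PEOU={beta[2]:.2f}")
# Positive coefficients on PU and PEOU would be consistent with the
# study's finding that both factors increase internet banking use.
```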
Abstract:
Extensive use of the Internet, coupled with the tremendous growth in e-commerce and m-commerce, has created a huge demand for information security. The Secure Sockets Layer (SSL) protocol is the most widely used security protocol on the Internet that meets this demand. It provides protection against eavesdropping, tampering, and forgery. The cryptographic algorithms RC4 and HMAC have been used to achieve security services such as confidentiality and authentication in SSL, but recent attacks against RC4 and HMAC have shaken confidence in these algorithms. Hence two novel cryptographic algorithms, MAJE4 and MACJER-320, have been proposed as substitutes for them. The focus of this work is to demonstrate the performance of these new algorithms and to suggest them as dependable alternatives for providing security services in SSL. The performance evaluation has been carried out using a practical implementation method.
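For readers unfamiliar with the authentication side of this, the following minimal sketch shows HMAC, the standard primitive the abstract refers to, using Python's built-in hmac module; the key and message are placeholders.

```python
# Minimal illustration of HMAC-based message authentication, the role
# MACJER-320 is proposed to fill in SSL. Key and message are examples.
import hmac
import hashlib

key = b"shared-secret-key"
message = b"client hello"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares in constant time;
# a mismatch indicates tampering or forgery.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print(tag, ok)
```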
Abstract:
The focus of this work is to provide authentication and confidentiality of messages in a swift and cost-effective manner, to suit fast-growing Internet applications. A nested hash function with low computational and storage demands is designed to provide authentication, and the message as well as the hash code is encrypted using the fast stream cipher MAJE4 with a variable key size of 128 or 256 bits to achieve confidentiality. Both the nested hash function and the MAJE4 stream cipher use primitive computational operators commonly found in microprocessors, which makes the method simple and fast to implement in both hardware and software. Since the memory requirement is low, the method can also be used on handheld devices for security purposes.
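The abstract does not give MAJE4's internals, so the sketch below only illustrates the overall pattern it describes: append a hash code to the message, then encrypt both with a keystream. The hash-counter keystream is a stand-in for MAJE4 and a plain SHA-256 digest stands in for the nested hash; neither is the authors' actual algorithm.

```python
# Sketch of the general pattern: message || hash code, then stream
# encryption of both. The toy keystream below is NOT MAJE4; it is a
# placeholder so the example runs self-contained.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy counter-mode keystream (stand-in for MAJE4)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def protect(key: bytes, message: bytes) -> bytes:
    digest = hashlib.sha256(message).digest()   # hash code for authentication
    plaintext = message + digest                # message || hash code
    ks = keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))  # stream encryption

ciphertext = protect(b"a 128-bit or 256-bit key", b"hello, internet")
print(ciphertext.hex())
```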
Abstract:
Distributed systems are among the most vital components of the economy. The most prominent example is probably the Internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Among others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches to the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods that copy principles from natural evolution: they use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process whose solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process selects the most promising solution candidates step by step and modifies and combines them with mutation and crossover operators. In this way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features that make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed specifically to mitigate these difficulties, and in our experiments at least one of them (eRBGP) turned out to be very efficient and in most cases superior to the other representations.
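The evolutionary loop described above can be sketched generically as follows. The toy below illustrates selection, crossover, and mutation against an objective function; evaluate() is a placeholder for the thesis's randomized network simulations, and nothing here is the actual RBGP/eRBGP system.

```python
# Generic evolutionary-loop sketch: a population of candidate "programs"
# (here just integer gene lists) refined via selection, crossover, and
# mutation. evaluate() stands in for "run the candidate in randomized
# network simulations and rate how closely it matches the goal behavior".
import random

def evaluate(program):                     # placeholder objective function
    return -sum((g - 1) ** 2 for g in program)   # toy target: all genes == 1

def mutate(program):
    i = random.randrange(len(program))
    return program[:i] + [program[i] + random.choice([-1, 1])] + program[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(-5, 5) for _ in range(8)] for _ in range(30)]
for generation in range(100):
    population.sort(key=evaluate, reverse=True)   # select the most promising
    parents = population[:10]
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]
print(evaluate(population[0]), population[0])
```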
Abstract:
Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop through standard methodology, such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
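As an illustration of the staying-in-the-simplex toolkit, the sketch below applies a centred logratio (clr) transform and a singular value decomposition to a few made-up compositions; the paper's own dataset of 209 limestone compositions is not reproduced here.

```python
# Minimal sketch of a compositional singular value decomposition:
# clr-transform the compositions, centre them, then take the SVD.
# The 3-part compositions below are invented for illustration.
import numpy as np

X = np.array([[0.6, 0.3, 0.1],
              [0.5, 0.3, 0.2],
              [0.4, 0.4, 0.2],
              [0.7, 0.2, 0.1]])          # rows sum to 1 (compositions)

clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)  # centred logratio
clr_centred = clr - clr.mean(axis=0)                     # centre over samples

U, s, Vt = np.linalg.svd(clr_centred, full_matrices=False)
print("singular values:", s)            # dominant compositional directions
```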
Abstract:
In human population genetics, routine applications of principal component techniques are often required. Population biologists make widespread use of certain discrete classifications of human samples into haplotypes, the monophyletic units of phylogenetic trees constructed from several hierarchically ordered single nucleotide bimorphisms. Compositional frequencies of the haplotypes are recorded within the different samples. Principal component techniques are then required as a dimension-reducing strategy to bring the dimension of the problem down to a manageable level, say two, to allow for graphical analysis. Population biologists at large are not aware of the special features of compositional data and normally use the crude covariance of compositional relative frequencies to construct principal components. In this short note we present our experience with using traditional linear principal components versus compositional principal components based on logratios, with reference to a specific dataset.
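The contrast the note draws can be sketched as follows: principal components computed from the crude covariance of relative frequencies versus components computed from clr logratios. The haplotype frequencies are invented for illustration.

```python
# Sketch contrasting crude-covariance PCA of relative frequencies with
# compositional (clr logratio) PCA. Data are illustrative, not the
# note's actual haplotype dataset.
import numpy as np

F = np.array([[0.50, 0.30, 0.15, 0.05],
              [0.45, 0.35, 0.15, 0.05],
              [0.40, 0.30, 0.20, 0.10],
              [0.55, 0.25, 0.15, 0.05],
              [0.35, 0.35, 0.20, 0.10]])   # samples x haplotypes, rows sum to 1

def pca_scores(Z, k=2):
    Zc = Z - Z.mean(axis=0)                       # centre the data
    U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:k].T                          # first k component scores

raw_scores = pca_scores(F)                              # crude covariance PCA
clr = np.log(F) - np.log(F).mean(axis=1, keepdims=True)
clr_scores = pca_scores(clr)                            # compositional PCA
print(raw_scores.round(3), clr_scores.round(3), sep="\n")
```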
Abstract:
In 2000 the European Statistical Office published the guidelines for developing the Harmonized European Time Use Surveys system. Under this unified framework, the first Time Use Survey of national scope was conducted in Spain during 2002–03. The aim of these surveys is to understand human behavior and people's lifestyles. Time allocation data are compositional in origin, that is, they are subject to non-negativity and constant-sum constraints, so standard multivariate techniques cannot be applied to them directly. The goal of this work is to identify homogeneous Spanish Autonomous Communities with regard to the typical activity pattern of their respective populations. To this end, a fuzzy clustering approach is followed. Rather than the hard partitioning of classical clustering, where objects are allocated to only a single group, fuzzy methods identify overlapping groups of objects by allowing them to belong to more than one group. Concretely, the probabilistic fuzzy c-means algorithm is conveniently adapted to deal with the Spanish Time Use Survey microdata. As a result, a map distinguishing Autonomous Communities with similar activity patterns is drawn. Key words: time use data; fuzzy clustering; FCM; simplex space; Aitchison distance.
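A minimal sketch of the adaptation described above: because the Aitchison distance between compositions equals the Euclidean distance between their clr transforms, probabilistic fuzzy c-means can simply be run in clr coordinates. The time-use compositions below are simulated, not the Spanish survey microdata.

```python
# Fuzzy c-means in clr space, so that distances are Aitchison distances.
# Compositions (shares of the day spent on 4 activities) are simulated.
import numpy as np

rng = np.random.default_rng(1)
X = rng.dirichlet([4, 3, 2, 1], size=40)                # toy time-use data
Z = np.log(X) - np.log(X).mean(axis=1, keepdims=True)   # clr coordinates

c, m, n = 3, 2.0, len(Z)                    # clusters, fuzzifier, samples
U = rng.dirichlet(np.ones(c), size=n)       # initial memberships (n x c)

for _ in range(100):
    W = U ** m
    centers = (W.T @ Z) / W.sum(axis=0)[:, None]        # weighted centroids
    d = np.linalg.norm(Z[:, None, :] - centers[None], axis=2) + 1e-12
    p = 2 / (m - 1)
    U = (1.0 / d ** p) / (1.0 / d ** p).sum(axis=1, keepdims=True)

print(U.round(2)[:5])   # overlapping memberships rather than hard labels
```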
Abstract:
"Internet for Image Searching" is a free online tutorial to help staff and students in universities and colleges to find digital images for their learning and teaching. The emphasis of the tutorial is on finding copyright cleared images which are available free; facilitating quick, hassle-free access to a vast range of online photographs and other visual resources. "This tutorial is an excellent resource for anyone needing to know more about where and how to find images online. The fact that it concentrates on copyright cleared images will make it even more valuable for busy learning and teaching professionals, researchers and students alike. It will also serve to inspire confidence in those needing to use images from the web in their work." (Sharon Waller of the Higher Education Academy).
Abstract:
This article describes and analyzes the results of an exploratory study on the use of the Internet, more specifically the World Wide Web (WWW), by a group of university students to gather documentation when carrying out academic tasks. The data, obtained through a self-administered questionnaire, reveal these students' limited competence in searching for and handling information found in cyberspace.
Abstract:
The proliferation of Web-based learning objects makes finding and evaluating online resources problematic. While established Learning Analytics methods use Web interaction to evaluate learner engagement, there is uncertainty regarding the appropriateness of these measures. In this paper we propose a method for evaluating pedagogical activity in Web-based comments using a pedagogical framework, and present a preliminary study that assigns a Pedagogical Value (PV) to comments. This has value as it categorises discussion in terms of pedagogical activity rather than Web interaction. Results show that PV is distinct from typical interactional measures; there are negative or insignificant correlations with established Learning Analytics methods, but strong correlations with relevant linguistic indicators of learning, suggesting that the use of pedagogical frameworks may produce more accurate indicators than interaction analysis, and that linguistic rather than interaction analysis has the potential to automatically identify learning behaviour.
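The comparison reported here reduces to correlating scores, which the sketch below illustrates with invented data: a PV score per comment against an interactional measure and against a linguistic indicator.

```python
# Sketch of the reported comparison: correlate a pedagogical value (PV)
# score per comment with an interaction measure and a linguistic
# indicator. All three score vectors are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
pv = rng.uniform(0, 5, 50)                        # hypothetical PV per comment
clicks = rng.poisson(10, 50).astype(float)        # interaction measure
linguistic = pv + rng.normal(0, 0.8, 50)          # indicator tracking PV

print("PV vs interaction:", np.corrcoef(pv, clicks)[0, 1].round(2))
print("PV vs linguistic: ", np.corrcoef(pv, linguistic)[0, 1].round(2))
# A weak first correlation and a strong second one would mirror the
# pattern the study reports.
```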
Abstract:
IBM provides a comprehensive academic initiative (http://www-304.ibm.com/ibm/university/academic/pub/page/academic_initiative) that gives universities free access to a wide range of IBM software. As part of this initiative we are currently offering free IBM Bluemix accounts, either to be used within a course or for students to use for personal skills development. IBM Bluemix provides a comprehensive cloud-based platform-as-a-service solution set, which includes the ability to quickly and easily integrate data from Internet of Things (IoT) devices in order to develop and run productive, user-focused web and mobile applications. If you would be interested in hearing more about IBM and the Internet of Things, or would like to discuss prospective research projects that you feel would work well in this environment, please come along to the seminar!