857 results for Internet (Computer networks)
Abstract:
Avian pathogenic Escherichia coli (APEC) is responsible for various pathological processes in birds and is considered one of the principal causes of morbidity and mortality, with associated economic losses to the poultry industry. The objective of this study was to demonstrate that the antimicrobial resistance of 256 APEC samples can be predicted from 38 different virulence-factor genes using an artificial neural network (ANN) program. A second aim was to examine the relationship between the pathogenicity index (PI) and resistance to 14 antibiotics by statistical analysis. The results showed that the ANNs correctly classified the behavior of the APEC samples at rates ranging from 74.22% to 98.44%, making it possible to predict antimicrobial resistance. The statistical analysis of the relationship between the pathogenicity index (PI) and resistance to the 14 antibiotics showed that these variables are independent, i.e. the PI can peak without any change in antimicrobial resistance, and conversely the antimicrobial resistance can change without any change in PI.
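As a rough sketch of the kind of classifier described above (not the study's actual model; the data, labels and network size below are entirely hypothetical placeholders), a small feed-forward ANN can be trained to map binary virulence-gene profiles to a resistance label:

```python
# Hypothetical sketch: predict resistance to one antibiotic from a binary
# virulence-gene profile with a small feed-forward ANN (placeholder data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_genes = 256, 38                          # mirrors the study's dimensions
X = rng.integers(0, 2, size=(n_samples, n_genes))     # 1 = virulence gene present
y = rng.integers(0, 2, size=n_samples)                # 1 = resistant (made-up labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out correct-classification rate: {clf.score(X_test, y_test):.2%}")
```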
Abstract:
The overall goal of the study was to describe nurses' acceptance of an Internet-based support system in the care of adolescents with depression. The data were collected in four phases during the period 2006–2010 from nurses working in adolescent psychiatric outpatient clinics and from professionals working with adolescents in basic public services. In the first phase, the nurses' anticipated perceptions of the usefulness of the Internet-based support system before its implementation were explored. In the second phase, the nurses' perceived ease of computer and Internet use and their attitudes toward it were explored. In the third phase, the features of the support system and its implementation process were described. In the fourth phase, the nurses' experiences of behavioural intention and actual use of the Internet-based support system in psychiatric outpatient care after one year of use were described. The Technology Acceptance Model (TAM) was used to structure the various research phases. Several benefits of using the Internet-based support system in the care of adolescents with depression were identified from the nurses' perspective. The nurses' technology skills were good and their attitudes towards computer use were positive. The support system was developed in various phases to meet the adolescents' needs. Before the implementation of an information technology (IT)-based support system, it is important to pay attention to the nurses' IT training, technology support, resources, and safety, as well as to ethical issues related to the support system. After one year of using the system, the nurses perceived the Internet-based support system to be useful in the care of adolescents with depression. The adolescents' independent work with the support system at home and the program's systematic character were experienced as beneficial to the treatment. However, the Internet-based support system was only partly integrated into the nurse-adolescent interaction, even though the nurses' perceptions of it were positive. The use of the IT-based system as part of the adolescents' depression care was viewed positively and its benefits were recognized, which serves as a good basis for future IT-based approaches. Successful implementation of IT-based support systems requires a systematic implementation plan and commitment on the part of the organization and its managers. Supporting and evaluating the implementation of an IT-based system should include attention to changes in the nurses' work styles. Health care organizations should in future be offered more flexible opportunities to use IT-based systems in direct patient care.
Abstract:
Although there is a consensus in the literature on the many uses of the Internet in education, as well as on the unique features of the Internet for presenting facts and information, there is no consensus on a standardized method for evaluating Internet-based courseware. Educators rarely have the opportunity to participate in the development of Internet-based courseware, yet they are encouraged to use the technology in their learning environments. This creates a need for summative evaluation methods for Internet-based health courseware. The purpose of this study was to assess evaluative measures for Internet-based courseware. Specifically, two entities were evaluated within the study: (a) the outcome of the Internet-based courseware, and (b) the Internet-based courseware itself. To this end, the Web site www.bodymatters.com was evaluated using two different approaches by two different cohorts. The first approach was a performance appraisal by a group of end-users. A positive, statistically significant change in the students' performance was observed due to the intervention of the Web site. The second approach was a product-oriented evaluation of the Web site using a criterion-based checklist and an open-ended comments section. The findings indicate that a summative, criterion-based evaluation is best completed by a multidisciplinary team. The findings also indicated that the two cohorts reported different product-oriented appraisals of the Web site. The current research confirmed previous research finding that a poor expert evaluation of a Web site bore no relationship to whether or not end-users' performance improved due to the intervention of the Web site.
Abstract:
This study had three purposes related to the effective implementation and practice of computer-mediated online distance education (C-MODE) at the elementary level: (a) to identify a preliminary framework of criteria or guidelines for effective implementation and practice, (b) to identify areas of C-MODE for which criteria or guidelines of effectiveness have not yet been developed, and (c) to develop an implementation and practice criteria questionnaire based on a review of the distance education literature, and to use the questionnaire in an exploratory survey of elementary C-MODE practitioners. Using the survey instrument, the beliefs and attitudes of 16 elementary C-MODE practitioners about what constitutes effective implementation and practice principles were investigated. Respondents, who included both administrators and instructors, provided information about themselves and the program in which they worked. They rated 101 individual criteria statements on a 5-point Likert scale with the values 1 (Strongly Disagree), 2 (Disagree), 3 (Neutral or Undecided), 4 (Agree), and 5 (Strongly Agree). Respondents also provided qualitative data by commenting on the individual statements or suggesting other statements they considered important. Eighty-two different statements or guidelines related to the successful implementation and practice of computer-mediated online education at the elementary level were endorsed. Responses to a small number of statements differed significantly by gender and years of experience. A new area for investigation, namely the role of parents, which has received little attention in the online distance education literature, emerged from the findings. The study also identified a number of other areas within an elementary context where additional research is necessary. These included: (a) differences in the factors that determine learning in a distance education setting and in traditional settings, (b) elementary students' ability to function in an online setting, (c) the role and workload of instructors, (d) the importance of effective, timely communication with students and parents, and (e) the use of a variety of media.
Abstract:
The current study explored why some novices are more successful than their peers when learning from the Internet, by examining the relations among time spent with relevant information, changes in invested mental effort during Internet navigation, and achievement. Navigation behaviours and learner characteristics were investigated as predictors of time spent with relevant information and of changes in mental effort. Undergraduates (N = 85, mean age = 20 years, 5 months) searched the Internet for information on a low-knowledge topic for 20 min while their eye gaze and pupil size were recorded. Pupil diameter was used as an objective, continuous measure of mental effort. Participants also completed questionnaires or computer tasks pertaining to self-regulated learning characteristics (general intrinsic goal orientation and effort regulation) and cognitive factors (working memory control, distractibility and cognitive style). All analyses controlled for general mental ability, reading comprehension, topic and Internet knowledge, and overall motivation. A greater proportion of time spent with relevant information predicted higher scores on an achievement test. Interestingly, time spent with relevant information partially mediated the positive relation between the frequency of increases in invested mental effort and achievement. Surprisingly, intrinsic goal orientation was negatively related to time spent with relevant information, and effort regulation was negatively related to the frequency of increases in invested mental effort. These findings have implications for supporting novices as they guide their own learning, especially when using the Internet.
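A minimal sketch of the mediation logic reported above (simulated placeholder variables, not the study's data; the coefficients are arbitrary) compares the effect of effort increases on achievement with and without the mediator, time spent with relevant information, in the model:

```python
# Hypothetical sketch of a simple mediation check (Baron & Kenny style),
# using simulated stand-ins for the study's variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 85
effort_increases = rng.normal(size=n)                        # frequency of increases in mental effort
time_relevant = 0.5 * effort_increases + rng.normal(size=n)  # time with relevant information (mediator)
achievement = 0.4 * time_relevant + 0.2 * effort_increases + rng.normal(size=n)

# Total effect of effort increases on achievement
total = sm.OLS(achievement, sm.add_constant(effort_increases)).fit()
# Direct effect, controlling for the mediator
direct = sm.OLS(achievement, sm.add_constant(np.column_stack([effort_increases, time_relevant]))).fit()

print("total effect:", round(total.params[1], 3))
print("direct effect with mediator included:", round(direct.params[1], 3))
# A noticeably smaller direct effect is consistent with partial mediation.
```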
Abstract:
Complex networks can arise naturally and spontaneously from all things that act as part of a larger system. From the patterns of socialization between people to the way biological systems organize themselves, complex networks are ubiquitous, but they are currently poorly understood. A number of human-designed algorithms have been proposed to describe the organizational behaviour of real-world networks, and breakthroughs in genetics, medicine, epidemiology, neuroscience, telecommunications and the social sciences have recently resulted. These algorithms, called graph models, represent significant human effort: deriving accurate graph models is non-trivial, time-intensive and challenging, and may only yield useful results for very specific phenomena. An automated approach can greatly reduce the human effort required and, if effective, provide a valuable tool for understanding the large decentralized systems of interrelated things around us. To the best of the author's knowledge, this thesis proposes the first method for the automatic inference of graph models for complex networks with varied properties, with and without community structure; it is also, to the best of the author's knowledge, the first application of genetic programming to the automatic inference of graph models. The system and methodology were tested against benchmark data and shown to be capable of reproducing close approximations to well-known algorithms designed by humans. Furthermore, when used to infer a model for real biological data, the resulting model was more representative than models currently used in the literature.
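A highly reduced sketch of the core idea, scoring candidate graph models by how closely the networks they generate match the statistics of a target network, is given below (the statistics, candidate generators and target are placeholders; the thesis evolves generative programs with genetic programming rather than choosing among fixed generators):

```python
# Minimal sketch: score candidate graph generators against a target network
# by comparing simple structural statistics (a stand-in for a GP fitness function).
import networkx as nx

def stats(g):
    degrees = [d for _, d in g.degree()]
    return (sum(degrees) / len(degrees), nx.average_clustering(g))

def fitness(candidate_graph, target_graph):
    cs, ts = stats(candidate_graph), stats(target_graph)
    return sum(abs(c - t) for c, t in zip(cs, ts))   # lower is better

target = nx.watts_strogatz_graph(200, 6, 0.1, seed=0)   # hypothetical "real" network

candidates = {
    "erdos_renyi": nx.gnp_random_graph(200, 0.03, seed=1),
    "barabasi_albert": nx.barabasi_albert_graph(200, 3, seed=1),
    "watts_strogatz": nx.watts_strogatz_graph(200, 6, 0.1, seed=1),
}
for name, g in candidates.items():
    print(name, round(fitness(g, target), 3))
```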
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is limiting diffusive processes spreading over the network, for example mitigating the spread of a pandemic disease or a computer virus. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large-scale instances impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Because of the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing a vertex sequentially impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
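The objective that the critical node detection problem minimizes, the pairwise connectivity of the residual network, can be sketched as follows (hypothetical example graph and removal set; this is only the metric, not the DFSH heuristic itself):

```python
# Sketch: pairwise connectivity of the residual graph after removing a node set.
# Pairwise connectivity = number of node pairs that remain connected,
# i.e. the sum over connected components of |C| * (|C| - 1) / 2.
import networkx as nx

def pairwise_connectivity(graph, removed):
    residual = graph.copy()
    residual.remove_nodes_from(removed)
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(residual))

g = nx.karate_club_graph()                       # small example network
print("before removal:", pairwise_connectivity(g, []))
print("after removing hubs 0 and 33:", pairwise_connectivity(g, [0, 33]))
```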
Object-Oriented Genetic Programming for the Automatic Inference of Graph Models for Complex Networks
Abstract:
Complex networks are systems of entities that are interconnected through meaningful relationships. The relations between entities form a structure whose statistical complexity is not the product of random chance. In the study of complex networks, many graph models have been proposed to model the behaviours observed. However, constructing graph models manually is tedious and problematic, and many of the models proposed in the literature have been cited as having inaccuracies with respect to the complex networks they represent. Recently, an approach that automates the inference of graph models was proposed by Bailey [10]. That methodology employs genetic programming (GP) to produce graph models that approximate various properties of an exemplary graph of a targeted complex network. However, a great deal is already known about complex networks in general, and specific knowledge is often held about the network being modelled. Such knowledge, albeit incomplete, is important in constructing a graph model, yet it is difficult to incorporate using existing GP techniques. This thesis therefore proposes a novel GP system that can incorporate incomplete expert knowledge to assist in the evolution of a graph model. Inspired by existing graph models, an abstract graph model was developed to serve as an embryo for inferring graph models of some complex networks. The GP system and abstract model were used to reproduce well-known graph models, and the results indicated that the system was able to evolve models that produced networks with structural similarities to the networks generated by the respective target models.
Abstract:
The KCube interconnection network was first introduced in 2010 in order to exploit the good characteristics of two well-known interconnection networks, the hypercube and the Kautz graph. KCube links multiple processors in a communication network with high density for a fixed degree. Since the KCube network is newly proposed, much study is required to establish its properties and to design algorithms that solve parallel computation problems on it. In this thesis we introduce a new methodology for constructing the KCube graph. With regard to this new approach, we prove the Hamiltonicity of the general KC(m; k). Moreover, we determine its connectivity and then give an optimal broadcasting scheme, in which a source node containing a message communicates it to all other processors. In addition to KCube networks, we study a version of the routing problem in the traditional hypercube, investigating whether there exists a shortest path in a Q_n between the two nodes 0^n and 1^n when the network is experiencing failed components. We first discuss this problem conditionally, when there is a constraint on the number of faulty nodes, and subsequently introduce an algorithm to tackle the problem without restrictions on the number of faulty nodes.
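As an illustration of the hypercube routing question mentioned above, a plain breadth-first search over Q_n that avoids faulty vertices finds a shortest 0^n-to-1^n path when one exists (a generic sketch, not the thesis's conditional algorithm; the dimension and fault set are made up):

```python
# Sketch: find a shortest path between 0^n and 1^n in the hypercube Q_n
# that avoids a given set of faulty nodes, using breadth-first search.
from collections import deque

def shortest_path_avoiding_faults(n, faulty):
    source, target = 0, (1 << n) - 1          # 0^n and 1^n as bit strings
    if source in faulty or target in faulty:
        return None
    parents = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:                    # reconstruct the path back to 0^n
            path = []
            while node is not None:
                path.append(format(node, f"0{n}b"))
                node = parents[node]
            return path[::-1]
        for i in range(n):                    # flip each bit to reach a neighbour
            neighbour = node ^ (1 << i)
            if neighbour not in parents and neighbour not in faulty:
                parents[neighbour] = node
                queue.append(neighbour)
    return None                               # no fault-free path exists

print(shortest_path_avoiding_faults(4, faulty={0b0001, 0b0111}))
```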
Abstract:
A French-language abstract is also available.
Abstract:
"Mémoire présenté à la Faculté des études supérieures En vue de l'obtention du grade de LL.M. Dans le programme de maîtrise en droit"
Abstract:
In computational neuroscience, it has been hypothesized that the visual system, from the retina up to at least the primary visual cortex, continually fits a probabilistic latent-variable model to its stream of perceptions. Neither the exact model nor the exact fitting method is known, but existing algorithms for fitting such models require a conditional estimate of the latent variables. This may help us understand why the visual system would fit such a model: if the model is appropriate, these conditional estimates can also form an excellent representation for analyzing the semantic content of perceived images. The work presented here uses image-classification performance (discriminating between common object types) as a basis for comparing models of the visual system and algorithms for fitting those models (viewed as probability densities) to images. This thesis (a) shows that models based on the complex cells of visual area V1 generalize better from labelled training examples than conventional neural networks whose hidden units more closely resemble V1 simple cells; (b) presents a new interpretation of complex-cell-based models of the visual system as probability distributions, together with new algorithms for fitting them to data; and (c) shows that these models form representations that are better for image classification after having been trained as probability models. Two additional technical innovations that made this work possible are also described: a random search algorithm for selecting hyperparameters, and a compiler for matrix-valued mathematical expressions that can optimize them for central (CPU) and graphics (GPU) processors.
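One of the two auxiliary tools mentioned, random search over hyperparameters, can be sketched roughly as follows (the model and search space are placeholders and not those used in the thesis):

```python
# Sketch: random search over hyperparameters, scored by validation accuracy.
# The classifier and search space here are placeholders.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

best_score, best_params = -1.0, None
for _ in range(10):                                   # sample 10 random configurations
    params = {
        "hidden_layer_sizes": (int(rng.integers(16, 256)),),
        "alpha": float(10 ** rng.uniform(-6, -2)),    # log-uniform L2 penalty
        "learning_rate_init": float(10 ** rng.uniform(-4, -1)),
    }
    model = MLPClassifier(max_iter=300, random_state=0, **params).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, best_params = score, params

print("best validation accuracy:", round(best_score, 3))
print("best hyperparameters:", best_params)
```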
Abstract:
With the rising popularity of the Internet and social media, more and more organizations, notably social and public ones, are adding Web platforms to their traditional channels. The Internet nevertheless remains little studied with respect to social advertising. This thesis therefore examines the Web in relation to social campaigns aimed at young Quebecers aged 18 to 25, a population particularly receptive to new technologies. More precisely, in this study we analyzed three Web sites attached to social campaigns (La vitesse, ça coûte cher by the SAAQ, Les ITSS se propagent by the MSSS, and 50 000 adeptes, 5 000 toutous by the Fondation CHU Sainte-Justine) in order to identify their strengths and weaknesses and then to propose avenues for their optimization. Through a critical content analysis followed by interviews and individual observations with 19 participants, we arrived at suggestions for optimizing Web sites for social campaigns aimed at young Quebec adults. One of the greatest difficulties in designing such sites is choosing the most appropriate strategies to bring about a change in attitude or behaviour, all the more so among those who engage in risky behaviours (smoking, drunk driving, unprotected sex); to be more effective, these strategies should be adapted to the characteristics of the target audiences and of the media used. To analyze the social campaigns adequately, we drew on persuasion theories and theories of media influence judged relevant in our context, since they are specific to this type of study. These combined approaches allowed us to integrate into the analysis of a given campaign the contexts that surround it and the practices in which it is embedded. Among other things, this study demonstrated that significant gaps exist between users' expectations and needs and what the Web sites studied offer.
Abstract:
In a context where computer viruses pose a serious risk to networks around the world, it is imperative to hold liable the companies that do not maintain adequate security. To date, Quebec courts have not yet heard cases involving liability for computer viruses. This article gives a general overview of liability for computer viruses under the general principles of civil liability in force in Quebec. The author proposes approaches for interpreting the three traditional criteria, fault, damage and causation, with emphasis on the duty of care that rests on the shoulders of the network administrator. This key player could benefit from the adoption of general provisions limiting his or her liability. In addition, manufacturers and distributors may also share part of the liability in proportion to the seriousness of their fault. Companies have a legal duty to ensure that their systems are secure in order to protect the interests of their clients and of third parties.
Abstract:
In the physical world, a person's identity is clearly circumscribed by civil status and fully protected by the domestic law of each country, whereas in cyberspace the contours of the notion are rather blurred, even uncertain. The development of electronic commerce and the growth of online transactions have given rise to the "crime" of identity theft. If identity theft has been able to emerge, it is because of the specific nature of the medium, which has proved to be fertile ground for the abuses of identity thieves. This article examines and analyzes fraud, theft and swindling as economic offences committed in cyberspace by means of computer systems. It notes the obsolescence and ineffectiveness of the offences provided for in Canadian criminal law for criminalizing identity theft, and proposes a solution based on regulatory, legislative and technical approaches.