959 results for PERSONAL NETWORK SIZE
Abstract:
In the field of molecular biology, scientists adopted a reductionist perspective for decades, concerning themselves predominantly with the intricate mechanistic details of subcellular regulatory systems. Integrative thinking had nevertheless been applied on a smaller scale in molecular biology for at least half a century, in order to understand the processes underlying cellular behaviour. It was not until the genomic revolution at the end of the previous century that model building became necessary to account for the systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by severe limitations in our ability to predict cellular behaviour from system dynamics and system structure. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology produces a high volume of data, whose comprehension cannot even be attempted without computational support. Computational modelling therefore bridges modern biology and computer science, providing a number of assets that prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques, and perturbation analysis. Computational biomodels have grown considerably in size in recent years, with major contributions towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating in whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often requires the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology. The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. The model is subsequently refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the other for the ErbB signalling pathway. The thesis spans several formalisms used in computational systems biology that are inherently quantitative: reaction-network models, rule-based models and Petri net models, as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
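As a minimal, generic illustration of the reaction-network formalism and of a data-preserving refinement step (the two-species example, rate constants and initial values are invented for illustration and are not one of the thesis models), a species X degrading with mass-action kinetics can be refined into two variants X1 and X2 whose summed dynamics reproduce the original model:

```python
# Toy sketch of quantitative model refinement for a mass-action reaction
# network: the basic model has one species X degrading at rate k; the
# refined model splits X into two variants X1, X2 (e.g. two modification
# states) whose sum must reproduce the basic model's trajectory.
# Species, rates and initial values are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

k = 0.3                                     # degradation rate constant

def basic(t, y):                            # X -> 0 at rate k*X
    return [-k * y[0]]

def refined(t, y):                          # X1 -> 0 and X2 -> 0, same rate constant
    return [-k * y[0], -k * y[1]]

t_eval = np.linspace(0, 10, 50)
sol_basic = solve_ivp(basic, (0, 10), [1.0], t_eval=t_eval)
sol_ref = solve_ivp(refined, (0, 10), [0.6, 0.4], t_eval=t_eval)  # X1(0)+X2(0) = X(0)

# The refinement is consistent: X1(t) + X2(t) matches X(t) up to solver tolerance.
print(np.allclose(sol_basic.y[0], sol_ref.y.sum(axis=0), atol=1e-3))
```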
Abstract:
Happy emotional states have not been extensively explored in functional magnetic resonance imaging studies using autobiographic recall paradigms. We investigated the brain circuitry engaged during induction of happiness by standardized script-driven autobiographical recall in 11 healthy subjects (6 males), aged 32.4 ± 7.2 years, without physical or psychiatric disorders, selected according to their ability to vividly recall personal experiences. Blood oxygen level-dependent (BOLD) changes were recorded during auditory presentation of personal scripts of happiness, neutral content and negative emotional content (irritability). The same uniform structure was used for the cueing narratives of both emotionally salient and neutral conditions, in order to decrease the variability of findings. In the happiness relative to the neutral condition, there was an increased BOLD signal in the left dorsal prefrontal cortex and anterior insula, thalamus bilaterally, left hypothalamus, left anterior cingulate gyrus, and midportions of the left middle temporal gyrus (P < 0.05, corrected for multiple comparisons). Relative to the irritability condition, the happiness condition showed increased activity in the left insula, thalamus and hypothalamus, and in anterior and midportions of the inferior and middle temporal gyri bilaterally (P < 0.05, corrected), varying in size between 13 and 64 voxels. Findings of happiness-related increased activity in prefrontal and subcortical regions extend the results of previous functional imaging studies of autobiographical recall. The BOLD signal changes identified reflect general aspects of emotional processing, emotional control, and the processing of sensory and bodily signals associated with internally generated feelings of happiness. These results reinforce the notion that happiness induction engages a wide network of brain regions.
Abstract:
Mobile malware is increasing with the growing number of mobile users. Mobile malware can perform several operations that lead to cybersecurity threats, such as stealing financial or personal information, installing malicious applications, sending premium SMS, creating backdoors, keylogging and crypto-ransomware attacks. Despite the fact that many illegitimate applications are available on app stores, most mobile users remain careless about the security of their devices and become potential victims of these threats. Previous studies have shown that not every antivirus is capable of detecting all threats, because mobile malware uses advanced techniques to avoid detection. A network-based IDS on the operator side brings an extra layer of security to subscribers and can detect many advanced threats by analyzing their traffic patterns. Machine learning (ML) gives these systems the ability to detect unknown threats for which signatures are not yet known. This research focuses on the evaluation of machine learning classifiers in network-based intrusion detection systems for mobile networks. In this study, different techniques of network-based intrusion detection are discussed, with their advantages, disadvantages and the state of the art in hybrid solutions. Finally, an ML-based NIDS is proposed that would work as a subsystem of the network-based IDS deployed by mobile operators, helping to detect unknown threats and to reduce false positives. In this research, several ML classifiers were implemented and evaluated. The study focuses on Android-based malware, as Android is the most popular OS among users and hence the most targeted by cyber criminals. Classifiers based on supervised ML algorithms were built using a dataset containing labeled instances of relevant features. These features were extracted from the traffic generated by samples of several malware families and benign applications. The classifiers were able to detect malicious traffic patterns with a TPR of up to 99.6% in cross-validation tests. Several experiments were also conducted to detect unknown malware traffic and to assess false positives; the classifiers detected unknown threats with an accuracy of 97.5%. These classifiers could be integrated with current NIDSs, which use signature-based, statistical or knowledge-based techniques to detect malicious traffic. A technique for integrating the output of the ML classifier with a traditional NIDS is discussed and proposed as future work.
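A rough sketch of the kind of supervised pipeline described above (the model choice, feature file and column names are illustrative assumptions, not taken from the thesis): a table of labeled per-flow traffic features is used to train and cross-validate a classifier, and the TPR is read off the confusion matrix.

```python
# Minimal sketch: train and cross-validate a supervised classifier on labeled
# network-flow features (malicious vs. benign). The CSV file and column names
# are hypothetical placeholders, not the thesis dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

df = pd.read_csv("flow_features.csv")          # hypothetical feature file
X = df.drop(columns=["label"])                 # e.g. packet sizes, durations, byte counts
y = df["label"]                                # 1 = malicious traffic, 0 = benign

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# 10-fold cross-validation; TPR (recall on the malicious class) is the
# metric highlighted in the abstract.
pred = cross_val_predict(clf, X, y, cv=10)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("TPR:", tp / (tp + fn), "FPR:", fp / (fp + tn))
```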
Abstract:
This project examines students from a private school in southwestern Ontario on a 17-day Costa Rica Outward Bound Rainforest multielement course. The study attempted to discover whether voluntary teenage participants could increase their self-perceptions of life effectiveness by participating in a 17-day expedition. A total of 9 students participated in the study. The experimental design was a mixed methods design. Participants filled in the Life Effectiveness Questionnaire (LEQ) at four predesignated times during the study: (a) before the trip commenced, (b) the first day of the trip, (c) the last day of the trip, and (d) 1 month after the trip ended. Fieldnotes and recordings from informal group debriefing sessions were also used to gather information. Data collected in this study were analyzed in a variety of ways by the researcher. Analyses run on the data included the Friedman test, means, medians, and the Wilcoxon matched-pairs test. The questionnaires were analyzed quantitatively, and the fieldnotes were analyzed qualitatively. Nonparametric statistical analysis was used because of the small number of participants. Both sets of data were grouped and discussed according to similarities and differences. The data indicate that voluntary teenage participants experience significant changes over time in the areas of time management, social competency, emotional control, active initiative, and self-confidence. The outcomes of this study illustrate that Outward Bound-type opportunities should be offered to teenagers in Ontario schools as a means to bring about self-development.
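A minimal sketch of the nonparametric repeated-measures analysis described above, using synthetic placeholder scores for the four measurement times (the values are randomly generated for illustration and are not the study data):

```python
# Hypothetical LEQ scores for 9 participants at the four measurement points
# (pre-trip, day 1, last day, 1 month after); generated synthetically.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
scores = rng.normal(loc=[5.0, 5.2, 6.1, 5.9], scale=0.5, size=(9, 4))
pre, day1, last_day, follow_up = scores.T

# Friedman test: do the repeated measurements differ overall?
stat, p = friedmanchisquare(pre, day1, last_day, follow_up)
print("Friedman chi-square =", round(stat, 2), "p =", round(p, 4))

# Post-hoc Wilcoxon matched-pairs test between two time points.
w, p_w = wilcoxon(pre, last_day)
print("Wilcoxon pre vs. last day: W =", w, "p =", round(p_w, 4))
```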
Abstract:
The main focus of this thesis is to evaluate and compare the Hyperball learning algorithm (HBL) to other learning algorithms. In this work HBL is compared to feed-forward artificial neural networks using backpropagation learning, K-nearest neighbour and ID3 algorithms. In order to evaluate the similarity of these algorithms, we carried out three experiments using nine benchmark data sets from the UCI machine learning repository. The first experiment compares HBL to the other algorithms as the sample size of the dataset changes. The second experiment compares HBL to the other algorithms as the dimensionality of the data changes. The last experiment compares HBL to the other algorithms according to the level of agreement with the data target values. In general, our observations showed that, taking classification accuracy as the measure, HBL performs as well as most ANN variants. Additionally, we deduced that HBL's classification accuracy outperforms that of ID3 and K-nearest neighbour on the selected data sets.
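A compact sketch of this style of benchmark comparison on a UCI dataset (HBL itself has no standard library implementation and is therefore omitted; a decision tree with the entropy criterion stands in for ID3, and an MLP for a backpropagation network; these substitutions are assumptions of the sketch):

```python
# Compare 10-fold cross-validated accuracy of reference classifiers on a
# UCI benchmark (iris), in the spirit of the experiments described above.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "ID3-style tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "Backprop ANN": MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: mean 10-fold accuracy = {acc:.3f}")
```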
Abstract:
While service-learning is often said to be beneficial for all those involved—students, community members, higher education institutions, and faculty members—there are relatively few studies of the attraction to, and effect of, service-learning on faculty members. Existing studies have tended to use a survey design, and to be based in the United States. There is a lack of information on faculty experiences with service-learning in Ontario or Canada. This qualitative case study of faculty experiences with service-learning was framed through an Appreciative Inquiry social constructionist approach. The data were drawn from interviews with 18 faculty members who belong to a Food Security Research Network (FSRN) at a university in northern Ontario, reports submitted by the network, and personal observation of a selection of network-related events. This dissertation study revealed how involvement with service-learning created opportunities for faculty learning and growth. The focus on food security and a commitment to the sustainability of local food production was found to be an ongoing attraction to service-learning and a means to engage in and integrate research and teaching on matters of personal and professional importance to these faculty members. The dissertation concludes with a discussion of the FSRN’s model and the perceived value of a themed, transdisciplinary approach to service-learning. This study highlights promising practices for involving faculty in service-learning and, in keeping with an Appreciative Inquiry approach, depicts a view of faculty work at its best.
Abstract:
Thesis by articles. The articles (4) are appended to the thesis as supplementary files.
Abstract:
This thesis studies an approach that integrates scheduling and service network design for rail freight transportation. Rail transportation is organized around a two-level consolidation structure in which the assignment of cars to blocks, and of blocks to services, represents decisions that greatly complicate the management of operations. In this thesis, both consolidation processes and the operating schedule are studied simultaneously. Solving this problem yields a profitable operating plan comprising the blocking policies, the routing and scheduling of trains, as well as train make-up and traffic assignment. To describe the various rail activities at the tactical level, we extend the physical network and build a three-layer space-time network structure in which the time dimension captures the temporal impacts on operations. Moreover, the operations related to trains, blocks and cars are described by different layers. Based on this network structure, we model this rail planning problem as a service network design problem. The proposed model is formulated as a mixed-integer mathematical program, which proves very difficult to solve because of the large size of the instances considered and its intrinsic complexity. Three versions are studied: the simplified model (with direct services only), the complete model (with direct and multi-stop services), and a very large-scale complete model. Several heuristics are developed to obtain good solutions in reasonable computing times. First, a special case with direct services only is analyzed. Exploiting a specific characteristic of the direct-service network design problem, we develop a new tabu search algorithm. A cycle-based neighbourhood is used, built on redistributing the flow circulating on the blocks along cycles taken from the residual network. A slope-scaling algorithm is developed for the complete model, and we propose a new method, called ellipsoidal search, to further improve solution quality. Ellipsoidal search combines the good feasible solutions generated by the slope-scaling algorithm and pools the characteristics of these good solutions to create an elite problem, which is solved exactly with a commercial solver. The heuristic thus takes advantage of the convergence speed of slope scaling and of the solution quality of ellipsoidal search. Numerical tests illustrate the efficiency of the proposed heuristic. Moreover, the algorithm is an interesting alternative for solving the simplified model. Finally, we study the very large-scale complete model. A hybrid heuristic is developed by combining the ideas of the algorithm described above with column generation. We propose a new slope-scaling procedure in which, compared with the previous one, only the approximation of service-related costs is considered. The new slope-scaling approach thus separates the decisions associated with blocks from those associated with services, providing a natural decomposition of the problem.
The numerical results obtained show that the algorithm is able to identify good-quality solutions in a setting aimed at solving real-life instances.
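For readers unfamiliar with slope scaling, the generic idea for fixed-charge design costs can be stated as follows (notation assumed here; this is not the thesis's exact formulation): the exact arc cost c_a x_a + f_a for any positive flow x_a is replaced by a linear cost rho_a x_a, and the slope is updated after each linear subproblem so that the linear cost matches the exact cost at the current flow.

```latex
% Generic slope-scaling update for a fixed-charge arc cost (assumed notation):
% exact cost: c_a x_a + f_a \,\mathbb{1}[x_a > 0], approximated by \rho_a^{t} x_a.
\rho_a^{t+1} =
\begin{cases}
  c_a + \dfrac{f_a}{x_a^{t}} & \text{if } x_a^{t} > 0,\\[1ex]
  \rho_a^{t} & \text{otherwise.}
\end{cases}
```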
Abstract:
Code review is an essential process regardless of a project's maturity; it aims to assess the contribution made by the code submitted by developers. In principle, code review improves the quality of code changes (patches) before they are committed to the project's master repository. In practice, carrying out this process does not rule out the possibility that some bugs go unnoticed. In this document, we present an empirical study investigating code review in a large open-source project. We investigate the relationships between reviewers' inspections and the personal and temporal factors that could affect the quality of such inspections. First, we report a quantitative study in which we use the SZZ algorithm to detect bug-inducing code changes, which we linked to the code review information extracted from the issue tracking system. We found that the reasons why reviewers miss certain bugs are correlated both with their personal characteristics and with the technical properties of the patches under review. Next, we report a qualitative study inviting Mozilla developers to give their opinion on the attributes that make for a well-conducted code review. The results of our survey suggest that developers consider technical aspects (patch size, number of chunks and modules) as well as personal characteristics (experience and review queue) to be factors that strongly influence the quality of code reviews.
Abstract:
Composite Fe3O4–SiO2 materials were prepared by the sol–gel method with tetraethoxysilane and aqueous-based Fe3O4 ferrofluids as precursors. The monoliths obtained were crack-free and showed both optical and magnetic properties. The structural properties were determined by infrared spectroscopy, X-ray diffractometry and transmission electron microscopy. Fe3O4 particles of 20 nm size lie within the pores of the matrix without any strong Si–O–Fe bonding. The well-established silica network provides effective confinement of these nanoparticles. The composites were transparent in the 600–800 nm regime, and the field-dependent magnetization curves suggest that the composite exhibits superparamagnetic characteristics.
Abstract:
A key argument for modeling knowledge in ontologies is the easy re-use and re-engineering of the knowledge. However, beside consistency checking, current ontology engineering tools provide only basic functionalities for analyzing ontologies. Since ontologies can be considered as (labeled, directed) graphs, graph analysis techniques are a suitable answer to this need. Graph analysis has been performed by sociologists for over 60 years and has resulted in the lively research area of Social Network Analysis (SNA). While social network structures in general currently receive high attention in the Semantic Web community, there have been only very few SNA applications so far, and virtually none for analyzing the structure of ontologies. We illustrate in this paper the benefits of applying SNA to ontologies and the Semantic Web, and discuss which research topics arise on the edge between the two areas. In particular, we discuss how different notions of centrality describe the core content and structure of an ontology. From the rather simple notion of degree centrality over betweenness centrality to the more complex eigenvector centrality based on Hermitian matrices, we illustrate the insights these measures provide on two ontologies, which are different in purpose, scope, and size.
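A brief sketch of the centrality measures mentioned above, applied to a toy labeled graph standing in for an ontology (the graph and its labels are invented for illustration, and standard eigenvector centrality is used rather than the Hermitian-matrix variant discussed in the paper):

```python
# Degree, betweenness and eigenvector centrality on a small directed graph
# whose nodes stand in for ontology concepts. The graph is a toy example,
# not one of the ontologies analyzed in the paper.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("Person", "Agent"), ("Organization", "Agent"),
    ("Student", "Person"), ("Professor", "Person"),
    ("Professor", "Organization"),            # e.g. a worksFor relation
])

print("degree:     ", nx.degree_centrality(g))
print("betweenness:", nx.betweenness_centrality(g))
# Eigenvector centrality computed on the undirected view for simplicity.
print("eigenvector:", nx.eigenvector_centrality(g.to_undirected(), max_iter=1000))
```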
Abstract:
Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters. The potential for their use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information more effectively into the processes within disaster management. The type and value of information shared should be assessed, determining the benefits and issues, with credibility and reliability as known concerns. Mapping the tweets in relation to the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK Storms and Floods 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classed into B. Faulkner's Disaster Management Lifecycle framework. Context is provided when observing the analysed tweets against the timeline. This illustrates a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted in each month with the timeline suggests that users tweet more as an event heightens and persists. Furthermore, users generally express greater emotion and urgency in their tweets. This paper concludes that the thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just in the response phase, to potentially improve future policies and activities. Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy. This demonstrates the need for more informative policies. However, SNS lack the incentives required to improve policies, which is exacerbated by the difficulties of creating a policy that is both concise and compliant. Standardization addresses many of these issues, providing benefits for users and SNS, although it is only possible if policies share attributes which can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS and the coverage of forty recommendations made by the UK Information Commissioner's Office. Analysis showed that whilst similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses, but to convey similar information.
The analysis also showed that low similarity in the clauses was largely due to differences in semantics, elaboration and functionality between SNS. Therefore, this paper proposes that the policies of SNS already share attributes, indicating the feasibility of standardization and five recommendations are made to begin facilitating this, based on the findings of the investigation.
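A minimal sketch of the Jaccard similarity coefficient used in the second study, applied to two hypothetical sets of clause labels (the labels are placeholders, not the attributes actually coded in the analysis):

```python
# Jaccard similarity between the sets of attributes (e.g. clause labels or
# covered recommendations) found in two privacy policies. The sets below are
# invented placeholders.
def jaccard(a: set, b: set) -> float:
    """|A intersect B| / |A union B|; defined as 1.0 when both sets are empty."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

policy_a = {"data collection", "third-party sharing", "cookies", "retention"}
policy_b = {"data collection", "cookies", "advertising"}

print(jaccard(policy_a, policy_b))   # 2 shared / 5 total = 0.4
```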
Abstract:
A cross-sectional study was carried out. Three non-cardiologist residents were included and given basic training in echocardiography (22 hours of theory, 65 hours of practice), following the recommendations of the American Society of Echocardiography together with contributions from problem-based learning, in order to develop the necessary technical and diagnostic competencies. A concordance analysis between residents and expert echocardiographers was then performed: 122 hospitalized patients who met the inclusion and exclusion criteria were recruited, and each underwent a conventional echocardiogram performed by the expert and an echocardiographic assessment by the resident, evaluating the acoustic window, contractility, left ventricular function and pericardial effusion. The stated hypothesis was that moderate concordance would be obtained. Results: Inter-observer concordance was moderate for myocardial contractility (kappa 0.57, p = 0.000) and left ventricular systolic function (kappa 0.54, p = 0.000), both falling between 0.40 and 0.60 with high statistical significance; for the quality of the acoustic window (kappa 0.22, p = 0.000) and the presence of pericardial effusion (kappa 0.26, p = 0.000), concordance was low, falling between 0.20 and 0.40. A sensitivity of 90%, a specificity of 67%, a positive predictive value of 80% and a negative predictive value of 85% were established for the diagnosis of left ventricular systolic dysfunction made by the residents.
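A brief sketch of how the agreement and diagnostic-accuracy figures reported above are computed (the rating vectors are synthetic placeholders, not the study data, and the expert reading is treated as the reference standard):

```python
# Cohen's kappa for inter-observer agreement, plus sensitivity, specificity
# and predictive values for a binary diagnosis. Ratings below are synthetic.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(1)
expert = rng.integers(0, 2, size=122)             # 1 = systolic dysfunction (reference)
agree = rng.random(122) < 0.85                    # resident agrees ~85% of the time
resident = np.where(agree, expert, 1 - expert)

print("kappa:", round(cohen_kappa_score(expert, resident), 2))

tn, fp, fn, tp = confusion_matrix(expert, resident).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp), "NPV:", tn / (tn + fn))
```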
Abstract:
In this project we analyze how organizations relate to their environment and to marketing. The aim is to determine which methods exist for analyzing customer communities through strategic community relations and marketing. Through marketing, an organization can understand its environment and determine which analysis methods to use to get to know its community of customers. Marketing staff keep track of everything that happens in the environment, staying alert to opportunities that may benefit the organization and, conversely, to threats it must guard against. Depending on the environment, the organization designs its marketing activities to satisfy consumer needs. These activities are conceptualized in terms of product, price, promotion and place, which are defined and designed according to the community in which the organization is immersed. It is important to gather reliable information about the target group to which the product or service will be offered, since these people must be analyzed and understood in order to design a good offering that satisfies their needs and desires. The person who receives the product or service from the organization is the customer. Customers are the people who come to an organization seeking to satisfy needs through the goods and services that companies offer. It is essential to recognize that customers live in community: they share ideas through their close communication and live together under the same customs. Because of this, consumers today cluster into customer communities, and to reach these customers they must be analyzed using a variety of methods. The use of community strategies is necessary because, through marketing, the environment is analyzed and methods are sought to analyze the community of customers, who share characteristics and are analyzed as a group rather than as individuals. It is necessary to identify methods for relating to the customer community in order to approach these customers, know them well, understand their needs and desires, and offer them products and services accordingly. At present these methods are neither common nor well known, which is why our purpose is to explore and identify them in order to know how to analyze communities. This project uses a theoretical-conceptual study methodology, seeking the information sources needed to carry out our research. We plan to work with the Grupo de Investigación en Perdurabilidad Empresarial, and the management line was chosen because it allows us to enter the knowledge society and to identify managerial opportunities in the environment. It is worthwhile to research these methods, since customers expect excellent, attentive service that cares about them and their needs.
Abstract:
We study competition in two-sided markets with a common network externality rather than with the standard inter-group effects. This type of externality occurs when both groups benefit, possibly with different intensities, from an increase in the size of one group and from a decrease in the size of the other. We explain why common externality is relevant for the health and education sectors. We focus on the symmetric equilibrium and show that when the externality itself satisfies a homogeneity condition, then platforms' profits and price structure have some specific properties. Our results reveal how the rents coming from network externalities are shifted by platforms from one side to the other, according to the homogeneity degree. In the specific but realistic case where the common network externality is homogeneous of degree zero, platforms' profits do not depend on the intensity of the (common) network externality. This is in sharp contrast to conventional results stating that the presence of network externalities in a two-sided market structure increases the intensity of competition when the externality is positive (and decreases it when the externality is negative). Prices are affected, but in such a way that platforms only transfer rents from consumers to providers.
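As a hedged clarification of the homogeneity condition invoked above (the notation is assumed here, not taken from the paper): writing the common externality as a function of the two group sizes, homogeneity of degree k means that rescaling both group sizes by the same factor rescales the externality by that factor to the power k.

```latex
% Homogeneity of degree k for the common network externality, written as a
% function \varphi of the two group sizes n_1 and n_2 (notation assumed):
\varphi(\lambda n_1, \lambda n_2) = \lambda^{k}\, \varphi(n_1, n_2)
  \qquad \text{for all } \lambda > 0 .
% The degree-zero case (k = 0) leaves \varphi unchanged when both sides are
% rescaled; this is the case in which the abstract reports that platform
% profits do not depend on the externality's intensity.
```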