884 results for Network-based positioning
Abstract:
Speaker(s): Prof. Steffen Staab
Organiser: Dr Tim Chown
Time: 23/05/2014 10:30-11:30
Location: B53/4025
Abstract: The Web is constructed from our experiences in a multitude of modalities: text, networks, images and physical locations are some examples. Understanding the Web requires that we model these modalities as they appear on the Web. In this talk I will show some examples of how we model text, hyperlink networks and physical-social systems in order to improve our understanding and our use of the Web.
Abstract:
Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters, and their potential for use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information more effectively into disaster-management processes. The type and value of the information shared should be assessed to determine its benefits and issues, with credibility and reliability as known concerns. Mapping tweets onto the modelled stages of a disaster can be a useful way of evaluating the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK storms and floods of 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweet, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classified into B. Faulkner's Disaster Management Lifecycle framework. Observing the analysed tweets against the timeline provides context, illustrating a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted in each month with the timeline suggests that users tweet more as an event heightens and persists; furthermore, users generally express greater emotion and urgency in their tweets. This paper concludes that thematic analysis of content on social networks, such as Twitter, can be useful for gaining additional perspectives for disaster management.
It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just the response phase, potentially improving future policies and activities. Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy, which demonstrates the need for more informative policies. However, SNS lack the incentives required to improve their policies, a problem exacerbated by the difficulty of creating a policy that is both concise and compliant. Standardization addresses many of these issues and would benefit both users and SNS, although it is only possible if policies share attributes that can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS and the coverage of forty recommendations made by the UK Information Commissioner's Office. Analysis showed that whilst similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses but convey similar information. The analysis also showed that the low similarity in clauses was largely due to differences in semantics, elaboration and functionality between SNS. Therefore, this paper proposes that the policies of SNS already share attributes, indicating the feasibility of standardization, and five recommendations are made, based on the findings of the investigation, to begin facilitating it.
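The Jaccard similarity coefficient used in Abstract 2 is simple to state: the size of the intersection of two attribute sets divided by the size of their union. A minimal sketch (the policy attribute sets shown are hypothetical illustrations, not drawn from the policies actually studied):

```python
def jaccard(a, b):
    """Jaccard similarity coefficient of two sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets are treated as identical
    return len(a & b) / len(a | b)

# Hypothetical attribute sets extracted from two privacy policies.
policy_1 = {"data collection", "third-party sharing", "cookies"}
policy_2 = {"data collection", "cookies", "retention"}
print(jaccard(policy_1, policy_2))  # 2 shared / 4 total -> 0.5
```

A coefficient near 1 means the two policies cover nearly the same attributes; near 0, almost none in common.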
Abstract:
A large number of companies currently operate in well-known market spaces saturated with competitors. Innovation is one of the main alternatives available to companies for finding their strategic positioning and adapting to changes in the environment (Kim & Mauborgne, 2005). Likewise, Demirci (2013) argues that culture is a key factor in innovation, since it is strongly associated with organizational values, attitudes, behaviors and practices. This research studies organizational culture and innovation within the framework of inter-organizational cooperation strategies, positing that the degree of cooperation between companies has an effect on the cultural values and the adoption of innovations in each organization. To this end, a quantitative study with a descriptive, non-experimental, cross-sectional design was carried out, whose unit of analysis comprised 20 companies in the ParqueSoft Manizales network. The innovation variables were measured with an instrument based on the Oslo Manual of the OECD and Eurostat (2005), which covers product, process, marketing and organizational innovation. Cultural values were measured through a questionnaire inspired by Hofstede's (1980) model. The results show a degree of relationship between cooperation and the cultural values 'power distance' and 'uncertainty tolerance', but no relationship could be established with the generation of product, process, marketing or organizational innovation, nor with the other dimensions of Hofstede's values model.
Abstract:
Stephen Downes, a researcher at Canada's National Research Council, presents his personal vision of education and open resources. The main topics of his presentation are: Free and Open Source Software, Open Knowledge, Education and Technology.
Abstract:
This paper shows how instructors can use the problem-based learning method to introduce producer theory and market structure in intermediate microeconomics courses. The paper proposes a framework in which different decision problems are presented to students, who are asked to imagine that they are the managers of a firm and need to solve a problem in a particular business setting. In this setting, the instructors' role is to provide both guidance to facilitate student learning and content knowledge on a just-in-time basis.
Abstract:
In this thesis we propose two network schemes with admission control for elastic TCP traffic, using simple mechanisms. Both schemes are able to provide different throughputs and isolation between flows, where a "flow" is defined as a sequence of related packets within a TCP connection. Architecturally, both use packet classes with different discard priorities, together with an implicit, edge-to-edge, measurement-based admission control. In the first scheme the measurements are per flow, while in the second they are per aggregate. The first scheme achieves good performance using a special modification of the TCP sources, whereas the second achieves good performance with standard TCP sources. Both schemes have been successfully evaluated through simulation on different network topologies and traffic loads.
Abstract:
In IP/MPLS-over-WDM networks, which carry large amounts of information, the ability to guarantee that traffic reaches its destination node has become an important problem, since the failure of a single network element can result in a large amount of lost information. To guarantee that traffic affected by a failure reaches its destination node, new routing algorithms have been defined that incorporate knowledge of the protection available at both layers: the optical layer (WDM) and the packet-based layer (IP/MPLS). This avoids reserving resources to protect the traffic at both layers at once. The new algorithms make better use of network resources, offer fast recovery times, avoid resource duplication, and reduce the number of optical-to-electrical signal conversions.
Abstract:
This dissertation focuses on the problem of providing mechanisms for routing point-to-point and multipoint connections in ATM networks. In general, the notion of a multipoint connection refers to connections that involve a group of users with more than two members. The main objective of this dissertation is to contribute to the design of efficient routing protocols with alternative routes in fully connected VP-based ATM networks for the establishment of point-to-point and multipoint VC connections. An efficient route should be computed during this connection establishment phase.
Abstract:
Network management is a very broad field covering many different aspects. This doctoral thesis focuses on resource management in broadband networks that provide mechanisms for resource reservation, such as Asynchronous Transfer Mode (ATM) or Multi-Protocol Label Switching (MPLS). Logical networks can be established using ATM Virtual Paths (VPs) or MPLS Label Switched Paths (LSPs), which we generically call logical paths. Network users then use these logical paths, which may have assigned resources, to establish their communications. Moreover, logical paths are very flexible and their characteristics can be changed dynamically. This work focuses, in particular, on the dynamic management of this logical network in order to maximize its performance and adapt it to the offered connections. In this scenario, several mechanisms can affect and modify the characteristics of the logical paths (bandwidth, route, etc.). These include load-balancing mechanisms (bandwidth reallocation and rerouting) and failure-restoration mechanisms (the use of backup logical paths). Both can modify the logical network and manage the resources (bandwidth) of the physical links, so there is a need to coordinate them in order to avoid interference. Conventional resource management of the logical network periodically recomputes (for example, every hour or every day) the entire logical network in a centralized fashion. This has the drawback that logical-network readjustments do not take place at the moment problems actually arise, and it also requires maintaining a centralized view of the whole network. In this thesis, a distributed architecture based on a multi-agent system is proposed.
The main goal of this architecture is to perform resource management at the logical-network level jointly and in a coordinated way, integrating the bandwidth-readjustment mechanisms with the pre-planned restoration mechanisms, including management of the bandwidth reserved for restoration. This management is carried out continuously rather than periodically, acting when a problem is detected (when a logical path is congested, i.e. when it is rejecting user connection requests because it is saturated), and in a completely distributed fashion, i.e. without maintaining a global view of the network. The proposed architecture thus makes small readjustments to the logical network, continuously adapting it to user demand. The architecture also takes other objectives into account, such as scalability, modularity, robustness, flexibility and simplicity. The proposed multi-agent system is structured in two layers of agents: monitoring (M) agents and performance (P) agents. These agents are located in the different network nodes: there is one P agent and several M agents at each node, the latter subordinate to the P agent, so the proposed architecture can be seen as a hierarchy of agents. Each agent is responsible for monitoring and controlling the resources to which it is assigned. Several experiments have been carried out using a connection-level distributed simulator of our own design. The results show that the proposed architecture is able to perform its assigned tasks of congestion detection, dynamic bandwidth reallocation and rerouting in coordination with the mechanisms for pre-planned restoration and for management of the bandwidth reserved for restoration. The distributed architecture offers acceptable scalability and robustness thanks to its flexibility and modularity.
Abstract:
The field site network (FSN) plays a central role in conducting joint research within all Assessing Large-scale Risks for biodiversity with tested Methods (ALARM) modules and provides a mechanism for integrating research on different topics in ALARM at the same site, allowing multiple impacts on biodiversity to be measured. The network covers most European climates and biogeographic regions, from Mediterranean through central European and boreal to subarctic. The project links databases with the European-wide field site network FSN, including geographic information system (GIS)-based information that characterises the test locations for ALARM researchers for joint on-site research. Maps are provided in a standardised way and merged with other site-specific information. The application of GIS to these field sites, and the accompanying information management, promotes the use of the FSN for research and for disseminating results. We conclude that ALARM FSN sites, together with other research sites in Europe, could jointly be used as a future backbone for research proposals.
Abstract:
Three new polynuclear copper(II) complexes of 2-picolinic acid (Hpic), {[Cu2(pic)3(H2O)]ClO4}n (1), {[Cu2(pic)3(H2O)]BF4}n (2), and [Cu2(pic)3(H2O)2(NO3)]n (3), have been synthesized by reaction of the "metalloligand" [Cu(pic)2] with the corresponding copper(II) salts. The compounds are characterized by single-crystal X-ray diffraction analyses and variable-temperature magnetic measurements. Compounds 1 and 2 are isomorphous and crystallize in the triclinic system with space group P-1, while 3 crystallizes in the monoclinic system with space group P21/n. The structural analyses reveal that complexes 1 and 2 are constructed from "fish backbone" chains through syn-anti (equatorial-equatorial) carboxylate bridges, which are linked to one another by syn-anti (equatorial-axial) carboxylate bridges, giving rise to a rectangular grid-like two-dimensional net. Complex 3 is formed by alternating chains of syn-anti carboxylate-bridged copper(II) atoms, which are linked together by strong H bonds involving coordinated nitrate ions and water molecules and uncoordinated oxygen atoms from carboxylate groups. The different coordination ability of the anions, along with their involvement in the H-bonding network, seems to be responsible for the difference in the final polymeric structures. Variable-temperature (2-300 K) magnetic susceptibility measurements show the presence of weak ferromagnetic coupling in all three complexes, fitted with a fish-backbone model for 1 and 2 (J = 1.74 and 0.99 cm-1; J' = 0.19 and 0.25 cm-1, respectively) and an alternating-chain model for 3 (J = 1.19 cm-1 and J' = 1.19 cm-1).
Abstract:
This paper highlights the key role played by solubility in influencing gelation and demonstrates that many facets of the gelation process depend on this vital parameter. In particular, we relate the thermal stability (Tgel) and minimum gelation concentration (MGC) values of small-molecule gelators to the solubility and cooperative self-assembly of the gelator building blocks. By employing a van't Hoff analysis of solubility data, determined from simple NMR measurements, we are able to generate Tcalc values that reflect the calculated temperature for complete solubilization of the networked gelator. The concentration dependence of Tcalc allows the previously difficult-to-rationalize "plateau-region" thermal stability values to be elucidated in terms of gelator molecular design. This is demonstrated for a family of four gelators with lysine units attached to each end of an aliphatic diamine, with different peripheral groups (Z or Bee) in different locations on the periphery of the molecule. By tuning the peripheral protecting groups of the gelators, the solubility of the system is modified, which in turn controls its saturation point and hence the concentration at which network formation takes place. We report that the critical concentration (Ccrit) of gelator incorporated into the solid-phase, sample-spanning network within the gel is invariant of gelator structural design. However, because some systems have higher solubilities, they are less effective gelators and require higher total concentrations to achieve gelation, shedding light on the role of the MGC parameter in gelation. Furthermore, gelator structural design also modulates the level of cooperative self-assembly through solubility effects, as determined by applying a cooperative binding model to NMR data.
Finally, the effect of gelator chemical design on the spatial organization of the networked gelator was probed by small-angle neutron and X-ray scattering (SANS/SAXS) on the native gel, and a tentative self-assembly model was proposed.
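Computationally, a van't Hoff analysis of the kind described reduces to a linear fit of ln(solubility) against 1/T, from which dissolution enthalpy and entropy follow; Tcalc is then the temperature at which the fitted solubility equals the total gelator concentration. A rough sketch under that reading, using entirely synthetic data (not the paper's measurements):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def vant_hoff_fit(temps_K, solubilities_M):
    """Least-squares fit of ln(s) vs 1/T: slope = -ΔH/R, intercept = ΔS/R.
    Returns (ΔH in J/mol, ΔS in J/(mol·K))."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(s) for s in solubilities_M]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return -slope * R, intercept * R

def t_calc(total_conc_M, dH, dS):
    """Temperature at which the total concentration is fully dissolved:
    ln(c) = -ΔH/(R·T) + ΔS/R  =>  T = ΔH / (ΔS - R·ln c)."""
    return dH / (dS - R * math.log(total_conc_M))

# Hypothetical NMR-derived solubility data (K, mol/L) for illustration.
temps = [290.0, 300.0, 310.0, 320.0]
sols = [0.010, 0.018, 0.031, 0.050]
dH, dS = vant_hoff_fit(temps, sols)
print(t_calc(0.05, dH, dS))  # close to 320 K for this synthetic data
```

The concentration dependence of Tcalc falls out directly: raising the total concentration raises the temperature needed to dissolve all of it.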
Abstract:
Although the construction pollution index has been put forward and proved to be an efficient approach to reducing or mitigating pollution levels during the construction planning stage, how to select the best construction plan by distinguishing the degree of its potential adverse environmental impacts remains an open research task. This paper first reviews environmental issues and their characteristics in construction, which are critical factors in evaluating the potential adverse impacts of a construction plan. These environmental characteristics are then used to structure two decision models for environmentally conscious construction planning using an analytic network process (ANP): a complicated model and a simplified model. The two ANP models are combined into what is called the EnvironalPlanning system, which is applied to evaluate the potential adverse environmental impacts of alternative construction plans.
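The mechanics of an ANP evaluation can be illustrated in miniature: interdependence among elements is encoded in a column-stochastic "supermatrix" of influence weights, and global priorities are obtained by raising that matrix to powers until its columns converge. A toy sketch (the 3×3 weight matrix is invented for illustration, not taken from either EnvironalPlanning model):

```python
def limit_supermatrix(W, iterations=200):
    """Repeatedly multiply a column-stochastic matrix by itself (via W);
    at convergence every column holds the same global priority vector."""
    n = len(W)
    M = [row[:] for row in W]
    for _ in range(iterations):
        M = [[sum(M[i][k] * W[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return M

# Hypothetical network of 3 environmental criteria: column j lists the
# influence weights criterion j receives (each column sums to 1).
W = [[0.2, 0.5, 0.3],
     [0.5, 0.2, 0.4],
     [0.3, 0.3, 0.3]]
limits = limit_supermatrix(W)
priorities = [row[0] for row in limits]  # any column works at the limit
print(priorities)
```

The resulting priority vector would rank alternative plans by their potential adverse impacts; a real model, of course, carries many more clusters and nodes.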
Abstract:
Purpose - The purpose of this paper is to provide a quantitative multicriteria decision-making approach to knowledge management in construction entrepreneurship education by means of an analytic knowledge network process (KANP).
Design/methodology/approach - The KANP approach integrates a standard industrial classification with the analytic network process (ANP). For construction entrepreneurship education, a decision-making model named KANP.CEEM is built to apply the KANP method to the evaluation of teaching cases, facilitating the case method widely adopted in entrepreneurship education at business schools.
Findings - The study finds that there are eight clusters and 178 nodes in the KANP.CEEM model, and experimental research on the evaluation of teaching cases shows that the KANP method is effective in applying knowledge management to entrepreneurship education.
Research limitations/implications - As experimental research, this paper ignores the concordance between the selected standard classification and others, which may limit the usefulness of the KANP.CEEM model elsewhere.
Practical implications - As the KANP.CEEM model is built on standard classification codes and the embedded ANP, it is expected to have wide potential for evaluating knowledge-based teaching materials for any educational purpose with a construction-industry background, and it can be used by both faculty and students.
Originality/value - This paper fulfils a knowledge management need and offers a practical tool for an academic starting out on the development of knowledge-based teaching cases and other teaching materials, or for a student working through case studies and other learning materials.
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for several years in the field of ontological engineering, with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.