96 results for Tokens
Abstract:
Recognising that charitable behaviour can be motivated by public recognition and emotional satisfaction, not-for-profit organisations have developed strategies that leverage self-interest over altruism by enabling individuals to donate conspicuously. Initially developed as novel marketing programs to increase donation income, such conspicuous tokens of recognition are now being recognised as important value propositions for nurturing donor relationships. Despite this, there is little empirical evidence identifying when donations can be increased through conspicuous recognition. Furthermore, social media’s growing popularity for self-expression, as well as the increasing use of technology in donor relationship management strategies, makes an examination of virtual conspicuous tokens of recognition, in relation to the value donors seek, particularly insightful. Therefore, this research examined the impact of experiential donor value and virtual conspicuous tokens of recognition on blood donor intentions. Using online survey data from 186 Australian blood donors, results show that emotional value is in fact a stronger predictor of intentions to donate blood than altruistic value, while social value is the strongest predictor of intentions when recognition is provided. Clear linkages between dimensions of donor value (altruistic, emotional and social) and conspicuous donation behaviour (CDB) were identified. The findings provide valuable insights into the use of conspicuous donation tokens of recognition on social media, and contribute to our understanding of the under-researched areas of donor value and CDB.
Abstract:
The research reported here addresses the problem of detecting and tracking independently moving objects from a moving observer in real time, using corners as object tokens. Corners are detected using the Harris corner detector, and local image-plane constraints are employed to solve the correspondence problem. The approach relaxes the restrictive static-world assumption conventionally made, and is therefore capable of tracking independently moving and deformable objects. Tracking is performed without the use of any 3-dimensional motion model. The technique is novel in that, unlike traditional feature-tracking algorithms where feature detection and tracking are carried out over the entire image-plane, here they are restricted to those areas most likely to contain meaningful image structure. Two distinct types of instantiation regions are identified, these being the “focus-of-expansion” region and “border” regions of the image-plane. The size and location of these regions are defined from a combination of odometry information and a limited knowledge of the operating scenario. The algorithms developed have been tested on real image sequences taken from typical driving scenarios. Implementation of the algorithm using T800 Transputers has shown that near-linear speedups are achievable, and that real-time operation is possible (half-video rate has been achieved using 30 processing elements).
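The Harris detector named in this abstract can be sketched compactly. The following is a minimal NumPy/SciPy illustration; the window scale, the sensitivity constant k = 0.04 and the relative threshold are illustrative assumptions, not the settings used in the research.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_corners(image, sigma=1.0, k=0.04, thresh_rel=0.01):
    """Return (row, col) coordinates of Harris corner candidates.

    image: 2-D float array (grayscale).
    sigma: scale of the Gaussian window averaging the gradient products.
    k:     Harris sensitivity constant (commonly 0.04-0.06).
    """
    # Image gradients.
    Ix = sobel(image, axis=1, mode="reflect")
    Iy = sobel(image, axis=0, mode="reflect")

    # Smoothed products of gradients (entries of the structure tensor M).
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)

    # Harris corner response: det(M) - k * trace(M)^2.
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    response = det - k * trace * trace

    # Keep responses above a fraction of the maximum response.
    return np.argwhere(response > thresh_rel * response.max())
```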
Abstract:
The research reported here addresses the problem of detecting and tracking independently moving objects from a moving observer in real time, using corners as object tokens. Local image-plane constraints are employed to solve the correspondence problem, removing the need for a 3D motion model. The approach relaxes the restrictive static-world assumption conventionally made, and is therefore capable of tracking independently moving and deformable objects. The technique is novel in that feature detection and tracking are restricted to areas likely to contain meaningful image structure. Feature instantiation regions are defined from a combination of odometry information and a limited knowledge of the operating scenario. The algorithms developed have been tested on real image sequences taken from typical driving scenarios. Preliminary experiments on a parallel (transputer) architecture indicate that real-time operation is achievable.
Abstract:
Objective This paper presents an automatic active learning-based system for the extraction of medical concepts from clinical free-text reports. Specifically, it determines (1) the contribution of active learning in reducing the annotation effort, and (2) the robustness of an incremental active learning framework across different selection criteria and datasets. Materials and methods The comparative performance of an active learning framework and a fully supervised approach was investigated to study how active learning reduces the annotation effort while achieving the same effectiveness as a supervised approach. Conditional Random Fields were used as the supervised method, with least confidence and information density as the two selection criteria for the active learning framework. The effect of incremental learning vs. standard learning on the robustness of the models within the active learning framework with different selection criteria was also investigated. Two clinical datasets were used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab. Results The annotation effort saved by active learning to achieve the same effectiveness as supervised learning is up to 77%, 57%, and 46% of the total number of sequences, tokens, and concepts, respectively. Compared to the random sampling baseline, the saving is at least doubled. Discussion Incremental active learning guarantees robustness across all selection criteria and datasets. The reduction of annotation effort is always above the random sampling and longest-sequence baselines. Conclusion Incremental active learning is a promising approach for building effective and robust medical concept extraction models, while significantly reducing the burden of manual annotation.
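As an illustration of the least-confidence criterion mentioned in this abstract, the sketch below picks the unlabelled sequences whose most likely labelling the model is least sure about. The per-token marginal interface and the mean-over-tokens aggregation are assumptions for illustration, not the system described in the paper.

```python
import numpy as np

def least_confidence_selection(token_marginals, batch_size=10):
    """Pick the unlabelled sequences the model is least confident about.

    token_marginals: list of 2-D arrays, one per sequence, of shape
                     (sequence_length, n_labels) with per-token label
                     probabilities (e.g. CRF marginals).
    Returns the indices of `batch_size` sequences to send for annotation.
    """
    scores = []
    for marg in token_marginals:
        # Confidence of the most likely label at each token position.
        best = marg.max(axis=1)
        # One common approximation of sequence confidence: mean over tokens.
        scores.append(best.mean())
    # Least confident sequences first.
    order = np.argsort(scores)
    return order[:batch_size].tolist()
```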
Abstract:
This paper presents a new active learning query strategy for information extraction, called Domain Knowledge Informativeness (DKI). Active learning is often used to reduce the amount of annotation effort required to obtain training data for machine learning algorithms. A key component of an active learning approach is the query strategy, which is used to iteratively select samples for annotation. Knowledge resources have been used in information extraction as a means to derive additional features for sample representation. DKI is, however, the first query strategy that exploits such resources to inform sample selection. To evaluate the merits of DKI, in particular with respect to the reduction in annotation effort that the new query strategy makes possible, we conduct a comprehensive empirical comparison of active learning query strategies for information extraction within the clinical domain. The clinical domain was chosen for this work because of the availability of extensive structured knowledge resources, which have often been exploited for feature generation. In addition, the clinical domain offers a compelling use case for active learning because of the high costs and hurdles associated with obtaining annotations in this domain. Our experimental findings demonstrate that (1) among existing query strategies, those based on the classification model’s confidence are a better choice for clinical data, as they perform equally well with a much lighter computational load, and (2) significant reductions in annotation effort are achievable by exploiting knowledge resources within active learning query strategies, with up to 14% fewer tokens and concepts to annotate manually than with state-of-the-art query strategies.
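The abstract does not give the DKI formulation itself. Purely as an illustration of how a knowledge resource could inform sample selection, the sketch below boosts an uncertainty score for sequences containing many terms found in a domain lexicon; the function name, the lexicon, and the weighting are hypothetical.

```python
def knowledge_informed_scores(sequences, uncertainty, lexicon, alpha=0.5):
    """Combine model uncertainty with a simple domain-knowledge signal.

    sequences:   list of token lists for unlabelled samples.
    uncertainty: list of floats in [0, 1], one per sequence (higher = less sure).
    lexicon:     set of lower-cased terms from a structured knowledge
                 resource (e.g. a clinical terminology) -- hypothetical here.
    alpha:       weight between uncertainty and knowledge coverage.
    """
    scores = []
    for tokens, unc in zip(sequences, uncertainty):
        # Fraction of tokens that match the knowledge resource.
        coverage = sum(t.lower() in lexicon for t in tokens) / max(len(tokens), 1)
        scores.append(alpha * unc + (1 - alpha) * coverage)
    return scores  # higher score = more worth annotating
```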
Abstract:
An interactive graphics package for modeling with Petri Nets has been implemented. It uses the VT-11 graphics terminal supported on the PDP-11/35 computer to draw, execute, analyze, edit and redraw a Petri Net. Each of these tasks can be performed by selecting the appropriate item from a menu displayed on the screen. Petri Nets with a reasonably large number of nodes can be created and analyzed using this package, and the number of nodes supported may be increased by making simple changes in the program. Being interactive, the program seeks information from the user after displaying appropriate messages on the terminal. After the Petri Net is complete, it may be executed step by step, and the changes in the number of tokens at each place may be observed on the screen. Properties of Petri Nets such as safety, boundedness, conservation and redundancy can be checked using this package. The package can be used very effectively for modeling asynchronous (concurrent) systems with Petri Nets and simulating the model by “graphical execution.”
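As a minimal illustration of the step-by-step execution described above (without the VT-11 graphics), the sketch below fires enabled transitions of a place/transition net and reports the marking after each step; the data layout and the toy net are assumptions, not the package's internals.

```python
def enabled(marking, pre):
    """Transitions whose input places all hold enough tokens."""
    return [t for t, needs in pre.items()
            if all(marking[p] >= n for p, n in needs.items())]

def fire(marking, pre, post, t):
    """Fire transition t: consume input tokens, produce output tokens."""
    new = dict(marking)
    for p, n in pre[t].items():
        new[p] -= n
    for p, n in post[t].items():
        new[p] = new.get(p, 0) + n
    return new

# A two-place, two-transition net: t1 moves a token p1 -> p2, t2 moves it back.
pre  = {"t1": {"p1": 1}, "t2": {"p2": 1}}
post = {"t1": {"p2": 1}, "t2": {"p1": 1}}
marking = {"p1": 1, "p2": 0}

for step in range(4):                 # step-by-step execution
    ts = enabled(marking, pre)
    if not ts:
        break
    marking = fire(marking, pre, post, ts[0])
    print(step, marking)              # observe the token count at each place
```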
Abstract:
A new class of nets, called S-nets, is introduced for the performance analysis of scheduling algorithms used in real-time systems. Deterministic timed Petri nets do not adequately model the scheduling of resources encountered in real-time systems, and need to be augmented with resource places, signal places, and a scheduler block to facilitate the modeling of scheduling algorithms. The tokens are colored, and the transition firing rules are suitably modified. Further, the concept of transition folding is used to obtain intuitively simple models of multiframe real-time systems. Two generic performance measures, called “load index” and “balance index,” which characterize the resource utilization and the uniformity of workload distribution, respectively, are defined. The utility of S-nets for evaluating heuristic-based scheduling schemes is illustrated by considering three heuristics for real-time scheduling. S-nets are useful in tuning the hardware configuration and the underlying scheduling policy, so that system utilization is maximized and the workload distribution among the computing resources is balanced.
Abstract:
Philosophy lived through a time of raw luminosity in which there was contentment (at least among the philosophers deemed worthy of study) with the postulation of the conditions of possibility that fit within the horizon that this light could then illuminate. Everything that escaped this horizon was obscurity, irrationality, mere speculation and, worst of all offences, metaphysics. But some twentieth-century philosophy found another distribution of luminosity that allowed thought in chiaroscuro, in nuanced tonalities in which the absolute sharpness of contours became fluid, in which pure and solid figures showed themselves to be hybrid, spilled nebulae, in which objects entered history and humans mingled with nature, in which the movements of the world and the images in consciousness left behind the duality of primary and secondary qualities and ventured into new perspectives (adventures that still run from phenomenology to cinema). It is in this scenario of new distributions that the question of individuation reappears, pointing to another conception of the individual, no longer substantial and a bearer of qualities, no longer anchored in the pairs matter and form, actual and potential. Such pairs prove insufficient because they do not account for the impurities that come to the surface and for the surprising possibilities that are invented (symbioses, alliances, infections) rather than merely actualised from a potential (filiation, reproduction). In this new mode of composing, the attribute no longer refers to a quality predicate but to the event, no longer to latent possibilities but to the potency to be invented in compositions, in the constituent relations of the different modes of existence. Simondon was the first philosopher to take into account, in a specific way, the individual inventing itself in composition, and thus he renewed the question of individuation and transformed the status of relation. Being is relation: with Simondon, this is the proposition that comes to occupy the centre of the thought of individuation. But if, on the one hand, there was this promotion of relation, on the other hand there seemed to be no liberation of the modes, which, in the end, referred back to a nature of possibles. Even though it did not function as a principle, this nature seemed to capture the modes within a potential, so that a new humanism, as suffocating as any other, accompanied the whole of Simondon's production. Not by chance, his narrative of the different modes closes with the encounter of a unity capable of supporting the multirealism of the hybrids that were appearing everywhere. Are the goals, the directions of becoming that populate Simondon's work, not the echoes of an old morality of purity, of white luminosity? To escape this folding-back of the adventure of the modes onto privileged types of relation that led to restoring the lost unity, it was necessary to plunge into the innocence of the process of the innumerable activities of a fabric without a base, a play of impure lines in crossings invented at every moment. In this sense: was Whitehead's speculative thought, expressed in his own zig-zag writing, already an antidote to the possible folding-back of the new philosophy of individuation onto an all-too-recognised world? This is the spirit of composition in this work: Simondonian individuation together with the insistence, found in Whitehead (and in the many allies summoned so that other musics might be heard), on surrendering to the innocent adventure of processes in the making.
Abstract:
This study investigated whether the conception present in Replicator Theory, expressed through the concept of the meme (DAWKINS, 1979), could be a compatible model for explaining the propagation of memes on the substrate of social media. Within local studies, Recuero (2006) suggested a transduction of this model, based on the conceptions of Dawkins (1979). Reflecting on the epistemological position of Recuero (2006), the present work, drawing on Dennett (1995), Blackmore (2002) and Tyler (2011b; 2013b), carried out the stages of Conceptual and Compositional Analysis of this transduction. Starting from the concept of the memeplex (BLACKMORE, 2002), this linguistically grounded research (HALLIDAY, 1987) understands memes, on the substrate of digital/social media, as linguistic-media practices of production and distribution, propagated through various units of propagation and through the relations created by internet users in this transmission process. Investigating such relations through the stage of Relational Analysis, it proposes to examine two units of propagation: memetic expressions ("Que deselegante" and "#Tenso") and memetic images (from the memetic phenomenon "Nana em desastres"). The study comprises two corpora of memetic expressions (5,275 posts originating from or redirected to Twitter.com, totalling 83,655 words/tokens) and a bilingual (Portuguese/English) corpus of memetic images (a total of 134 images from Tumblr.com and Facebook.com). To analyse the corpora of memetic expressions, Corpus Linguistics methodology was used (BERBER-SARDINHA, 2004; SHEPHERD, 2009; SOUZA JÚNIOR, 2012, 2013b, 2013c). For the analysis of the multimodal corpus of memetic images, the methodology we call Propagation Analysis was used. We aimed to verify whether these units of propagation, and the linguistic-media practices they transmit, would evolve only due to memetic-media aspects, as Recuero (2006) had indicated, and with an internalist pattern of propagation (DAWKINS, 1979; 1982). After analysis of the data, it was revealed that, at the level of purpose, the local phenomena investigated did not evolve through an internalist (or homogeneous) pattern of propagation; such patterns proved to be of an externalist (or heterogeneous) nature. Moreover, it was found that constitutive memetic principles of evolution such as fecundity and longevity (DAWKINS, 1979; 1982) and design (DENNETT, 1995), together with the media principle of evolution of reach (RECUERO, 2006), remained present with a high degree of influence on propagations of an externalist nature. On the other hand, the memetic principle of fidelity (DAWKINS, 1979; 1982) was the one that least influenced these propagation patterns. Neutralising fidelity, and driven by the design principle, the systematising linguistic principles revealed by this study stood out in this evolutionary process, namely the principle of functionality (memes evolve because they can indicate different purposes) and the principle of linguistic reach (memes can be directed at animate/inanimate items, and at internet users in a native or foreign language).
Abstract:
RFID is a technology that enables the automated capture of observations of uniquely identified physical objects as they move through supply chains. Discovery Services provide links to repositories that hold traceability information about specific physical objects. Each supply chain party publishes records to a Discovery Service to create such links and also specifies access control policies to restrict who has visibility of link information, since it is commercially sensitive and could reveal inventory levels, flow patterns, trading relationships, etc. The requirement of being able to share information on a need-to-know basis, e.g. within the specific chain of custody of an individual object, poses a particular challenge for authorization and access control: in many supply chain situations the information owner might not know in advance all the companies that should be authorized to view the information, because the path taken by an individual physical object emerges only over time rather than being fully pre-determined at the time of manufacture. This led us to consider novel approaches to delegating trust and controlling access to information. This paper presents an assessment of visibility restriction mechanisms for Discovery Services capable of handling emergent object paths. We compare three approaches: enumerated access control (EAC), chain-of-communication tokens (CCT), and chain-of-trust assertions (CTA). A cost model was developed to estimate the additional cost of restricting visibility in a baseline traceability system, and the estimates were used to compare the approaches and to discuss the trade-offs. © 2012 IEEE.
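The chain-of-communication-token (CCT) approach is only named in this abstract. Purely as an illustration of the general idea, each custodian handing the next party a token that ties it into the object's chain of custody, the sketch below chains HMAC-signed hand-over records; the key handling, record format, and party names are assumptions, not the paper's protocol.

```python
import hmac, hashlib, json

def handover_token(secret, object_id, previous_token, holder):
    """Issue a token linking `holder` into the chain of custody of an object.

    secret:         key of the party issuing the hand-over (assumed to be
                    verifiable by the Discovery Service).
    previous_token: token received from the previous custodian, or b"" at origin.
    """
    record = json.dumps({"object": object_id, "holder": holder}).encode()
    mac = hmac.new(secret, previous_token + record, hashlib.sha256).digest()
    return record + b"|" + mac

# The manufacturer issues the first token; each custodian then extends the chain.
root = handover_token(b"manufacturer-key", "EPC:1234", b"", "DistributorA")
link = handover_token(b"distributor-a-key", "EPC:1234", root, "RetailerB")
```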
Abstract:
Service-Oriented Architecture (SOA) and Web Services (WS) offer advanced flexibility and interoperability capabilities. However, they imply significant performance overheads that need to be carefully considered. Supply Chain Management (SCM) and traceability systems are an interesting domain for the use of WS technologies, which are usually deemed too complex and unnecessary in practical applications, especially regarding security. This paper presents an externalized security architecture that uses the eXtensible Access Control Markup Language (XACML) authorization standard to enforce visibility restrictions on traceability data in a supply chain where multiple companies collaborate; the performance overheads are assessed by comparing 'raw' authorization implementations - Access Control Lists, Tokens, and RDF Assertions - with their XACML equivalents. © 2012 IEEE.
Abstract:
In applied research on fuzzy Petri nets, a widespread problem is that fuzzy token values are given directly by experts or assumed subjectively. To address this, a fuzzy statistical method is proposed for obtaining the fuzzy tokens of places, creating the conditions for the successful application of fuzzy Petri net theory. A general formalised algorithm for computing fuzzy tokens is given. An example demonstrates the feasibility and effectiveness of the fuzzy statistical method in deriving fuzzy tokens.
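The abstract gives no formulas. As an illustration of the fuzzy-statistical idea, estimating a place's fuzzy token (membership degree) from the relative frequency with which repeated judgements find the proposition to hold, the following sketch is offered; the 0/1 trial format and the example numbers are assumptions.

```python
def fuzzy_token_value(trials):
    """Estimate the fuzzy token (truth degree) of a place by fuzzy statistics.

    trials: list of 0/1 judgements, one per assessment (e.g. repeated expert
            or experimental judgements of whether the proposition holds).
    The membership degree is the relative frequency of positive judgements,
    which stabilises as the number of trials grows.
    """
    if not trials:
        raise ValueError("at least one judgement is required")
    return sum(trials) / len(trials)

# e.g. 17 of 20 assessments judge the proposition to hold -> token value 0.85
print(fuzzy_token_value([1] * 17 + [0] * 3))
```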
Abstract:
With the continuous development of electronic and computer technology, control systems for industrial production processes are moving towards intelligence, digitalisation and networking. Traditional distributed (DCS) control and hierarchical computer control are giving way to bus-network control that combines intelligent terminals with networks. Today, distributed automation systems in factory process-control environments are becoming increasingly complex; in particular, the devices within a system need to exchange large amounts of information quickly in order to achieve more precise control of the controlled system and to provide auxiliary evaluation functions. This means that bandwidth and communication rates must keep increasing to meet the needs of network communication. Among the many network devices available, the CAN bus, with its clear definition, very high reliability and distinctive design, is considered one of the most effective ways to solve this problem. Moreover, among communication-based products on the market, and considering real-time performance, the CAN bus has a simpler structure and better real-time behaviour owing to its message-oriented (rather than station-addressed) communication scheme. Against this background, we use the CAN bus as the communication medium to connect, in an orderly way, the sensors, actuators and controllers distributed across the control sites, forming a distributed local-area network control system based on the CAN bus. This thesis first introduces the overall structure of the CAN-bus-based distributed data acquisition and control system. On the hardware side, it then describes the functions, hardware configuration and concrete implementation of the CAN-based communication protocol conversion unit, data acquisition unit and output control unit, and gives the performance specifications of each unit. On the software side, using the C language as the platform, host-computer management and monitoring software for the CAN bus was developed, implementing system management and control of the entire network of devices. For this bus system, the author applied PID control and fuzzy control algorithms to control the liquid level of a water tank, with satisfactory results. The CAN-bus-based control system solves problems that are difficult for distributed control systems to address, and the application of fuzzy control makes it possible to apply the bus control system to processes that are nonlinear, have large time delays and are difficult to model precisely.
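As a minimal illustration of the PID loop mentioned for the tank-level control: the gains, sample time and the toy first-order tank model below are illustrative assumptions, not the system's tuned values.

```python
def pid_step(setpoint, measurement, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """One PID update; `state` carries the integral and the previous error."""
    error = setpoint - measurement
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Toy first-order tank: the level rises with inflow u and drains proportionally.
level, state = 0.0, {"integral": 0.0, "prev_error": 0.0}
for _ in range(200):
    u = pid_step(setpoint=1.0, measurement=level, state=state)
    u = max(0.0, min(u, 5.0))          # actuator (pump) saturation
    level += (u - 0.8 * level) * 0.1   # dt = 0.1 s
print(round(level, 3))                 # settles near the 1.0 setpoint
```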
Abstract:
This research is concerned with the development of tactual displays to supplement the information available through lipreading. Because voicing carries a high informational load in speech and is not well transmitted through lipreading, the efforts are focused on providing tactual displays of voicing to supplement the information available on the lips of the talker. This research includes exploration of 1) signal-processing schemes to extract information about voicing from the acoustic speech signal, 2) methods of displaying this information through a multi-finger tactual display, and 3) perceptual evaluations of voicing reception through the tactual display alone (T), lipreading alone (L), and the combined condition (L+T). Signal processing for the extraction of voicing information used amplitude-envelope signals derived from filtered bands of speech (i.e., envelopes derived from a lowpass-filtered band at 350 Hz and from a highpass-filtered band at 3000 Hz). Acoustic measurements made on the envelope signals of a set of 16 initial consonants, represented through multiple tokens of C1VC2 syllables, indicate that the onset-timing difference between the low- and high-frequency envelopes (EOA: envelope-onset asynchrony) provides a reliable and robust cue for distinguishing voiced from voiceless consonants. This acoustic cue was presented through a two-finger tactual display such that the envelope of the high-frequency band was used to modulate a 250-Hz carrier signal delivered to the index finger (250-I) and the envelope of the low-frequency band was used to modulate a 50-Hz carrier delivered to the thumb (50-T). The temporal-onset order threshold for these two signals, measured with roving signal amplitude and duration, averaged 34 msec, sufficiently small for use of the EOA cue. Perceptual evaluations of the tactual display of EOA with speech signals indicated: 1) that the cue was highly effective for discrimination of pairs of voicing contrasts; 2) that the identification of 16 consonants was improved by roughly 15 percentage points with the addition of the tactual cue over L alone; and 3) that no improvements in L+T over L were observed for reception of words in sentences, indicating the need for further training on this task.
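A compact way to see the EOA measurement described above is sketched below: band-filter the speech, take amplitude envelopes, and compare onset times. The filter orders, the Hilbert-based envelope, and the 10%-of-peak onset criterion are illustrative assumptions, not the study's exact processing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_onset(x, fs, onset_frac=0.1):
    """Time (s) at which the amplitude envelope first exceeds a fraction of its peak."""
    env = np.abs(hilbert(x))
    idx = np.argmax(env > onset_frac * env.max())
    return idx / fs

def envelope_onset_asynchrony(speech, fs):
    """EOA: onset time of the low-frequency envelope minus that of the high-frequency one."""
    low_sos = butter(4, 350, btype="lowpass", fs=fs, output="sos")
    high_sos = butter(4, 3000, btype="highpass", fs=fs, output="sos")
    low = sosfiltfilt(low_sos, speech)
    high = sosfiltfilt(high_sos, speech)
    return envelope_onset(low, fs) - envelope_onset(high, fs)
```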
Abstract:
This report shows how knowledge about the visual world can be built into a shape representation in the form of a descriptive vocabulary making explicit the important geometrical relationships comprising objects' shapes. Two computational tools are offered: (1) shape tokens are placed on a Scale-Space Blackboard; (2) dimensionality reduction captures deformation classes in configurations of tokens. Knowledge lies in the token types and deformation classes tailored to the constraints and regularities of particular shape worlds. A hierarchical shape vocabulary has been implemented, supporting several later visual tasks in the two-dimensional shape domain of the dorsal fins of fishes.
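The dimensionality reduction over configurations of tokens mentioned above can be pictured with a small sketch: stack each shape's token coordinates into a vector and apply PCA, so the leading components describe the main deformation modes of that shape family. The use of plain PCA and the fixed token count per shape are assumptions for illustration, not the report's method.

```python
import numpy as np

def deformation_modes(configurations, n_modes=2):
    """PCA over token configurations.

    configurations: array of shape (n_shapes, n_tokens, 2) -- each shape is
                    described by the 2-D positions of a fixed set of tokens.
    Returns the mean configuration and the leading deformation modes.
    """
    X = configurations.reshape(len(configurations), -1)   # flatten (x, y) pairs
    mean = X.mean(axis=0)
    # Principal components of the centred configurations.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = vt[:n_modes]                                   # rows = deformation modes
    return mean.reshape(-1, 2), modes.reshape(n_modes, -1, 2)
```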