956 results for self-healing materials


Relevância:

80.00%

Publicador:

Resumo:

We introduce an in vitro diagnostic magnetic biosensing platform for immunoassay and nucleic acid detection. The platform has the key characteristics of a point-of-use (POU) diagnostic: portability, low power consumption, low cost, and multiplexing capability. As a demonstration of its capabilities, we use the platform for the room-temperature, amplification-free detection of a 31 bp DNA oligomer and of interferon-gamma (a protein relevant to tuberculosis diagnosis). Reliable assay measurements down to 100 pM for the DNA and 1 pM for the protein are demonstrated. We introduce a novel "magnetic freezing" technique that eliminates baseline measurements and enables spatial multiplexing. We have created a general protocol for adapting integrated circuit (IC) sensors to any of hundreds of commercially available immunoassay kits and to custom-designed DNA sequences.

We also introduce a method for immunotherapy treatment of malignant gliomas, using leukocytes that have internalized immunostimulatory nanoparticle-oligonucleotide conjugates to localize and retain immune cells near the tumor site. As a proof of principle, we develop a novel cell imaging and incubation chamber for in vitro magnetic motility experiments and use the apparatus to demonstrate the controlled movement of magnetically loaded THP-1 leukocytes.

Finally, we introduce an IC transmitter and power amplifier (PA) that utilizes electronic digital infrastructure, sensors, and actuators to self-heal and adapt to process, dynamic, and environmental variation. Traditional IC design has achieved incredible degrees of reliability by ensuring that billions of transistors on a single IC die are all simultaneously functional. This becomes increasingly difficult as transistor sizes shrink; self-healing can mitigate the resulting variations.

Relevância:

80.00%

Publicador:

Resumo:

The self-healing property of smart electricity distribution grids consists in finding a reconfiguration proposal for the distribution system that partially or fully restores the supply of energy to the network's customers when a fault compromises that supply. The search for a satisfactory solution is a combinatorial problem whose complexity is tied to the size of the network: exhaustive search is very slow and often computationally infeasible. To overcome this difficulty, one can rely on techniques for generating minimum spanning trees of the graph that represents the distribution network. Most studies in this area, however, are centralized implementations, in which the reconfiguration proposal is produced by a central supervisory system. This dissertation proposes a distributed implementation in which every switch in the network collaborates in building the reconfiguration proposal. The decentralized solution seeks to reduce the network reconfiguration time under single or multiple faults, thereby increasing the intelligence of the grid. To this end, the distributed GHS algorithm is used as the basis of a self-healing solution to be embedded in the processing elements that make up the line switches of the smart distribution grid. The proposed solution is implemented using robots as processing units that communicate over a shared network, thus constituting a distributed processing environment. The case studies tested show that, for smart distribution grids with a single feeder, the proposed solution successfully reconfigured the network regardless of the number of simultaneous faults. In the proposed implementation, the reconfiguration time does not depend on the number of lines in the network. The implementation achieved communication-cost and time results within the theoretical bounds established by the GHS algorithm.
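The reconfiguration step above reduces to computing a spanning tree of the graph that represents the grid. A minimal centralized sketch of that step, using Kruskal's algorithm as a stand-in for the distributed GHS algorithm the dissertation embeds in the switches (both produce a minimum spanning tree); the toy feeder topology, node names and edge weights are illustrative assumptions:

```python
# Sketch: after a line failure, recompute a spanning tree over the
# remaining edges so every load node stays connected to the feeder.
# Kruskal's algorithm is a centralized stand-in for distributed GHS;
# the example grid below is an illustrative assumption.

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def spanning_tree(nodes, edges):
    """edges: list of (weight, u, v); returns MST edge list (Kruskal)."""
    parent = {n: n for n in nodes}
    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Toy feeder with a redundant (normally open) tie line C-D.
nodes = ["feeder", "A", "B", "C", "D"]
edges = [(1, "feeder", "A"), (1, "A", "B"), (1, "B", "C"),
         (1, "feeder", "D"), (2, "C", "D")]

# Fault on line A-B: drop it and propose a new spanning tree.
healthy = [e for e in edges if {e[1], e[2]} != {"A", "B"}]
print(spanning_tree(nodes, healthy))
```

In the dissertation's distributed setting each switch runs its share of GHS instead; the centralized version above only illustrates the target of the computation.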

Relevância:

80.00%

Publicador:

Resumo:

With the spread of group communication services, research on the security of group communication has grown accordingly. The concept of group-oriented cryptography was first proposed by Desmedt. Compared with the cryptographic schemes of traditional communication, group-oriented cryptography has many advantages and has been applied successfully in scenarios such as video-on-demand and distributed systems. Its important branches, such as group signatures, ring signatures and broadcast encryption, have all received close attention from the academic community. This thesis focuses on one important component of group-oriented cryptography: group key management, i.e. protocols by which multiple (group) users in an insecure, open network environment generate a shared session key that provides security protection for their subsequent communication. The work covers two main topics: group key distribution protocols and provably secure authenticated group key agreement protocols. Building on an extensive survey and analysis of existing group key management schemes, the main contributions of this thesis are:
1. Chapter 2 proposes an efficient, long-lived self-healing group key distribution scheme. Compared with existing schemes, its advantages are: (1) it avoids exponentiation, using only polynomial operations over a field; (2) the group manager broadcasts one message fewer than in the schemes of Staddon et al. and Blundo et al.; (3) each user stores nearly half as many private keys as in the scheme of Staddon et al.; (4) the new protocol is unconditionally secure.
2. Chapter 3 constructs a compiler that transforms a generic group key agreement protocol into a password-authenticated one. The compiler is built from a symmetric encryption scheme, NM-CCA2- and IK-CCA2-secure public-key encryption, and a UF-CMA-secure digital signature scheme, enabling it to resist online/offline dictionary attacks.
3. Chapter 4 proposes a password-authenticated group key agreement protocol in which a gateway, assisted by an authentication server, establishes a session key with multiple users while the server learns nothing about that key (the channel between the gateway and the server is dedicated). Recognising that password leakage is often caused by improper use on the user side, and unlike existing password-authenticated threshold key agreement mechanisms, our threshold scheme is enforced on the user side: the password pre-shared between the user group and the authentication server (the group password) is distributed among the group members by secret sharing, each user's share is itself a memorable password (the user password), and only k or more users together can recover the group password.
4. Chapter 5 uses the Unified model (embedding users' long-term private keys into the key agreement process) to construct a robust authenticated group key agreement protocol that is provably secure in the Strong Corruption model. Compared with similar existing schemes, the new scheme requires markedly fewer signatures, so its computational and communication complexity drop accordingly; few comparable protocols are provably secure in this model.
5. Chapter 6 improves the BD-II (tree-based) group key agreement protocol proposed by Desmedt et al. By applying a mask function at each node of the tree, the effect that membership changes have on key updates is confined to a small subgroup, improving the protocol's efficiency in dynamic settings. The new protocol handles the various dynamic cases of group key agreement with four sub-algorithms: initialisation, member join, member revocation, and subgroup merge.
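The Chapter 2 scheme relies only on polynomial arithmetic over a field. As a hedged illustration of the masking idea common to Staddon-style self-healing key distribution (not the thesis's actual construction), the manager can hide the session key in the constant term of a broadcast polynomial, so that only holders of a personal share f(i) can unmask it:

```python
# Minimal sketch of polynomial masking over a prime field: no
# exponentiation, only field arithmetic. Field size, degree and user
# IDs are illustrative; real schemes use larger fields and add the
# self-healing recovery layers on top of this basic step.
import random

P = 2**31 - 1  # a Mersenne prime

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest coefficient first) over GF(P)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P   # Horner's rule
    return acc

# Setup: the group manager gives user i the personal secret f(i).
t = 3                                     # masking polynomial degree
f = [random.randrange(P) for _ in range(t + 1)]
secrets = {i: poly_eval(f, i) for i in range(1, 6)}

# Distribution: broadcast B(x) = K + f(x), i.e. add the session key
# K to the constant term. Eavesdroppers see only B.
K = random.randrange(P)
B = [(f[0] + K) % P] + f[1:]

# Recovery: user 3 unmasks the session key with its own share.
recovered = (poly_eval(B, 3) - secrets[3]) % P
assert recovered == K
```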

Relevância:

80.00%

Publicador:

Resumo:

When solid material is removed in order to create flow channels in a load-carrying structure, the strength of the structure decreases. On the other hand, a structure with channels is lighter and easier to transport as part of a vehicle. Here, we show that this trade-off can be turned to advantage in the design of a vascular mechanical structure. When the total amount of solid is fixed and the sizes, shapes, and positions of the channels can vary, it is possible to morph the flow architecture so that it endows the mechanical structure with maximum strength. The result is a multifunctional structure that offers not only mechanical strength but also the new capabilities required for volumetric functionalities such as self-healing and self-cooling. We illustrate the generation of such designs for strength and fluid flow for several classes of vasculatures: parallel channels, and trees with one, two, and three bifurcation levels. The flow regime in every channel is laminar and fully developed. In each case, we found that it is possible to select not only the channel dimensions but also their positions such that the entire structure offers more strength and less flow resistance when the total volume (or weight) and the total channel volume are fixed. We show that the minimized peak stress is smaller when the channel volume fraction (φ) is smaller and the vasculature is more complex, i.e., with more levels of bifurcation. Diminishing returns are reached in both directions, decreasing φ and increasing complexity. For example, when φ = 0.02 the minimized peak stress of a design with one bifurcation level is only 0.2% greater than the peak stress of the optimized vascular design with two levels of bifurcation. © 2010 American Institute of Physics.
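Since every channel operates in the laminar, fully developed regime, channel resistances follow the Hagen-Poiseuille law. A small sketch of the kind of resistance bookkeeping involved; the fluid, dimensions and the Hess-Murray daughter sizing are illustrative assumptions, not the paper's optimized designs:

```python
# Illustrative Poiseuille calculation: the flow resistance of a round
# duct is R = 128*mu*L/(pi*D**4). Geometry and fluid properties below
# are assumptions for the sake of the example.
from math import pi

def duct_resistance(mu, L, D):
    """Hagen-Poiseuille flow resistance of a round duct (Pa*s/m^3)."""
    return 128.0 * mu * L / (pi * D**4)

mu = 1e-3            # water, Pa*s
L1, D1 = 0.1, 2e-3   # mother channel: length and diameter (m)

# One bifurcation level: two daughter ducts in parallel, sized by the
# Hess-Murray ratio D2/D1 = 2**(-1/3) (which minimizes resistance at
# fixed channel volume); daughters take half the mother's length here.
D2 = D1 * 2**(-1 / 3)
L2 = L1 / 2
R_mother = duct_resistance(mu, L1, D1)
R_daughter = duct_resistance(mu, L2, D2)
R_total = R_mother + R_daughter / 2   # mother in series with the pair

print(f"mother: {R_mother:.3e}  network: {R_total:.3e}")
```

With this sizing the daughter pair adds a factor 2**(-2/3) ≈ 0.63 of the mother's resistance, the kind of quantity the optimization trades off against peak stress.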

Relevância:

80.00%

Publicador:

Resumo:

Distributed applications are being deployed on ever-increasing scale and with ever-increasing functionality. Due to the accompanying increase in behavioural complexity, self-management abilities, such as self-healing, have become core requirements. A key challenge is the smooth embedding of such functionality into our systems. Natural distributed systems such as ant colonies have evolved highly efficient behaviour. These emergent systems achieve high scalability through the use of low complexity communication strategies and are highly robust through large-scale replication of simple, anonymous entities. Ways to engineer this fundamentally non-deterministic behaviour for use in distributed applications are being explored. An emergent, dynamic, cluster management scheme, which forms part of a hierarchical resource management architecture, is presented. Natural biological systems, which embed self-healing behaviour at several levels, have influenced the architecture. The resulting system is a simple, lightweight and highly robust platform on which cluster-based autonomic applications can be deployed.
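As a toy illustration of the emergent flavour described above (not the paper's actual scheme), anonymous nodes that repeatedly gossip with one random peer and keep the smaller "cluster head" ID they have seen condense into clusters without any central coordinator; the rule, the fully mixed topology and the round count are all assumptions:

```python
# Toy emergent clustering: each node gossips with one random peer per
# round and both adopt the smaller head ID either has seen. Simple,
# anonymous, low-complexity communication; no central coordinator.
import random

random.seed(1)
N = 30
head = list(range(N))    # each node initially claims itself as head

def gossip_round(head):
    for i in range(N):
        j = random.randrange(N - 1)   # random peer other than i
        j += (j >= i)
        head[i] = head[j] = min(head[i], head[j])

for _ in range(20):
    gossip_round(head)

print("distinct clusters:", len(set(head)))
```

The min-adoption rule typically collapses the population toward a single cluster within a few rounds, mirroring how simple replicated behaviour yields robust global structure.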

Relevância:

80.00%

Publicador:

Resumo:

Methods: In this study we determined, for the first time, the ability of microorganisms to traverse microneedle-induced holes using two different in vitro models.

Results: When employing Silescol® membranes, the numbers of Candida albicans, Pseudomonas aeruginosa and Staphylococcus epidermidis crossing the membranes were an order of magnitude lower when the membranes were punctured by microneedles rather than by a 21G hypodermic needle. Apart from the movement of C. albicans across hypodermic needle-punctured membranes, where 40.2% of the microbial load on control membranes permeated the barrier over 24 h, the numbers of permeating microorganisms were less than 5% of the original microbial load on control membranes. Experiments employing excised porcine skin and radiolabelled microorganisms showed that the numbers of microorganisms penetrating skin beyond the stratum corneum were approximately an order of magnitude greater than the numbers crossing Silescol® membranes in the corresponding experiments. Approximately 10³ cfu of each microorganism adhered to hypodermic needles during insertion; the numbers of microorganisms adhering to microneedle (MN) arrays were an order of magnitude higher in each case.

Conclusion: We have shown here that microneedle puncture resulted in significantly less microbial penetration than did hypodermic needle puncture, and that no microorganisms crossed the viable epidermis in microneedle-punctured skin, in contrast to needle-punctured skin. Given the antimicrobial properties of skin, it is therefore likely that application of microneedle arrays to skin in an appropriate manner would not cause either local or systemic infection in normal circumstances in immune-competent patients. To support widespread clinical use of microneedle-based delivery systems, appropriate animal studies are now needed to demonstrate this conclusively in vivo. Safety in patients will be enhanced by aseptic or sterile manufacture and by fabricating microneedles from self-disabling materials (e.g. dissolving or biodegradable polymers) to prevent inappropriate or accidental reuse.

Relevância:

80.00%

Publicador:

Resumo:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick “repairs,” which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network.
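The two guarantees can be stated as checks over a pair of graphs: G_ins, containing only the adversarial insertions, and G_act, the actual healed graph. A small sketch that verifies both bounds by breadth-first search; the example graphs are hand-made for illustration, not produced by the Forgiving Graph algorithm itself:

```python
# Sketch: verifying the stretch bound dist_act(v, w) <= l * log n and
# the degree bound deg_act(v) <= 3 * deg_ins(v) on concrete graphs.
from collections import deque
from math import log2

def bfs_dist(adj, s):
    """Hop distances from s in an adjacency-dict graph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def check_guarantees(G_ins, G_act, n):
    for v in G_ins:
        d_ins = bfs_dist(G_ins, v)
        d_act = bfs_dist(G_act, v)
        for w, l in d_ins.items():
            if w in d_act and l > 0:
                assert d_act[w] <= l * log2(n)       # stretch bound
        assert len(G_act.get(v, ())) <= 3 * len(G_ins.get(v, ()))

# Tiny example: a 4-cycle where repairs added one chord (1-3).
G_ins = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
G_act = {1: [2, 4, 3], 2: [1, 3], 3: [2, 4, 1], 4: [3, 1]}
check_guarantees(G_ins, G_act, n=4)
print("both guarantees hold on this example")
```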

Relevância:

80.00%

Publicador:

Resumo:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick "repairs," which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network. © Springer-Verlag 2012.

Relevância:

80.00%

Publicador:

Resumo:

Semiconductor-sensitised photocatalysis is a well-established and growing area of research, innovation and commercialisation; the latter is mostly limited to the use of TiO2 as the semiconductor. Most work on semiconductor photocatalytic systems uses oxygen as the electron acceptor and explores a wide range of electron donors; such systems can be considered examples of oxidative photocatalysis, OP. OP underpins most current examples of commercial self-cleaning materials, such as glass, tiles, concrete, paint and fabrics. OP, and its myriad applications, have been reviewed extensively over the years, both in this journal and elsewhere. However, the ability of TiO2, and other semiconductor sensitisers, to promote reductive photocatalysis, RP, especially of dyes, is significant and, although less well known, of growing importance. In such systems the source of electrons is some easily oxidised species, such as glycerol. One recent, significant example of an RP process is photocatalyst activity indicator inks, paiis, which provide a measure of the activity of a photocatalytic film under test via the rate of change of colour of the dye in the ink coating due to irreversible RP. In contrast, by incorporating the semiconductor sensitiser in the ink, rather than outside it, it is possible to create an effective UV dosimeter, based on RP, which can be used as a sunburn warning indicator. In the above examples the dye is reduced irreversibly, but when the photocatalyst in an ink is used to reversibly photoreduce a dye, a novel colourimetric oxygen-sensitive indicator ink can be created, which has commercial potential in the food packaging industry. Finally, if no dye is present in the ink, and the semiconductor photocatalyst-loaded ink film coats an easily reduced substrate, such as a metal oxide film, then it can be used to reduce the latter and so, for example, clean up tarnished steel. The above are examples of smart inks, i.e. inks that are active and provide either dynamic information (such as UV dose or O2 level) or a useful function (such as tarnish removal); all work via an RP process and are reviewed here.

Relevância:

80.00%

Publicador:

Resumo:

A resazurin (Rz) based photocatalyst activity indicator ink (paii) is used to test the activity of commercial self-cleaning materials. The semiconductor photocatalyst driven colour change of the ink is monitored indoors and outside using a simple mobile phone application that measures the RGB colour components of the digital image of the paii-covered, irradiated sample in real time. The results correlate directly with those generated using a traditional, lab-bound method of analysis (UV–vis spectrophotometry).
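The app's measurement reduces to averaging the RGB components over the imaged ink region in each frame. A minimal sketch of that step; the synthetic frames below stand in for real photos of the paii-coated sample, and the specific pixel values are made up:

```python
# Sketch: average the RGB components over a region of interest and
# track one channel over time, as the mobile-phone analysis does.

def mean_rgb(pixels):
    """Mean (R, G, B) of an iterable of (r, g, b) pixel tuples."""
    n = 0
    r = g = b = 0
    for pr, pg, pb in pixels:
        r += pr; g += pg; b += pb
        n += 1
    return (r / n, g / n, b / n)

# Resazurin (blue) is photoreduced to resorufin (pink), so the red
# component rises with irradiation time. Synthetic 4-pixel "frames":
frames = [
    [(40, 60, 180)] * 4,    # t = 0, blue ink
    [(120, 70, 150)] * 4,   # intermediate
    [(200, 80, 120)] * 4,   # endpoint, pink
]
red_trace = [mean_rgb(f)[0] for f in frames]
print(red_trace)  # [40.0, 120.0, 200.0]
```

Plotting such a trace against irradiation time gives the same kinetics that the lab-bound UV-vis measurement reports, which is why the two correlate.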

Relevância:

80.00%

Publicador:

Resumo:

Over the last half-century religion, like every other domain of human activity, has undergone several major transformations. The considerable decline, during this period, in institutionalized religious practice has been accompanied by the appearance of a multitude of new forms of spirituality that attempt to meet the religious needs of Western people. Among these many manifestations we find a distinct spirituality, called neo-shamanism or urban shamanism, encompassing numerous contemporary practices that define themselves as shamanic. In this thesis we focus on the neo-shamanic phenomenon of Michael Harner, former professor and chair of the anthropology department at the Graduate Faculty of the New School for Social Research in New York and founder of the Foundation for Shamanic Studies (hereafter: FSS). The present research aims to analyse Harner's neo-shamanism and to situate it among the new forms of religiosity. Danièle Hervieu-Léger's theory of the recomposition of the religious serves as the analytical framework for this new form of spirituality. In the first part, we treat the contemporary religious phenomenon from a socio-religious angle, offering an overview of the transformations it has undergone in modernity under the impact of secularization. Using Danièle Hervieu-Léger's theory of new religious forms, we highlight the main rules according to which a new configuration of the religious unfolds in the contemporary Western context. In the second part, we examine traditional shamanism from an anthropological angle through readings of classic studies of shamanism. 
It appears that classical shamanism is communal, reserved for a few individuals chosen by the spirits, and that the laborious initiation process giving access to this role involves specific elements, among them initiatory illness, ritual death, and the resurrection of the person. In the third part, we examine Harner's neo-shamanism. We review his major work, La voie spirituelle du chamane : Le Secret d'un sorcier indien d'Amérique du Nord, and examine the shamanic experience of the Foundation for Shamanic Studies (FSS). This study shows that Harner's approach presents itself as the expression of a fundamental, universal shamanism adapted to contemporary society. His neo-shamanic practice centres on the shamanic journey and on contact with the spirit world as elements at the heart of traditional shamanism. It is a practice aimed mainly at the individual, for purposes of self-fulfilment and (self-)healing, and it generally attracts highly educated people willing to pay for the services the foundation provides. At the end of our study we draw the following general conclusions: Harner's neo-shamanism departs from the shamanic tradition and transforms it into a new spirituality adapted to Western needs; it reflects the transformations religion has undergone in the modern period; it addresses mainly a public in search of targeted, occasional spiritual services and fosters a temporary, intense form of communalization; at the same time, the individuation of shamanic practice carries political and neo-colonial effects.