962 results for full implementation
Abstract:
An organic integrated pixel consisting of an organic light-emitting diode driven by an organic thin-film field-effect transistor (OTFT) was fabricated by a full evaporation method on a transparent glass substrate. The OTFT was designed with a top-gate structure, and the insulator is composed of a double-layer polymer of Nylon 6 and Teflon to lower the operating voltage and the gate-leakage current and to improve device stability. The field-effect mobility of the OTFT is more than 0.5 cm² V⁻¹ s⁻¹, and the on/off ratio is larger than 10³. The brightness of the pixel reached 300 cd m⁻² at a driving current of 50 µA.
Abstract:
Introduction: Conventional polymers such as polyethylene and polypropylene persist for many years after land disposal. Furthermore, plastics are often soiled by food and other biological substances, making physical recycling of those materials impractical and generally undesirable. In contrast, biodegradable polymers disposed of in a bioactive environment are degraded by the enzymatic action of microorganisms such as bacteria, fungi, and algae. The worldwide consumption of biodegradable polymers increased from 1.4 × 10⁷ kg in ...
Abstract:
A kind of fully biodegradable film material is discussed in this article. The film material is composed of starch, PVA, and degradable polyesters (PHB, PHB-V, PCL) with a built-in plasticizer, a cross-linking reinforcing agent and a wet-strength agent. It contains a high percentage of starch, is low in cost, is excellent in weather fastness, temperature resistance and water resistance, and can be completely biodegraded. The present paper deals mainly with a new technical route using a new type of electromagnetic dynamic blow-molding extruder and with some effects on the mechanical properties of the system.
Abstract:
Transglutaminase can catalyze cross-linking between soluble clotting protein molecules from the plasma, preventing excess blood loss from a wound and blocking micro-organisms from invading the wound in crustaceans. A novel transglutaminase (FcTG) gene was cloned from hemocytes of Chinese shrimp Fenneropenaeus chinensis by 3' and 5' rapid amplification of cDNA ends (RACE) PCR. The full-length cDNA consists of 2972 bp, encoding 757 amino acids with a calculated molecular mass of 84.96 kDa and a theoretical isoelectric point of 5.61. FcTG contains a typical transglutaminase-like homologue (TGc domain: E-value = 1.94e-38). Three catalytic sites (Cys-324, His-391 and Asp-414) are present in this domain. The deduced amino acid sequence of FcTG showed high identity with black tiger shrimp TG, kuruma shrimp TG and crayfish TG. Transcripts of FcTG mRNA were mainly detected in gill, lymphoid organ and hemocytes by RT-PCR. RNA in situ hybridization further confirmed that FcTG was constitutively expressed in hemocytes both in the circulatory system and in the lymphoid organ. The variation of mRNA transcription level in hemocytes and lymphoid organ following injection of killed bacteria or infection with white spot syndrome virus (WSSV) was quantified by RT-PCR. The up-regulated expression of FcTG in the shrimp lymphoid organ following injection of bacteria indicates that it is inducible and might be associated with bacterial challenge. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Building on an analysis of several common encoder-based speed measurement methods, this paper proposes a high-performance adaptive speed measurement method. The method selects a variable time period and encoder pulse count to measure the number of encoder pulses per unit time, and then obtains the speed value through a simple calculation. The digital signal processor (DSP) chip integrates a quadrature pulse-encoding circuit and offers fast data processing and strong real-time performance. The proposed method was implemented on the motor-control DSP chip TMS320LF2407A. Experimental results show that the measurement accuracy at low speed and the response time of the system can be improved at the same time. The method has been successfully applied in an independently developed fully digital servo drive system.
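Note: as an illustrative sketch only (the abstract does not give the authors' formula or window-adaptation rule), the basic idea of counting encoder pulses over a variable window and converting the count to a speed can be written as follows. The function names, the 4096-counts-per-revolution encoder and the window limits are assumptions; on the TMS320LF2407A the equivalent logic would run in fixed-point C against the on-chip quadrature encoder circuit mentioned above.

    # Illustrative sketch of adaptive encoder-based speed measurement
    # (not the authors' code): count quadrature pulses over a window whose
    # length adapts to the speed, then convert the count to rev/min.

    def measure_speed(pulse_count, window_s, pulses_per_rev=4096):
        """Convert pulses counted in window_s seconds to speed in rev/min."""
        return pulse_count / pulses_per_rev / window_s * 60.0

    def adapt_window(prev_speed_rpm, pulses_per_rev=4096, target_pulses=100,
                     min_s=0.001, max_s=0.1):
        """Pick the next measurement window so that roughly target_pulses
        pulses fall inside it: longer at low speed for resolution, shorter
        at high speed for fast response."""
        if prev_speed_rpm <= 0:
            return max_s
        pulse_rate = prev_speed_rpm / 60.0 * pulses_per_rev  # pulses per second
        return min(max_s, max(min_s, target_pulses / pulse_rate))

    # Example: 512 pulses seen in a 10 ms window with 4096 counts per rev.
    print(measure_speed(512, 0.010))   # 750.0 rev/min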
Abstract:
This paper introduces the idea of S/T-curve velocity planning into a fully digital servo drive system, improving the overall control performance of the servo system by making the velocity profile smoother, especially during high-speed start-up and braking. The proposed algorithm was implemented on a fixed-point digital signal processor (DSP) chip. Because of the limitations of fixed-point arithmetic, the implementation requires special handling; this paper studies the issue and proposes a remainder-compensation scheme. Experimental results show that the proposed method improves the smoothness of system operation and the accuracy of control.
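Note: a minimal sketch, assuming nothing about the authors' scaling or data format, of the remainder-compensation idea: when a per-period command is produced by integer (fixed-point) division, the discarded remainder is carried into the next period so that rounding error does not accumulate over a long move. The S/T-curve planning itself is omitted and all names are illustrative.

    # Illustrative sketch (not the authors' algorithm): distribute a total
    # position increment over several control periods using only integer
    # arithmetic, carrying the division remainder forward each period.

    def distribute_counts(total_counts, periods):
        """Return the per-period steps; their sum always equals total_counts."""
        issued = []
        acc = 0
        for _ in range(periods):
            acc += total_counts
            step, acc = divmod(acc, periods)   # keep the remainder for next time
            issued.append(step)
        return issued

    # Example: spread 1000 encoder counts evenly over 7 servo periods.
    steps = distribute_counts(1000, 7)
    print(steps, sum(steps))   # [142, 143, 143, 143, 143, 143, 143] 1000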
Abstract:
Taking full advantage of the nonlinear tracking differentiator's ability to obtain high-quality derivative signals, this paper combines the tracking differentiator with a conventional simple fuzzy PD controller and proposes a simple, high-performance improved fuzzy PD controller. The most notable features of the improved fuzzy controller are its strong robustness to measurement noise and its ease of engineering implementation. Numerical simulations demonstrate its effectiveness and efficiency.
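Note: the nonlinear tracking differentiator mentioned above is commonly given in the discrete form sketched below (the fhan construction usually attributed to Han). This is illustrative only: the gains r and h are placeholders, and the fuzzy PD rule base the paper combines it with is omitted; the tracked signal x1 and the derivative estimate x2 would feed that controller, for example u = kp*(ref - x1) - kd*x2.

    import math

    def fhan(x1, x2, r, h):
        """Time-optimal synthesis function used by the discrete nonlinear
        tracking differentiator."""
        d = r * h
        d0 = d * h
        y = x1 + h * x2
        a0 = math.sqrt(d * d + 8.0 * r * abs(y))
        if abs(y) > d0:
            a = x2 + (a0 - d) / 2.0 * math.copysign(1.0, y)
        else:
            a = x2 + y / h
        if abs(a) <= d:
            return -r * a / d
        return -r * math.copysign(1.0, a)

    def td_step(v, x1, x2, r=100.0, h=0.01):
        """One step of the tracking differentiator: x1 tracks the (possibly
        noisy) input v, x2 approximates its derivative."""
        u = fhan(x1 - v, x2, r, h)
        return x1 + h * x2, x2 + h * u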
Abstract:
Against the background of finished-vehicle sales logistics, this paper discusses approaches to the multi-depot vehicle routing problem with time windows (VRPTW). A more complex open multi-distribution-center VRPTW model, based on detailed requirements drawn from practice, is proposed. An optimization approach in which a genetic algorithm generates partial solutions and evaluates complete solutions, together with the crossover operator MX1, is applied to the multi-depot VRPTW, achieving fast global optimization. The proposed open, mixed-delivery scheme helps raise vehicle loading rates and reduce the empty-return rate, while also optimizing the allocation of transport resources and improving vehicle utilization. Computer simulation experiments demonstrate the feasibility of the algorithm.
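Note: to make the "evaluating complete solutions" step concrete, here is a hedged sketch (not the paper's model) of costing one open route in a multi-depot VRPTW, i.e. a route that leaves a depot and is not required to return; the distance matrix, the soft time-window penalty and all names are assumptions.

    # Illustrative sketch of evaluating one open route: the vehicle leaves its
    # depot, visits customers in order, waits if it arrives before a window
    # opens, and pays a penalty if it arrives after the window closes.

    def route_cost(depot, customers, dist, windows, service, speed=1.0,
                   late_penalty=100.0):
        """dist[a][b]: travel distance, windows[c] = (earliest, latest),
        service[c]: service duration at customer c."""
        t = 0.0
        cost = 0.0
        prev = depot
        for c in customers:
            leg = dist[prev][c]
            cost += leg
            t += leg / speed
            earliest, latest = windows[c]
            if t < earliest:          # wait until the window opens
                t = earliest
            elif t > latest:          # soft violation: add a penalty
                cost += late_penalty * (t - latest)
            t += service[c]
            prev = c
        return cost                   # open route: no return leg to the depot

    # Example with two customers reached from depot "D":
    dist = {"D": {"a": 4, "b": 9}, "a": {"b": 3}, "b": {}}
    print(route_cost("D", ["a", "b"], dist,
                     windows={"a": (0, 10), "b": (0, 6)},
                     service={"a": 1, "b": 1}))   # 7 travel + 200 lateness = 207.0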
Abstract:
To carry out high-precision, "integrated" three-dimensional work tailored to the characteristics of the secondary seismic exploration of the Biyang Depression, scientific research was combined with production during implementation, and a set of high-precision seismic acquisition, processing and interpretation technologies suited to mature exploration areas in the east was summarized, achieving the following results. 1. A series of high-precision 3D seismic exploration technologies for the shallow fault-block groups of the Biyang Depression. To highlight the shallow seismic signal, goal-oriented observation-system design, small-bin receiving, and a range of processing technologies that preserve shallow signals were applied; in interpretation, 3D visualization combined with coherence analysis supported detailed full-volume interpretation, identifying the distribution of small faults of about 10 m within 50-100 m below the unconformity surface and its formations, and improving the recognition of small fault blocks and stratigraphic unconformity traps. 2. A series of high-precision 3D seismic exploration technologies for the low signal-to-noise-ratio data of the deep Biyang Depression. Using forward modeling and illumination analysis, wide-angle, wide-coverage observation-system design, multiple suppression and enhancement of deep reflection energy in processing, and detailed interpretation with comprehensive reservoir description, a number of traps of different types were identified. 3. A series of high-precision 3D seismic exploration technologies for the steep structures of the southern Biyang Depression. High-precision processing technologies, including methods based on seismic-wave scattering theory and pre-stack time and depth migration imaging on high-precision velocity models, provided a wealth of information for identifying local structures on the southern steep slope and for predicting and analyzing sand bodies and bedrock-surface patterns.
Abstract:
The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular for relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented: a progression of ever more powerful languages is described, complete implementations are given, and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
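Note: a tiny illustration (not the dissertation's language) of the multidirectional use of a single relationship described above: an adder constraint over three "wires" deduces whichever value is missing once the other two arrive. All class and function names are invented for this sketch.

    # Tiny sketch of local constraint propagation: wires hold values, devices
    # deduce a missing value from locally available ones and place it on the
    # remaining wire, which wakes the devices attached to that wire.

    class Wire:
        def __init__(self, name):
            self.name, self.value, self.devices = name, None, []
        def set(self, value):
            if self.value is None:
                self.value = value
                for d in self.devices:      # propagate to attached devices
                    d.react()

    class Adder:
        """Constraint a + b = c, usable in any direction."""
        def __init__(self, a, b, c):
            self.a, self.b, self.c = a, b, c
            for w in (a, b, c):
                w.devices.append(self)
        def react(self):
            a, b, c = self.a.value, self.b.value, self.c.value
            if a is not None and b is not None and c is None:
                self.c.set(a + b)
            elif a is not None and c is not None and b is None:
                self.b.set(c - a)
            elif b is not None and c is not None and a is None:
                self.a.set(c - b)

    x, y, z = Wire("x"), Wire("y"), Wire("z")
    Adder(x, y, z)
    z.set(10); x.set(3)
    print(y.value)   # 7, deduced by using the same relationship "backwards"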
Abstract:
Act2 is a highly concurrent programming language designed to exploit the processing power available from parallel computer architectures. The language supports advanced concepts in software engineering, providing high-level constructs suitable for implementing artificial-intelligence applications. Act2 is based on the Actor model of computation, consisting of virtual computational agents which communicate by message-passing. Act2 serves as a framework in which to integrate an actor language, a description and reasoning system, and a problem-solving and resource-management system. This document describes issues in Act2's design and the implementation of an interpreter for the language.
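Note: a toy sketch of the Actor idea the abstract builds on (actors with mailboxes exchanging messages under a simple scheduler), not Act2 itself; every name here is invented for illustration.

    # Toy actor sketch: each actor has a mailbox and a behaviour; sending a
    # message only enqueues it, and a trivial scheduler delivers messages.

    from collections import deque

    class Actor:
        def __init__(self, behaviour):
            self.mailbox = deque()
            self.behaviour = behaviour
        def send(self, msg):
            self.mailbox.append(msg)

    def run(actors):
        """Deliver pending messages until every mailbox is empty."""
        while any(a.mailbox for a in actors):
            for a in actors:
                if a.mailbox:
                    a.behaviour(a, a.mailbox.popleft())

    def counter_behaviour(self, msg):
        self.total = getattr(self, "total", 0) + msg

    c = Actor(counter_behaviour)
    c.send(2); c.send(3)
    run([c])
    print(c.total)   # 5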
Abstract:
The motion planning problem is of central importance to the fields of robotics, spatial planning, and automated design. In robotics we are interested in the automatic synthesis of robot motions, given high-level specifications of tasks and geometric models of the robot and obstacles. The Mover's problem is to find a continuous, collision-free path for a moving object through an environment containing obstacles. We present an implemented algorithm for the classical formulation of the three-dimensional Mover's problem: given an arbitrary rigid polyhedral moving object P with three translational and three rotational degrees of freedom, find a continuous, collision-free path taking P from some initial configuration to a desired goal configuration. This thesis describes the first known implementation of a complete algorithm (at a given resolution) for the full six-degree-of-freedom Mover's problem. The algorithm transforms the six-degree-of-freedom planning problem into a point navigation problem in a six-dimensional configuration space (called C-Space). The C-Space obstacles, which characterize the physically unachievable configurations, are directly represented by six-dimensional manifolds whose boundaries are five-dimensional C-surfaces. By characterizing these surfaces and their intersections, collision-free paths may be found by the closure of three operators which (i) slide along 5-dimensional intersections of level C-Space obstacles; (ii) slide along 1- to 4-dimensional intersections of level C-surfaces; and (iii) jump between 6-dimensional obstacles. Implementing the point navigation operators requires solving fundamental representational and algorithmic questions: we derive new structural properties of the C-Space constraints and show how to construct and represent C-surfaces and their intersection manifolds. A definition and new theoretical results are presented for a six-dimensional C-Space extension of the generalized Voronoi diagram, called the C-Voronoi diagram, whose structure we relate to the C-surface intersection manifolds. The representations and algorithms we develop impact many geometric planning problems, and extend to Cartesian manipulators with six degrees of freedom.
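Note: a toy stand-in, not the dissertation's C-surface algorithm: once planning is recast as navigating a point through configuration space, any point-search method applies. The sketch below runs breadth-first search on a coarse 2-D C-space grid; the grid, start and goal are chosen arbitrarily.

    # Toy point-navigation sketch in a discretized configuration space:
    # cells marked True are C-space obstacles (colliding configurations).

    from collections import deque

    def plan(cspace, start, goal):
        """Return a list of free grid cells from start to goal, or None."""
        prev = {start: None}
        frontier = deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < len(cspace) and 0 <= nc < len(cspace[0])
                        and not cspace[nr][nc] and (nr, nc) not in prev):
                    prev[(nr, nc)] = cell
                    frontier.append((nr, nc))
        return None

    cspace = [[False] * 5 for _ in range(5)]
    cspace[2][1] = cspace[2][2] = cspace[2][3] = True   # a wall of C-space obstacles
    print(plan(cspace, (0, 0), (4, 4)))                 # a path around the wall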
Abstract:
Coding for Success was published in 2007 and described how bar coding and similar technologies can be used to improve patient safety, reduce costs and improve efficiency. This review aims to outline progress made since 2007, and was recommended by the Health Select Committee in its 2009 report on Patient Safety.
Abstract:
Malicious software (malware) has significantly increased in number and effectiveness over the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques are used to ensure that signature-based anti-malware systems are not able to detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have also become very popular in commercial applications. However, attackers are now knowledgeable about these systems and have started preparing their countermeasures. This has led to an arms race between attackers and developers. Novel systems are progressively built to tackle attacks that become more and more sophisticated. For this reason, developers increasingly need to anticipate the attackers' moves. This means that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed in a number of case studies. To do so, I adopted a global methodology that can be divided into two steps. First, understanding the vulnerabilities of current state-of-the-art systems (this anticipates the attackers' moves). Then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware. The idea is to show that a proactive approach can be applied to both the x86 and the mobile world. The contributions provided in these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors. Then, I propose possible solutions with which the robustness of such detectors against known and novel attacks can be increased. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can be easily employed without particular knowledge of the targeted systems. Then, I examine a possible strategy to build a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology to build a powerful mobile fingerprinting system, and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (with the cooperation of the researchers at PRALab and Ruhr-Universität Bochum) various systems: a library to perform optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to developing Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained by using the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks. This suggests that a proactive approach is crucial to building systems that provide concrete security against general and evasion attacks.
Abstract:
Thomas, R. & Urquhart, C. NHS Wales e-library portal evaluation (for the Informing Healthcare Strategy implementation programme). Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Follow-on to the NHS Wales User Needs study. Sponsorship: Informing Healthcare, NHS Wales.