964 results for Machine-tools
Abstract:
Managers in technology-intensive businesses need to make decisions in complex and dynamic environments. Many tools, frameworks and processes have been developed to support managers in these situations, leading to a proliferation of such approaches, with little consistency in terminology or theoretical foundation, and a lack of understanding of how such tools can be linked together to tackle management challenges in an integrated way. As a step towards addressing these issues, this paper proposes the concept of an integrated 'toolkit', incorporating generalized forms of three core technology management tools that support strategic planning (roadmapping, portfolio analysis and linked analysis grids). © 2006 World Scientific Publishing Company.
Abstract:
Two-dimensional photonic crystals operating in the near-infrared region were fabricated by the focused ion beam (FIB) method and by electron-beam lithography (EBL) combined with dry etching. Both methods can fabricate perfect crystals; the FIB method is simple, while the other is more complicated. It is shown that the material with the photonic crystal fabricated by FIB has no fluorescence; on the other hand, the small-lattice photonic crystal made by EBL combined with dry etching can enhance the extraction efficiency twofold, even though the photonic crystal has some disorder. The mechanisms of the enhanced emission and of the absence of emission are also discussed.
Abstract:
This paper discusses an algorithm for the distance from a point to an infinite subspace in high-dimensional space. With the development of Information Geometry [1], analysis tools for point distributions in high-dimensional space, as a measure of calculability, have drawn increasing attention from experts in pattern recognition. With the assistance of these tools, the geometrical properties of sets of samples in high-dimensional structures are studied, under the guidance of established properties and theorems of high-dimensional geometry.
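For reference, the standard Euclidean distance from a point x to the subspace spanned by the columns of a matrix A is the norm of the residual after orthogonal projection, d = ||x - A (A^T A)^(-1) A^T x||. The following minimal NumPy sketch computes it; this is a generic illustration, not the paper's own algorithm, and the dimensions are arbitrary.

import numpy as np

def distance_to_subspace(x, A):
    """Euclidean distance from point x to the column space of A."""
    # Project x onto span(A) via least squares, then measure the residual.
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.linalg.norm(x - A @ coeffs)

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))   # a 5-dimensional subspace of R^100
x = rng.standard_normal(100)
print(distance_to_subspace(x, A))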
Abstract:
We propose a new formally syntax-based method for statistical machine translation. Transductions between parse trees are transformed into a sequence-tagging problem, which is then tackled by a search-based structured prediction method. This allows us to automatically acquire translation knowledge from a parallel corpus without the need for complex linguistic parsing. The method achieves results comparable to a phrase-based method (such as Pharaoh) while using only about ten percent as many translation-table entries. Experiments show that the structured prediction approach to SMT is promising for its strong ability to combine words.
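To make the reduction to sequence tagging by search-based structured prediction concrete, here is a toy Python sketch of a left-to-right tagger with perceptron-style updates, where each tagging decision is conditioned on the previous one; the features, tag set and corpus are hypothetical, and this illustrates the general technique rather than the paper's system.

from collections import defaultdict

def features(words, i, prev_tag):
    # Minimal feature set: current word and previous tag.
    return [("w", words[i]), ("prev", prev_tag)]

def tag(words, weights, tagset):
    # Greedy left-to-right search: each decision depends on the previous tag.
    tags, prev = [], "<s>"
    for i in range(len(words)):
        scores = {t: sum(weights[(f, t)] for f in features(words, i, prev))
                  for t in tagset}
        prev = max(scores, key=scores.get)
        tags.append(prev)
    return tags

def train(corpus, tagset, epochs=5):
    weights = defaultdict(float)
    for _ in range(epochs):
        for words, gold in corpus:
            pred = tag(words, weights, tagset)
            prev_g = prev_p = "<s>"
            for i, (g, p) in enumerate(zip(gold, pred)):
                if g != p:  # perceptron update on each wrong decision
                    for f in features(words, i, prev_g):
                        weights[(f, g)] += 1.0
                    for f in features(words, i, prev_p):
                        weights[(f, p)] -= 1.0
                prev_g, prev_p = g, p
    return weights

tagset = {"DET", "NOUN", "VERB"}
corpus = [("the cat sat".split(), ["DET", "NOUN", "VERB"])]
w = train(corpus, tagset)
print(tag("the cat sat".split(), w, tagset))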
Abstract:
A wear-mechanism map of uncoated high-speed steel (HSS) tools was constructed under dry-drilling conditions for die-cast magnesium alloys. Based on SEM microanalysis of the used HSS tools, three wear mechanisms appear in the map: adhesive wear, abrasive wear and diffusion wear. The map also contains a region of minimal wear, termed the "safety zone". This wear-mechanism map is a useful reference for choosing suitable drilling parameters when drilling die-cast magnesium alloys.
Abstract:
This paper briefly reviews and discusses the principles and methods of automatically extracting watershed characteristics from grid digital elevation models (DEMs), and introduces Arc Hydro Tools, a new tool for extracting watershed characteristics developed on ArcGIS. Watershed extraction with Arc Hydro Tools comprises five steps: DEM preprocessing, determination of flow direction, generation of the flow accumulation grid, automatic generation of the stream network, and delineation of sub-watershed boundaries. Finally, an experiment was conducted with the Wujiang River basin in Guizhou Province as the study area. The results show that the accuracy of the extracted features is on the whole acceptable, but in flat terrain or in areas heavily disturbed by human activity, the extracted results deviate considerably from reality. In terms of both extraction efficiency and experimental accuracy, automatic extraction of watershed characteristics based on Arc Hydro Tools is practical and feasible.
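For orientation, the five-step workflow above can be sketched in Python with ArcGIS's Spatial Analyst (arcpy.sa), which exposes equivalents of the operations that the Arc Hydro Tools toolbar wraps; the input path and the stream-definition threshold below are hypothetical.

import arcpy
from arcpy.sa import Fill, FlowDirection, FlowAccumulation, Con, StreamLink, Watershed

arcpy.CheckOutExtension("Spatial")

dem = arcpy.Raster("wujiang_dem.tif")     # hypothetical input DEM

filled = Fill(dem)                         # 1. DEM preprocessing (fill sinks)
flow_dir = FlowDirection(filled)           # 2. flow direction (D8)
flow_acc = FlowAccumulation(flow_dir)      # 3. flow accumulation grid
streams = Con(flow_acc > 1000, 1)          # 4. stream network (threshold is an assumption)
links = StreamLink(streams, flow_dir)
basins = Watershed(flow_dir, links)        # 5. sub-watershed delineation
basins.save("subwatersheds.tif")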
Abstract:
The Jellybean Machine is a scalable MIMD concurrent processor consisting of special-purpose RISC processors loosely coupled into a low-latency network. I have developed an operating system to provide the supportive environment required to efficiently coordinate the collective power of the distributed processing elements. The system services are developed in detail, and may be of interest to other designers of fine-grained, distributed-memory processing networks.
Abstract:
The amount of computation required to solve many early vision problems is prodigious, and so it has long been thought that systems that operate in a reasonable amount of time will only become feasible when parallel systems become available. Such systems now exist in digital form, but most are large and expensive. These machines constitute an invaluable test-bed for the development of new algorithms, but they can probably not be scaled down rapidly in both physical size and cost, despite continued advances in semiconductor technology and machine architecture. Simple analog networks can perform interesting computations, as has been known for a long time. We have reached the point where it is feasible to experiment with implementation of these ideas in VLSI form, particularly if we focus on networks composed of locally interconnected passive elements, linear amplifiers, and simple nonlinear components. While there have been excursions into the development of ideas in this area since the very beginnings of work on machine vision, much work remains to be done. Progress will depend on careful attention to matching of the capabilities of simple networks to the needs of early vision. Note that this is not at all intended to be anything like a review of the field, but merely a collection of some ideas that seem to be interesting.
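As one small numerical illustration of the kind of computation such a locally interconnected network can perform, the sketch below solves a regularized smoothing problem, min over v of sum_i lam*(v_i - d_i)^2 + sum_i (v_{i+1} - v_i)^2, by Jacobi iteration in which every node updates from its immediate neighbours only, much as a resistive grid would settle; the data and constants are illustrative.

import numpy as np

def resistive_smooth(d, lam=0.5, iters=500):
    # Each node relaxes toward a weighted average of its data term and
    # its two neighbours, the discrete analogue of a resistive grid.
    v = d.copy()
    for _ in range(iters):
        v_new = v.copy()
        v_new[1:-1] = (lam * d[1:-1] + v[:-2] + v[2:]) / (lam + 2.0)
        v_new[0] = (lam * d[0] + v[1]) / (lam + 1.0)
        v_new[-1] = (lam * d[-1] + v[-2]) / (lam + 1.0)
        v = v_new
    return v

d = np.sin(np.linspace(0, 3, 64)) + 0.2 * np.random.default_rng(1).standard_normal(64)
print(resistive_smooth(d)[:5])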
Abstract:
The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 12 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user-accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms maximize both single-thread performance and overall system throughput.
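As a small aside on communication over such a 3-D mesh, the sketch below enumerates the hops of generic dimension-ordered routing between two node coordinates; this routing policy is a common textbook scheme and is not claimed to be the M-Machine's own.

def route(src, dst):
    """Hops from src to dst on a 3-D mesh, correcting x, then y, then z."""
    pos = list(src)
    hops = [tuple(pos)]
    for axis in range(3):
        step = 1 if dst[axis] > pos[axis] else -1
        while pos[axis] != dst[axis]:
            pos[axis] += step
            hops.append(tuple(pos))
    return hops

print(route((0, 0, 0), (2, 1, 3)))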
Abstract:
In this note, I propose two extensions to the Java virtual machine (or VM) to allow dynamic languages such as Dylan, Scheme and Smalltalk to be efficiently implemented on the VM. These extensions do not affect the performance of pure Java programs on the machine. The first extension allows for efficient encoding of dynamic data; the second allows for efficient encoding of language-specific computational elements.
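The note itself proposes VM-level extensions, but the first idea, efficient encoding of dynamic data, can be illustrated with a toy tagged-immediate scheme, sketched below in Python; the single low tag bit distinguishing an immediate small integer from a boxed heap reference is a common convention and purely illustrative.

TAG_INT = 1  # low bit set = immediate small integer, not a heap reference

def encode_int(n):
    return (n << 1) | TAG_INT   # immediate: no heap allocation needed

def is_int(word):
    return word & TAG_INT == TAG_INT

def decode_int(word):
    return word >> 1            # arithmetic shift restores the value

w = encode_int(-7)
assert is_int(w) and decode_int(w) == -7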
Abstract:
The development of increasingly sophisticated and powerful computers in the last few decades has frequently stimulated comparisons between them and the human brain. Such comparisons will become more earnest as computers are applied more and more to tasks formerly associated with essentially human activities and capabilities. The expectation of a coming generation of "intelligent" computers and robots with sensory, motor and even "intellectual" skills comparable in quality to (and quantitatively surpassing) our own is becoming more widespread and is, I believe, leading to a new and potentially productive analytical science of "information processing". In no field has this new approach been so precisely formulated and so thoroughly exemplified as in the field of vision. As the dominant sensory modality of man, vision is one of the major keys to our mastery of the environment, to our understanding and control of the objects which surround us. If we wish to create robots capable of performing complex manipulative tasks in a changing environment, we must surely endow them with (among other things) adequate visual powers. How can we set about designing such flexible and adaptive robots? In designing them, can we make use of our rapidly growing knowledge of the human brain, and if so, how? At the same time, can our experiences in designing artificial vision systems help us to understand how the brain analyzes visual information?
Abstract:
Malicious software (malware) has significantly increased in number and effectiveness over the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques ensure that signature-based anti-malware systems cannot detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have become very popular in commercial applications as well. However, attackers are now knowledgeable about these systems and have started preparing their countermeasures. This has led to an arms race between attackers and developers, in which novel systems are progressively built to tackle increasingly sophisticated attacks. For this reason, developers increasingly need to anticipate attackers' moves: defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed on a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (this anticipates the attacker's moves); then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware; the idea is to show that a proactive approach can be applied in both the X86 and mobile worlds. The contributions provided in these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors, and I then propose possible solutions for increasing the robustness of such detectors against known and novel attacks. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can be easily employed without particular knowledge of the targeted systems; I then examine a possible strategy for building a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology for building a powerful mobile fingerprinting system, and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (in cooperation with researchers at PRALab and Ruhr-Universität Bochum) various systems: a library for performing optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system for fingerprinting mobile devices. I also contributed to the development of Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained with the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks. This suggests that a proactive approach is crucial for building systems that provide concrete security against general and evasion attacks.
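As a minimal illustration of the gradient-based "optimal attack" idea that recurs throughout this work, the sketch below evades a linear scorer by moving a sample's feature vector against the score gradient until it is classified as benign; the classifier, step size and iteration budget are hypothetical, and real attacks must additionally respect feature constraints.

import numpy as np

def evade(x, w, b, step=0.1, iters=100):
    """Gradient-descent evasion of a linear scorer f(x) = w.x + b (> 0 = malicious)."""
    x = x.copy()
    for _ in range(iters):
        if x @ w + b <= 0:   # classified benign: evasion succeeded
            break
        x -= step * w        # the gradient of the linear score is w
    return x

rng = np.random.default_rng(0)
w = rng.standard_normal(20)
x_mal = 3 * w / np.linalg.norm(w)   # a clearly malicious sample
x_ev = evade(x_mal, w, 0.0)
print("score before:", x_mal @ w, "after:", x_ev @ w)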
Abstract:
Psychometrics is a term within the statistical literature that encompasses the development and evaluation of psychological tests and measures, an area of increasing importance within applied psychology specifically and the behavioral sciences more broadly. Confusion continues to exist regarding the fundamental tenets of psychometric evaluation and the application of the appropriate statistical tests and procedures. The purpose of this paper is to highlight the main psychometric elements that need to be considered in both the development and the evaluation of an instrument or tool used within the context of posttraumatic stress disorder (PTSD). The psychometric profile should also be considered for established tools used in screening for PTSD. A “standard” for the application and reporting of psychometric data and approaches is emphasized, the goal of which is to ensure that the key psychometric parameters are considered in relation to the selection and use of PTSD screening tools.
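The abstract names no specific statistics, but as one concrete example of a psychometric parameter that such a standard would cover, here is a minimal sketch computing Cronbach's alpha (internal consistency) for a respondents-by-items score matrix; the data are illustrative.

import numpy as np

def cronbach_alpha(scores):
    """Internal consistency of a respondents x items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

data = [[3, 4, 3, 5],
        [2, 2, 3, 2],
        [4, 5, 4, 5],
        [1, 2, 1, 2]]
print(round(cronbach_alpha(data), 3))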