820 results for Computer-Aided Engineering
Abstract:
We present CHURNs, a method for providing freshness and authentication assurances to human users. In computer-to-computer protocols, it has long been accepted that assurances of freshness, such as random nonces, are required to prevent replay attacks. Typically, no such assurance of freshness is presented to the human in a human-and-computer protocol. A Computer–HUman Recognisable Nonce (CHURN) is a computer-aided random sequence over which the human has a measure of control and input. Our approach overcomes limitations such as the observation that ‘humans cannot do random’ and the tendency of humans to follow the easiest path. Our findings show that CHURNs are significantly more random than values produced by unaided humans; that humans may be used as a second source of randomness, with measurements of how much randomness can be gained from humans using our approach; and that our CHURN generator makes the user feel more in control, removing the need for complete trust in devices and underlying protocols. We give an example of how a CHURN may be used to provide assurances of freshness and authentication for humans in a widely used protocol.
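For intuition only, a minimal sketch of the general idea of mixing human input with device randomness to form a nonce; the hashing scheme and function names below are assumptions for illustration, not the authors' CHURN construction:

    import hashlib
    import os

    def churn_like_nonce(human_input: str, n_bytes: int = 16) -> str:
        # Mix user-supplied input (the human's measure of control) with
        # device randomness, so neither source needs to be trusted completely.
        device_entropy = os.urandom(32)
        digest = hashlib.sha256(device_entropy + human_input.encode()).digest()
        return digest[:n_bytes].hex()

    print(churn_like_nonce("tap-tap-pause-tap"))  # fresh value on every call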
Abstract:
This thesis addressed issues that have prevented qualitative researchers from using thematic discovery algorithms. The central hypothesis was that allowing qualitative researchers to interact with thematic discovery algorithms and to incorporate domain knowledge would improve their ability to address research questions and their trust in the derived themes. Non-negative Matrix Factorisation and Latent Dirichlet Allocation find latent themes within document collections, but these algorithms are rarely used because qualitative researchers do not trust, and cannot interact with, the themes that are automatically generated. The research determined the types of interactivity that qualitative researchers require and then evaluated interactive algorithms that matched these requirements. Theoretical contributions included the articulation of design guidelines for interactive thematic discovery algorithms, and the development of an Evaluation Model and a Conceptual Framework for Interactive Content Analysis.
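As background, a minimal non-interactive run of one of the algorithms named above (Non-negative Matrix Factorisation, via scikit-learn) shows the kind of automatically derived themes the thesis makes interactive; the corpus and parameters are illustrative only:

    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["patients reported anxiety about treatment",
            "treatment costs worried many patients",
            "nurses described staffing and workload pressure",
            "workload and staffing dominated nurse interviews"]

    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    nmf = NMF(n_components=2, random_state=0).fit(X)

    terms = vec.get_feature_names_out()
    for k, weights in enumerate(nmf.components_):
        top = [terms[i] for i in weights.argsort()[-3:][::-1]]
        print(f"theme {k}: {top}")   # top-weighted terms per latent theme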
Abstract:
The standard method for deciding bit-vector constraints is eager reduction to propositional logic, usually after first applying powerful rewrite techniques. While often efficient in practice, this method does not scale on problems for which top-level rewrites cannot reduce the problem size sufficiently. A lazy solver can target such problems by performing many satisfiability checks, each of which reasons about only a small subset of the problem. In addition, the lazy approach enables a wide range of optimization techniques that are not available to the eager approach. In this paper we describe the architecture and features of our lazy solver (LBV). We provide a comparative analysis of the eager and lazy approaches, and show that they are complementary in the types of problems they can efficiently solve. For this reason, we propose a portfolio approach that runs a lazy and an eager solver in parallel. Our empirical evaluation shows that the lazy solver can solve problems that none of the eager solvers can, and that the portfolio solver outperforms other solvers in both the total number of problems solved and the time taken to solve them.
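The portfolio idea can be sketched generically: run both back ends side by side and take the first answer. The two solve functions below are hypothetical placeholders, not LBV's actual interface:

    from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

    def eager_solve(problem):
        # placeholder: rewrite, bit-blast to propositional logic, solve (hypothetical)
        return "sat"

    def lazy_solve(problem):
        # placeholder: many small satisfiability checks on sub-problems (hypothetical)
        return "sat"

    def portfolio_solve(problem):
        # run the eager and lazy solvers in parallel; the first to finish
        # decides the instance
        with ThreadPoolExecutor(max_workers=2) as pool:
            futures = [pool.submit(s, problem) for s in (eager_solve, lazy_solve)]
            done, _ = wait(futures, return_when=FIRST_COMPLETED)
            return next(iter(done)).result()

    print(portfolio_solve("bit-vector-benchmark"))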
Abstract:
When verifying or reverse-engineering digital circuits, one often wants to identify and understand small components in a larger system. A possible approach is to show that the sub-circuit under investigation is functionally equivalent to a reference implementation. In many cases this task is difficult, as one may not have full information about the mapping between the inputs and outputs of the two circuits, or because the equivalence depends on the settings of control inputs. We propose a template-based approach that automates this process. It extracts a functional description for a low-level combinational circuit by showing it to be equivalent to a reference implementation, while synthesizing an appropriate mapping of input and output signals and a setting of the control signals. The method relies on solving an exists/forall problem using an SMT solver, and on a pruning technique based on signature computation.
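The exists/forall core can be illustrated with Z3's Python API: the solver synthesizes a control setting under which a candidate circuit matches the reference for all data inputs. The tiny circuits here (XOR with a control word versus bitwise NOT) are illustrative, not from the paper:

    from z3 import BitVec, ForAll, Solver, sat

    x = BitVec("x", 8)        # data input: universally quantified
    ctrl = BitVec("ctrl", 8)  # control input: existentially chosen

    # exists ctrl . forall x . candidate(x, ctrl) == reference(x)
    s = Solver()
    s.add(ForAll([x], x ^ ctrl == ~x))

    if s.check() == sat:
        print(s.model()[ctrl])  # expect 0xff: XOR with all-ones implements NOT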
Abstract:
Diabetic macular edema (DME) is one of the most common causes of visual loss among diabetes mellitus patients. Early detection and subsequent treatment may improve visual acuity. DME is mainly graded into non-clinically significant macular edema (NCSME) and clinically significant macular edema (CSME) according to the location of hard exudates in the macular region. DME can be identified by manual examination of fundus images, but this is laborious and resource-intensive. Hence, in this work, automated grading of DME is proposed using higher-order spectra (HOS) of Radon transform projections of the fundus images. We use third-order cumulants and bispectrum magnitude as features and compare their performance; both can capture subtle changes in the fundus image. Spectral regression discriminant analysis (SRDA) reduces the feature dimension, and the minimum redundancy maximum relevance method is used to rank the significant SRDA components. Ranked features are fed to various supervised classifiers, viz. naive Bayes, AdaBoost and support vector machine, to discriminate the No DME, NCSME and CSME classes. The performance of our system is evaluated using the publicly available MESSIDOR dataset (300 images) and also verified with a local dataset (300 images). Our results show that HOS cumulants and bispectrum magnitude obtained average accuracies of 95.56% and 94.39%, respectively, on the MESSIDOR dataset, and 95.93% and 93.33% on the local dataset.
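A schematic of the pipeline in scikit-image/scikit-learn terms; simple projection moments stand in for the HOS features, and random arrays stand in for fundus images, so this conveys only the structure of the method:

    import numpy as np
    from skimage.transform import radon
    from sklearn.svm import SVC

    def projection_features(img, angles=np.arange(0.0, 180.0, 15.0)):
        # Radon-transform projections; the paper computes higher-order
        # spectra (third-order cumulants / bispectrum) on these, for which
        # low-order moments are a placeholder here.
        sinogram = radon(img, theta=angles, circle=False)
        return np.concatenate([sinogram.mean(axis=0), sinogram.var(axis=0)])

    rng = np.random.default_rng(0)
    X = np.stack([projection_features(rng.random((64, 64))) for _ in range(30)])
    y = rng.integers(0, 3, 30)    # 0: No DME, 1: NCSME, 2: CSME (dummy labels)
    clf = SVC().fit(X, y)         # one of the supervised classifiers compared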
Abstract:
Biological sequences are an important part of global patenting, with unique challenges for their effective and equitable use in practice and in policy. Because their function can only be determined with computer-aided technology, the form in which sequences are disclosed matters greatly. Similarly, the scope of patent rights sought and granted requires computer-readable data and tools for comparison. Critically, the primary data provided to national patent offices, and thence to the public, must be comprehensive, standardized, timely and meaningful. It is not yet. The proposed global Patent Sequence (PatSeq) Data platform can enable national and regional jurisdictions to meet the desired standards.
Abstract:
The clutter-rejection properties of compact f.s.k. bursts with amplitude modulation are investigated. A procedure for the computer-aided design of such signals is given. The loss in clutter performance when the individual pulse amplitudes are constrained to be equal is evaluated.
Abstract:
The transmitted signal is assumed to consist of a close succession of rectangular pulses of equal width. A matched-filter scheme is employed, and a theory is developed for the computer-aided optimization of the envelope of monotone compact signals for maximum rejection of dense clutter of any given distribution in range. Specific results are presented and indeterminate cases are discussed.
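The flavour of such an optimization can be conveyed by a small numerical stand-in: choose pulse amplitudes minimizing autocorrelation sidelobe energy, a crude proxy for the matched-filter response to dense clutter, at fixed total energy. This is a generic sketch, not the paper's formulation:

    import numpy as np
    from scipy.optimize import minimize

    def sidelobe_energy(a):
        # total autocorrelation energy minus the mainlobe term
        r = np.correlate(a, a, mode="full")
        return float(np.sum(r**2) - r[len(r) // 2] ** 2)

    n = 8
    res = minimize(sidelobe_energy, x0=np.ones(n),
                   constraints={"type": "eq", "fun": lambda a: a @ a - n})
    print(np.round(res.x, 3))   # optimized envelope at fixed signal energy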
Abstract:
The application of computer-aided inspection, integrated with coordinate measuring machines and laser scanners, to inspect manufactured aircraft parts using robust registration of two point datasets is a subject of active research in computational metrology. This paper presents a novel approach to automated inspection by matching shapes based on a modified iterative closest point (ICP) method to define a criterion for the acceptance or rejection of a part. This procedure improves upon existing methods by eliminating the need to construct either a tessellated or smooth representation of the inspected part, and the need for a priori knowledge of approximate registration and correspondence between the points representing the computer-aided design dataset and the part to be inspected. In addition, the procedure establishes a better measure of error between the two matched datasets, and localized region-based triangulation is proposed for tracking this error. The approach improves the convergence of the ICP technique with a dramatic decrease in computational effort. Experimental results obtained by implementing the proposed approach on both synthetic and practical data show that the method is efficient and robust, validating the algorithm; the examples demonstrate its potential for use in engineering applications.
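For reference, a bare-bones ICP loop (nearest neighbours via a k-d tree, rigid update via SVD); the paper's modifications, including the acceptance criterion and the localized region-based triangulation for error tracking, are not reproduced here:

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(P, Q):
        # least-squares rotation R and translation t mapping P onto Q (SVD)
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cQ - R @ cP

    def icp(measured, cad_points, iters=50):
        tree = cKDTree(cad_points)     # CAD dataset queried for closest points
        P = measured.copy()
        for _ in range(iters):
            _, idx = tree.query(P)     # correspondence: nearest CAD point
            R, t = best_rigid_transform(P, cad_points[idx])
            P = P @ R.T + t
        return P   # residual distances then drive acceptance or rejection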
Abstract:
Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer-aided design applications. Leakage currents, which depend on process parameters, supply voltage and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge, this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can efficiently work with these ANN models. Results show that the cumulative distribution function of the leakage current of ISCAS'85 circuits can be predicted accurately, with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2% respectively across a range of voltage and temperature values.
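The abstract does not name the new activation function, but the underlying trick can be demonstrated with an exponential unit, for which the mean under a Gaussian input is available in closed form (a lognormal mean); the parameter values are arbitrary:

    import numpy as np

    # For a unit y = exp(w*x + b) with x ~ N(mu, sigma^2), y is lognormal,
    # so E[y] = exp(w*mu + b + 0.5 * (w*sigma)^2) exactly.
    mu, sigma, w, b = 0.3, 0.1, 2.0, -1.0
    analytic_mean = np.exp(w * mu + b + 0.5 * (w * sigma) ** 2)

    rng = np.random.default_rng(0)
    mc_mean = np.exp(w * rng.normal(mu, sigma, 1_000_000) + b).mean()
    print(analytic_mean, mc_mean)   # Monte Carlo agrees with the closed form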
Abstract:
For the first time, the impact of energy quantisation in the single-electron transistor (SET) island on the performance of hybrid complementary metal oxide semiconductor (CMOS)-SET transistor circuits has been studied. It is shown through simple analytical models that energy quantisation primarily increases the Coulomb blockade area and the Coulomb blockade oscillation periodicity of the SET device, and thus influences the performance of hybrid CMOS-SET circuits. A novel computer-aided design (CAD) framework has been developed for hybrid CMOS-SET co-simulation, which uses a Monte Carlo (MC) simulator for the SET devices along with conventional SPICE for the metal oxide semiconductor devices. Using this co-simulation framework, the effects of energy quantisation have been studied for several hybrid circuits, namely SETMOS, a multiband voltage filter and multiple-valued logic circuits. Although energy quantisation severely degrades the performance of the hybrid circuits, it is shown that this degradation can be compensated by properly tuning the bias current of the current-biased SET devices within the hybrid CMOS-SET circuits. Although this study is primarily conducted through exhaustive MC simulation, effort has also been made to develop a first-order compact model for the SET that includes energy quantisation effects. Finally, it is demonstrated that one can predict SET behaviour under energy quantisation with reasonable accuracy by slightly modifying the existing SET compact models that are valid for metallic devices having continuous energy states.
Abstract:
A computer code is developed, as part of an ongoing project on computer-aided process modelling of forging operations, to simulate heat transfer in a die-billet system. The code, developed on a stage-by-stage technique, is based on an Alternating Direction Implicit (ADI) scheme. The experimentally validated code is used to study the effect of process specifics such as preheat die temperature, machine ascent time, rate of deformation and dwell time on the thermal characteristics in a batch coining operation, where deformation is restricted to the surface level only.
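A minimal Peaceman-Rachford ADI step for two-dimensional conduction on a uniform square grid with fixed boundary temperatures; the stage-by-stage die-billet treatment and the process specifics above are beyond this sketch:

    import numpy as np
    from scipy.linalg import solve_banded

    def adi_step(T, alpha, dt, h):
        # one ADI step for dT/dt = alpha * (T_xx + T_yy); T must be square,
        # with Dirichlet boundaries held in its first/last rows and columns
        r = alpha * dt / (2.0 * h * h)
        n = T.shape[0] - 2                  # interior nodes per line
        ab = np.zeros((3, n))               # banded form of (I - r * delta2)
        ab[0, 1:] = -r
        ab[1, :] = 1.0 + 2.0 * r
        ab[2, :-1] = -r

        U = T.copy()                        # half-step 1: implicit in x
        for j in range(1, T.shape[1] - 1):
            rhs = T[1:-1, j] + r * (T[1:-1, j-1] - 2*T[1:-1, j] + T[1:-1, j+1])
            rhs[0] += r * T[0, j]
            rhs[-1] += r * T[-1, j]
            U[1:-1, j] = solve_banded((1, 1), ab, rhs)

        V = U.copy()                        # half-step 2: implicit in y
        for i in range(1, T.shape[0] - 1):
            rhs = U[i, 1:-1] + r * (U[i-1, 1:-1] - 2*U[i, 1:-1] + U[i+1, 1:-1])
            rhs[0] += r * U[i, 0]
            rhs[-1] += r * U[i, -1]
            V[i, 1:-1] = solve_banded((1, 1), ab, rhs)
        return V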
Abstract:
This paper presents an algorithm for generating the Interior Medial Axis Transform (iMAT) of 3D objects with free-form boundaries. The algorithm proposed uses the exact representation of the part and generates an approximate rational spline description of the iMAT. The algorithm generates the iMAT by a tracing technique that marches along the object's boundary. The level of approximation is controlled by the choice of the step size in the tracing procedure. Criteria based on distance and local curvature of boundary entities are used to identify the junction points and the search for these junction points is done in an efficient way. The algorithm works for multiply-connected objects as well. Results of the implementation are provided.
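For intuition, the discrete two-dimensional analogue is readily computed with scikit-image; the paper's algorithm, by contrast, traces an exact rational-spline iMAT of three-dimensional free-form boundaries:

    import numpy as np
    from skimage.morphology import medial_axis

    img = np.zeros((64, 64), dtype=bool)
    img[16:48, 8:56] = True                 # a simple rectangular "object"
    skel, dist = medial_axis(img, return_distance=True)
    print(skel.sum(), dist.max())           # axis pixels; max inscribed radius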
Abstract:
A new and efficient approach to constructing the 3D wire-frame of an object from its orthographic projections is described. Two or more input projections can be used, and they may include regular and complete auxiliary views. Each view may contain linear, circular and other conic sections. The output is a 3D wire-frame that is consistent with the input views. The approach can handle auxiliary views containing curved edges. This generality derives from a new technique that constructs 3D vertices from the input 2D vertices (as opposed to the coordinate matching prevalent in current art): 3D vertices are constructed by projecting the 2D vertices in a pair of views onto the common line of the two views. The construction of 3D edges also does not require the addition of silhouette and tangential vertices or the subsequent splitting of edges in the views; the concepts of complete edges and n-tuples are introduced to obviate this need. Entities corresponding to a 3D edge in each view are first identified, and the 3D edges are then constructed from the information available with the matching 2D edges. This allows the algorithm to handle conic sections that are not parallel to any of the viewing directions. This localization of effort in constructing 3D edges is the source of the algorithm's efficiency, as it does not process all potential 3D edges. The working of the algorithm on typical drawings is illustrated.
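For contrast, the coordinate-matching baseline described above as prevalent in current art is easy to sketch for a front view (x, z) and a top view (x, y); the paper's view-pair projection technique replaces exactly this step:

    from itertools import product

    def candidate_vertices(front, top, tol=1e-6):
        # pair 2D vertices that share an x coordinate to propose 3D vertices
        return [(xf, yt, zf)
                for (xf, zf), (xt, yt) in product(front, top)
                if abs(xf - xt) <= tol]

    front = [(0, 0), (0, 1), (2, 0), (2, 1)]   # (x, z) front-view vertices
    top = [(0, 0), (0, 3), (2, 0), (2, 3)]     # (x, y) top-view vertices
    print(candidate_vertices(front, top))      # 8 candidate corners of a box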
Abstract:
A reliable method for estimating the service life of structural elements is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration both the fuzzy and the random uncertainties associated with the variables involved, using a hybrid method that combines the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine bounds for the characteristic value of the failure probability from the resulting fuzzy set with minimal computational effort. Using the methodology, the bounds for the characteristic value of the failure probability of a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and for making decisions regarding in-service inspections.
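A hedged sketch of the hybrid vertex/Monte Carlo idea with one triangular fuzzy variable: each alpha-cut's interval endpoints (the vertices) are propagated through a Monte Carlo failure-probability estimate, yielding bounds on Pf per alpha level. The limit state and distributions are toy stand-ins, not the paper's corrosion model:

    import numpy as np

    rng = np.random.default_rng(0)

    def pf(load_factor, n=100_000):
        # toy limit state: failure when the scaled load exceeds resistance
        resistance = rng.normal(1.0, 0.15, n)
        load = load_factor * rng.lognormal(0.0, 0.2, n)
        return np.mean(load > resistance)

    lo, mode, hi = 0.5, 0.8, 1.2           # triangular fuzzy load factor
    for alpha in (0.0, 0.5, 1.0):
        a = lo + alpha * (mode - lo)       # alpha-cut interval endpoints
        b = hi - alpha * (hi - mode)
        print(alpha, sorted([pf(a), pf(b)]))   # [lower, upper] bound on Pf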