84 results for simplicity
Abstract:
Many software applications extend their functionality by dynamically loading executable components into their allocated address space. Such components, exemplified by browser plugins and other software add-ons, not only enable reusability, but also promote programming simplicity, as they reside in the same address space as their host application, supporting easy sharing of complex data structures and pointers. However, such components are also often of unknown provenance and quality and may be riddled with accidental bugs or, in some cases, deliberately malicious code. Statistics show that such component failures account for a high percentage of software crashes and vulnerabilities. Enabling isolation of such fine-grained components is therefore necessary to increase the stability, security and resilience of computer programs. This thesis addresses this issue by showing how host applications can create isolation domains for individual components, while preserving the benefits of a single address space, via a new architecture for software isolation called LibVM. Towards this end, we define a specification which outlines the functional requirements for LibVM, identify the conditions under which these functional requirements can be met, define an abstract Application Programming Interface (API) that encompasses the general problem of isolating shared libraries, thus separating policy from mechanism, and prove its practicality with two concrete implementations based on hardware virtualization and system call interposition, respectively. The results demonstrate that hardware isolation minimises the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. This thesis concludes not only that it is feasible to create such isolation domains for individual components, but also that this should be a fundamental, operating-system-supported abstraction, which would lead to more stable and secure applications.
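The thesis's actual API is not reproduced in the abstract; purely as an illustration of the kind of interface such an isolation layer might expose, the following Python sketch separates policy (which entry points the host may call) from mechanism (how the domain is enforced). All names here (IsolationDomain, load_component, register_export, call) are hypothetical and are not taken from the LibVM specification.

```python
# Hypothetical sketch of an isolation-domain API in the spirit of LibVM.
# The enforcement mechanism (hardware virtualisation vs. system-call
# interposition) is hidden behind the interface, illustrating the separation
# of policy from mechanism described in the abstract.

class IsolationDomain:
    """One isolation domain hosting a single untrusted component."""

    def __init__(self, mechanism="interposition"):
        self.mechanism = mechanism      # e.g. "hw-virtualisation" or "interposition"
        self.exports = {}               # entry points the component exposes to the host

    def load_component(self, path):
        """Load the shared component into the domain (stubbed here)."""
        print(f"[{self.mechanism}] loading {path} into isolated domain")

    def register_export(self, name, func):
        """Record an entry point the host is allowed to call."""
        self.exports[name] = func

    def call(self, name, *args):
        """Invoke an exported function; a real implementation would switch
        protection domains and validate arguments at this boundary."""
        if name not in self.exports:
            raise PermissionError(f"{name} is not an exported entry point")
        return self.exports[name](*args)


# Usage sketch: the host treats the plugin like an in-process library,
# but every call crosses an explicit, policed boundary.
domain = IsolationDomain(mechanism="hw-virtualisation")
domain.load_component("./plugin.so")
domain.register_export("parse", lambda data: data.upper())
print(domain.call("parse", "untrusted input"))
```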
Abstract:
Process-aware information systems (PAISs) can be configured using a reference process model, which is typically obtained via expert interviews. Over time, however, contextual factors and system requirements may cause the operational process to start deviating from this reference model. While a reference model should ideally be updated to remain aligned with such changes, this is a costly and often neglected activity. We present a new process mining technique that automatically improves the reference model on the basis of the observed behavior as recorded in the event logs of a PAIS. We discuss how to balance the four basic quality dimensions for process mining (fitness, precision, simplicity and generalization) and a new dimension, namely the structural similarity between the reference model and the discovered model. We demonstrate the applicability of this technique using a real-life scenario from a Dutch municipality.
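The abstract does not give the concrete balancing scheme; as an illustrative assumption only, one simple way to trade off the four classic quality dimensions against the new structural-similarity dimension is a weighted score over candidate repaired models. The weights and the linear form below are placeholders, not the paper's method.

```python
# Hypothetical scoring sketch: combine fitness, precision, simplicity,
# generalization and the structural similarity to the reference model.
# Weights and the linear combination are illustrative assumptions.

def model_score(fitness, precision, simplicity, generalization, similarity,
                weights=(0.3, 0.2, 0.1, 0.1, 0.3)):
    """Each dimension is assumed normalised to [0, 1]; higher is better."""
    dims = (fitness, precision, simplicity, generalization, similarity)
    return sum(w * d for w, d in zip(weights, dims))

# A discovered model that fits the log well but drifts far from the reference
# can then be compared against a slightly less fitting but structurally closer one.
print(model_score(0.95, 0.80, 0.70, 0.75, 0.40))
print(model_score(0.88, 0.78, 0.72, 0.74, 0.90))
```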
Abstract:
A switch-mode assisted linear amplifier (SMALA) combining a linear (Class B) and a switch-mode (Class D) amplifier is presented. The usual single hysteretic-controlled half-bridge current-dumping stage is replaced by two parallel buck converter stages in a parallel, voltage-controlled topology. These operate independently: one buck converter sources current to assist the upper Class B output device, and a complementary converter sinks current to assist the lower device. This topology lends itself to a novel control approach with a dead-band at low power levels in which neither Class D amplifier assists, allowing the Class B amplifier to supply the load without interference and ensuring high fidelity. A 20 W implementation demonstrates 85% efficiency, with distortion below 0.08% measured across the full audio bandwidth at 15 W. The Class D amplifier begins assisting at 2 W; below this level, distortion was below 0.03%. Complete circuitry is given, showing the simplicity of the additional Class D amplifier and its corresponding control circuitry.
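The dead-band behaviour can be summarised, purely behaviourally, as a per-sample decision on which (if any) buck converter assists. The current threshold, the current-based formulation and the function name below are assumptions made for illustration; the paper implements this with analogue control circuitry rather than software.

```python
# Behavioural sketch of the dead-band assist decision described in the abstract.
# Inside the dead band the Class B stage drives the load alone; outside it,
# one of the two buck converters sources or sinks the bulk of the current.

def assist_mode(i_out, i_band=0.7):
    """Return which (if any) buck converter assists for a demanded output
    current i_out (amps). The 0.7 A band is a made-up illustrative value."""
    if i_out > i_band:
        return "source"   # upper buck converter assists the upper Class B device
    if i_out < -i_band:
        return "sink"     # complementary converter assists the lower device
    return "none"         # dead band: Class B alone, no switching interference

for i in (-2.0, -0.3, 0.0, 0.5, 1.8):
    print(i, assist_mode(i))
```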
Abstract:
Generating nano-sized materials of a controlled size and chemical composition is essential for the manufacturing of materials with enhanced properties on an industrial scale, as well as for research purposes such as toxicological studies. Among the generation methods for airborne nanoparticles (also known as aerosolisation methods), liquid-phase techniques have been widely applied due to their simplicity of use and high particle production rate. The use of a Collison nebulizer is one such technique, in which atomisation takes place as the liquid is drawn into the air stream and injected through nozzles toward the inner walls of the nebulizer reservoir, before the solution is dispersed. Despite the above-mentioned benefits, this method is also susceptible to various sources of impurities (Knight and Petrucci 2003; W. LaFranchi, Knight et al. 2003). Since these impurities can affect the characterization of the generated nanoparticles, it is crucial to understand and minimize their effect.
Abstract:
Stream ciphers are common cryptographic algorithms used to protect the confidentiality of frame-based communications like mobile phone conversations and Internet traffic. Stream ciphers are ideal cryptographic algorithms to encrypt these types of traffic, as they have the potential to encrypt them quickly and securely and have low error propagation. The main objective of this thesis is to determine whether structural features of keystream generators affect the security provided by stream ciphers. These structural features pertain to the state-update and output functions used in keystream generators. Using linear sequences as keystream to encrypt messages is known to be insecure. Modern keystream generators use nonlinear sequences as keystream. The nonlinearity can be introduced through a keystream generator's state-update function, output function, or both. The first contribution of this thesis relates to nonlinear sequences produced by the well-known Trivium stream cipher. Trivium is one of the stream ciphers selected in the final portfolio resulting from a multi-year European project, the ECRYPT project. Trivium's structural simplicity makes it a popular cipher to cryptanalyse, but to date there are no attacks in the public literature which are faster than exhaustive key search. Algebraic analyses are performed on the Trivium stream cipher, which uses a nonlinear state-update and linear output function to produce keystream. Two algebraic investigations are performed: an examination of the sliding property in the initialisation process, and algebraic analyses of Trivium-like stream ciphers using a combination of the algebraic techniques previously applied separately by Berbain et al. and Raddum. For certain iterations of Trivium's state-update function, we examine the sets of slid pairs, looking particularly to form chains of slid pairs. No chains exist for a small number of iterations. This has implications for the period of keystreams produced by Trivium. Secondly, using our combination of the methods of Berbain et al. and Raddum, we analysed Trivium-like ciphers and improved on previous analysis with regard to forming systems of equations for these ciphers. Using these new systems of equations, we were able to successfully recover the initial state of Bivium-A. The attack complexities for Bivium-B and Trivium were, however, worse than exhaustive key search. We also show that the selection of stages which are used as input to the output function, and the size of the registers used in the construction of the system of equations, affect the success of the attack. The second contribution of this thesis is the examination of state convergence. State convergence is an undesirable characteristic in keystream generators for stream ciphers, as it implies that the effective session key size of the stream cipher is smaller than the designers intended. We identify methods which can be used to detect state convergence. As a case study, the Mixer stream cipher, which uses nonlinear state-update and output functions to produce keystream, is analysed. Mixer is found to suffer from state convergence, as the state-update function used in its initialisation process is not one-to-one. A discussion of several other stream ciphers which are known to suffer from state convergence is given. From our analysis of these stream ciphers, three mechanisms which can cause state convergence are identified. The effect state convergence can have on stream cipher cryptanalysis is examined.
We show that state convergence can have a positive effect if the goal of the attacker is to recover the initial state of the keystream generator. The third contribution of this thesis is the examination of the distributions of bit patterns in the sequences produced by nonlinear filter generators (NLFGs) and linearly filtered nonlinear feedback shift registers. We show that the selection of stages used as input to a keystream generator's output function can affect the distribution of bit patterns in sequences produced by these keystream generators, and that the effect differs for nonlinear filter generators and linearly filtered nonlinear feedback shift registers. In the case of NLFGs, the keystream sequences produced when the output functions take inputs from consecutive register stages are less uniform than sequences produced by NLFGs whose output functions take inputs from unevenly spaced register stages. The opposite is true for keystream sequences produced by linearly filtered nonlinear feedback shift registers.
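For readers unfamiliar with the cipher analysed above, the sketch below shows Trivium's nonlinear state-update and linear output function, following the public eSTREAM specification: a 288-bit state split into three shift registers, with three AND gates providing the nonlinearity and an XOR of six state bits as the output. This is an illustration of the structure, not code from the thesis, and the bit-ordering conventions needed to match official test vectors are glossed over.

```python
# Minimal sketch of Trivium keystream generation (eSTREAM specification):
# key/IV loading, 1152 blank initialisation rounds, then keystream bits.

def trivium_keystream(key_bits, iv_bits, nbits):
    """key_bits, iv_bits: lists of 80 bits (0/1). Returns nbits of keystream."""
    s = [0] * 288
    s[0:80] = key_bits                 # s1..s80   <- key
    s[93:173] = iv_bits                # s94..s173 <- IV
    s[285] = s[286] = s[287] = 1       # s286, s287, s288 <- 1

    out = []
    for i in range(4 * 288 + nbits):   # 1152 blank rounds, then keystream
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        z = t1 ^ t2 ^ t3               # linear output function
        t1 ^= (s[90] & s[91]) ^ s[170]   # nonlinear state-update terms
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # shift the three registers, each fed by another register's feedback bit
        s[0:93]    = [t3] + s[0:92]
        s[93:177]  = [t1] + s[93:176]
        s[177:288] = [t2] + s[177:287]
        if i >= 4 * 288:
            out.append(z)
    return out

print(trivium_keystream([0] * 80, [0] * 80, 16))
```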
Abstract:
Conditions of bridges deteriorate with age due to various critical factors, including changes in loading, fatigue, environmental effects and natural events. In order to rate a network of bridges based on their structural condition, the condition of the components of a bridge and their effects on the behaviour of the bridge should be reliably estimated. In this paper, a new method for quantifying the criticality and vulnerability of the components of the railway bridges in a network will be introduced. The types of structural analyses for identifying the criticality of the components for carrying train loads will be determined. In addition, the analytical methods for identifying the vulnerability of the components to natural events with a significant probability of occurrence, such as flood, wind, earthquake and collision, will be determined. In order to keep the method practical for application to a network of thousands of railway bridges, the simplicity of the structural analysis has been taken into account. Demand-to-capacity ratios of the components at both safety and serviceability condition states, as well as the weighting factors used in current bridge management systems (BMS), are taken into consideration. It will be explained what types of information related to the structural condition of a bridge are required to be obtained, recorded and analysed. The authors will use this method in a new rating system introduced previously. Enhancing the accuracy and reliability of evaluating and predicting the vulnerability of railway bridges to environmental effects and natural events will be the significant achievement of this research.
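As a rough illustration only of how demand-to-capacity ratios and BMS-style weighting factors might be combined into a component criticality score, consider the sketch below. The aggregation rule, the weights and the alpha parameter are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch: combine demand-to-capacity ratios at two condition states
# with a component weighting factor, in the spirit of the rating described above.

def component_rating(dcr_safety, dcr_serviceability, weight, alpha=0.7):
    """dcr_*: demand-to-capacity ratios (values near or above 1 are critical);
    weight: BMS-style importance weighting; alpha balances the two states."""
    return weight * (alpha * dcr_safety + (1 - alpha) * dcr_serviceability)

# Rank the components of one bridge from most to least critical (made-up data).
components = {
    "main girder": component_rating(0.85, 0.60, weight=1.0),
    "cross beam":  component_rating(0.55, 0.70, weight=0.6),
    "bearing":     component_rating(0.95, 0.50, weight=0.8),
}
for name, score in sorted(components.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```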
Abstract:
Forward genetic screens have identified numerous genes involved in development and metabolism, and remain a cornerstone of biological research. However, to locate a causal mutation, the practice of crossing to a polymorphic background to generate a mapping population can be problematic if the mutant phenotype is difficult to recognize in the hybrid F2 progeny, or is dependent on parent-specific traits. Here, in a screen for leaf hyponasty mutants, we performed a single backcross of an ethyl methanesulfonate (EMS) generated hyponastic mutant to its parent. Whole-genome deep sequencing of a bulked homozygous F2 population and analysis via the Next Generation EMS mutation mapping pipeline (NGM) unambiguously determined the causal mutation to be a single nucleotide polymorphism (SNP) residing in HASTY, a previously characterized gene involved in microRNA biogenesis. We evaluated the feasibility of this backcross approach using three additional SNP mapping pipelines: SHOREmap, the GATK pipeline, and the samtools pipeline. Although there was variance in the identification of EMS SNPs, all returned the same outcome, clearly identifying the causal mutation in HASTY. The simplicity of performing a single parental backcross and genome sequencing a small pool of segregating mutants has great promise for identifying mutations that may be difficult to map using conventional approaches.
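None of the named pipelines (NGM, SHOREmap, GATK, samtools) is reproduced here; the snippet below only illustrates the bulked-segregant principle they all exploit after a single backcross. In a pool of phenotypically selected homozygous F2 plants, the causal EMS SNP and SNPs linked to it approach an allele frequency of 1.0, while unlinked EMS SNPs sit near 0.5. The SNP positions and read counts are made up for illustration.

```python
# Conceptual illustration of bulked-segregant SNP mapping: scan SNP allele
# frequencies in the mutant F2 pool and flag positions approaching fixation.

snps = [
    # (chromosome, position, mutant-allele reads, total reads) -- fabricated data
    ("Chr3", 1_204_511, 21, 40),
    ("Chr3", 4_882_090, 37, 39),
    ("Chr3", 4_991_305, 40, 41),   # candidate region: frequency near 1.0
    ("Chr5",   902_118, 18, 37),
]

for chrom, pos, alt, depth in snps:
    freq = alt / depth
    flag = "  <-- candidate" if freq > 0.9 else ""
    print(f"{chrom}:{pos}\tallele frequency = {freq:.2f}{flag}")
```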
Abstract:
Interest in insect small RNA viruses (SRVs) has grown slowly but steadily. A number of new viruses have been analyzed at the sequence level, adding to our knowledge of their diversity at the level of both individual virus species and families. In particular, a number of possible new virus families have emerged. This research has largely been driven by interest in their potential for pest control, as well as in their importance as causal agents of disease in beneficial arthropods. At the same time, research into known viruses has made valuable contributions to our understanding of an emerging field of central importance to molecular biology: the existence of RNA-based gene silencing, developmental control, and adaptive immune systems in eukaryotes. Subject to RNA-based adaptive immune responses in their hosts, viruses have evolved a variety of genes encoding proteins capable of suppressing the immune response. Such genes were first identified in plant viruses, but the first examples known from animal viruses were identified in insect RNA viruses. This chapter addresses the diversity of insect SRVs, as well as attempts to harness their simplicity in the engineering of transgenic plants expressing viruses for resistance to insect pests. We also describe RNA interference and antiviral pathways identified in plants and animals, how they have led viruses to evolve genes capable of suppressing such adaptive immunity, and the problems presented by these pathways for the strategy of expressing viruses in transgenic plants. Approaches for countering these problems are also discussed. © 2006 Elsevier Inc. All rights reserved.
Abstract:
Diagnostics of rolling element bearings involves a combination of different techniques of signal enhancement and analysis. The most common procedure consists of a first step of order tracking and synchronous averaging, which removes from the signal the undesired components that are synchronous with the shaft harmonics, and a final step of envelope analysis to obtain the squared envelope spectrum. This indicator has been studied thoroughly, and statistically based criteria have been obtained in order to identify damaged bearings. The statistical thresholds are valid only if all the deterministic components in the signal have been removed. Unfortunately, in various industrial applications characterized by heterogeneous vibration sources, the first step of synchronous averaging is not sufficient to eliminate the deterministic components completely, and an additional step of pre-whitening is needed before the envelope analysis. Different techniques have been proposed in the past with this aim: the most widespread are linear prediction filters and spectral kurtosis. Recently, a new technique for pre-whitening has been proposed, based on cepstral analysis: the so-called cepstrum pre-whitening. Owing to its low computational requirements and its simplicity, it seems a good candidate to perform the intermediate pre-whitening step in an automatic damage recognition algorithm. In this paper, the effectiveness of the new technique will be tested on data measured on a full-scale industrial bearing test rig, able to reproduce harsh operating conditions. A benchmark comparison with the traditional pre-whitening techniques will be made, as a final step in verifying the potential of cepstrum pre-whitening.
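A minimal sketch of the processing chain named above (cepstrum pre-whitening followed by the squared envelope spectrum) is given below. Cepstrum pre-whitening amounts to flattening the spectral magnitude while keeping the phase. The synthetic signal is only a stand-in for the test-rig measurements discussed in the paper, and the threshold-free peak read-out at the end is purely illustrative.

```python
# Sketch: cepstrum pre-whitening + squared envelope spectrum on a toy signal.

import numpy as np
from scipy.signal import hilbert

fs = 20_000                                   # sampling frequency [Hz]
t = np.arange(0, 1.0, 1 / fs)
# toy signal: a shaft harmonic, repetitive fault-like impacts, and noise
x = np.sin(2 * np.pi * 30 * t) + 0.2 * np.random.randn(t.size)
x[::200] += 2.0                               # crude impacts at 100 Hz

# cepstrum pre-whitening: keep only the spectral phase, flatten the magnitude
X = np.fft.fft(x)
x_cpw = np.real(np.fft.ifft(X / np.abs(X)))

# envelope analysis on the pre-whitened signal
envelope_sq = np.abs(hilbert(x_cpw)) ** 2
ses = np.abs(np.fft.rfft(envelope_sq - envelope_sq.mean()))
freqs = np.fft.rfftfreq(envelope_sq.size, 1 / fs)
print("dominant envelope frequency [Hz]:", freqs[np.argmax(ses[1:]) + 1])
```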
Abstract:
Efficient and effective feature detection and representation is an important consideration when processing videos, and a large number of applications such as motion analysis, 3D scene understanding and tracking depend on it. Among the many feature description methods, local features are becoming increasingly popular for representing videos because of their simplicity and efficiency. While they achieve state-of-the-art performance with low computational complexity, their performance is still too limited for real-world applications. Furthermore, the rapid increase in the uptake of mobile devices has increased the demand for algorithms that can run with reduced memory and computational requirements. In this paper we propose a semi-binary feature detector-descriptor based on the BRISK detector, which can detect and represent videos with significantly reduced computational requirements, while achieving performance comparable to state-of-the-art spatio-temporal feature descriptors. First, the BRISK feature detector is applied on a frame-by-frame basis to detect interest points; then the detected key points are compared against consecutive frames for significant motion. Key points with significant motion are encoded with the BRISK descriptor in the spatial domain and the Motion Boundary Histogram in the temporal domain. This descriptor is not only lightweight but also has lower memory requirements because of the binary nature of the BRISK descriptor, opening up the possibility of applications on hand-held devices. We evaluate the combined detector-descriptor performance in the context of action classification with a standard, popular bag-of-features with SVM framework. Experiments are carried out on two popular datasets of varying complexity, and we demonstrate performance comparable to other descriptors with reduced computational complexity.
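A rough OpenCV sketch of the frame-by-frame detection idea follows: BRISK keypoints are found per frame and kept only if the local intensity change to the next frame is large. The motion test (simple frame differencing with a threshold of 20) and the input path "video.avi" are illustrative assumptions, and the Motion Boundary Histogram encoding used in the paper for the temporal domain is omitted here.

```python
# Sketch: per-frame BRISK detection with a crude motion filter (not the paper's code).

import cv2

brisk = cv2.BRISK_create()
cap = cv2.VideoCapture("video.avi")          # hypothetical input clip

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)      # intensity change to previous frame

    keypoints = brisk.detect(gray, None)
    moving = [kp for kp in keypoints
              if diff[int(kp.pt[1]), int(kp.pt[0])] > 20]   # crude motion test
    moving, descriptors = brisk.compute(gray, moving)       # binary descriptors
    print(f"{len(moving)} moving keypoints this frame")

    prev_gray = gray
cap.release()
```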
Abstract:
The traditional structural design procedure, especially for large-scale and complex structures, is time consuming and inefficient. This is due primarily to the fact that the traditional design accounts for second-order effects indirectly, by means of design specification checks for every member instead of a system analysis of the whole structure. Consequently, complicated and tedious design procedures are necessary to consider the second-order effects at the member level in the design specification. These effects are twofold in general: 1) flexural buckling due to the P-δ effect, i.e. effective length; 2) sway due to the P-Δ effect, i.e. the magnification factor. In this study, a new system design concept based on second-order elastic analysis is presented, in which the second-order effects are taken into account directly in the system analysis, avoiding the tedious member-by-member stability checks. Plastic design on the basis of this integrated direct approach is not considered in this paper, for simplicity and clarity, as the emphasis is placed solely on the difference between the second-order elastic limit-state design and the present system design approach. A practical design example, a 57 m span dome steel skylight structure, is used to demonstrate the efficiency and effectiveness of the proposed approach. This skylight structure is also designed to the traditional design code BS5950-2000 for comparison, with emphasis placed on the aforementioned P-δ and P-Δ effects.
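For reference, the member-level corrections mentioned above are conventionally expressed through an amplification of the first-order moments; a generic textbook form (an illustration of the principle, not the specific BS5950-2000 clauses used in the paper) is

\[
M_{\text{design}} = \delta \, M_{\text{first-order}}, \qquad
\delta = \frac{1}{1 - P/P_{\text{cr}}}, \qquad
P_{\text{cr}} = \frac{\pi^{2} E I}{(K L)^{2}},
\]

where the effective length factor K addresses member flexural buckling (the P-δ effect) and the amplification factor δ accounts for sway (the P-Δ effect). The proposed system approach instead captures both effects directly in the second-order elastic analysis of the whole structure.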
Abstract:
Paint Spray is developed as a direct sampling ionisation method for mass spectrometric analysis of additives in polymer-based surface coatings. The technique simply involves applying an external high voltage (5 kV) to the wetted sample placed in front of the mass spectrometer inlet, and represents a much simpler ionisation technique compared to those currently available. The capabilities of Paint Spray are demonstrated herein with the detection of four commercially available hindered amine light stabilisers: TINUVIN® 770, TINUVIN® 292, TINUVIN® 123 and TINUVIN® 152, directly from thermoset polyester-based coil coatings. Paint Spray requires no sample preparation or pre-treatment, and its simplicity (no specialised equipment is required) makes it ideal for use by non-specialists. Paint Spray has significant potential for industrial use, as sample collection from a coil coating production line combined with Paint Spray ionisation could enable fast quality control screening at high sensitivity.
Abstract:
Most previous work on unconditionally secure multiparty computation has focused on computing over a finite field (or ring). Multiparty computation over other algebraic structures has not received much attention, but is an interesting topic whose study may provide new and improved tools for certain applications. At CRYPTO 2007, Desmedt et al. introduced a construction for a passive-secure multiparty multiplication protocol for black-box groups, reducing it to a certain graph coloring problem and leaving as an open problem the achievement of security against active attacks. We present the first n-party protocol for unconditionally secure multiparty computation over a black-box group which is secure under an active attack model, tolerating any adversary structure Δ satisfying the Q3 property (in which no union of three subsets from Δ covers the whole player set), which is known to be necessary for achieving security in the active setting. Our protocol uses Maurer's Verifiable Secret Sharing (VSS) but preserves the essential simplicity of the graph-based approach of Desmedt et al., which avoids each shareholder having to rerun the full VSS protocol after each local computation. A corollary of our result is a new active-secure protocol for general multiparty computation of an arbitrary Boolean circuit.
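The Q3 condition stated above is easy to check directly from its definition: no union of three (not necessarily distinct) subsets in the adversary structure may cover the whole player set. The small sketch below does exactly that; the example adversary structures are made up for illustration.

```python
# Sketch: brute-force check of the Q3 property for a given adversary structure.

from itertools import combinations_with_replacement

def is_q3(players, adversary_structure):
    """players: set of party identifiers; adversary_structure: iterable of sets."""
    sets = [frozenset(s) for s in adversary_structure]
    for a, b, c in combinations_with_replacement(sets, 3):
        if a | b | c == set(players):
            return False     # three corruptible subsets cover everyone: Q3 violated
    return True

players = {1, 2, 3, 4, 5}
delta_ok  = [{1}, {2, 3}, {4}, {5}]   # no three of these subsets cover {1..5}
delta_bad = [{1, 2}, {3, 4}, {5}]     # the union of these three covers all players
print(is_q3(players, delta_ok))   # True
print(is_q3(players, delta_bad))  # False
```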
Abstract:
Numerous research studies have evaluated whether distance learning is a viable alternative to traditional learning methods. These studies have generally made use of cross-sectional surveys for collecting data, comparing distance to traditional learners with intent to validate the former as a viable educational tool. Inherent fundamental differences between traditional and distance learning pedagogies, however, reduce the reliability of these comparative studies and constrain the validity of analyses resulting from this analytical approach. This article presents the results of a research project undertaken to analyze expectations and experiences of distance learners with their degree programs. Students were given surveys designed to examine factors expected to affect their overall value assessment of their distance learning program. Multivariate statistical analyses were used to analyze the correlations among variables of interest to support hypothesized relationships among them. Focusing on distance learners overcomes some of the limitations with assessments that compare off- and on-campus student experiences. Evaluation and modeling of distance learner responses on perceived value for money of the distance education they received indicate that the two most important influences are course communication requirements, which had a negative effect, and course logistical simplicity, which revealed a positive effect. Combined, these two factors accounted for approximately 47% of the variability in perceived value for money of the educational program of sampled students. A detailed focus on comparing expectations with outcomes of distance learners complements the existing literature dominated by comparative studies of distance and nondistance learners.
Abstract:
Invasion of extracellular matrices is crucial to a number of physiological and pathophysiological states, including tumor cell metastasis, arthritis, embryo implantation, wound healing, and early development. To isolate invasion from the additional complexities of these scenarios, a number of in vitro invasion assays have been developed over the years. Early studies employed intact tissues, such as denuded amniotic membrane (1) or embryonic chick heart fragments (2); more recently, however, purified matrix components or complex matrix extracts have been used to provide more uniform and often more rapid analyses (for examples, see the following integrin studies). Of course, the more holistic view of invasion offered by the earlier assays is valuable and cannot be fully reproduced in these more rapid assays, but advantages of reproducibility among replicates, ease of preparation and analysis, and overall high throughput favor the newer assays. In this chapter, we will focus on providing detailed protocols for Matrigel-based assays (Matrigel = reconstituted basement membrane; reviewed in ref. (3)). Matrigel is an extract from the transplantable Engelbreth-Holm-Swarm murine sarcoma, which deposits a multilamellar basement membrane. Matrigel is available commercially (Becton Dickinson, Bedford, MA) and can be manipulated as a liquid at 4°C into a variety of different formats. Alternatively, cell culture inserts precoated with Matrigel can be purchased for even greater simplicity.