921 results for Built-in test
Abstract:
The R statistical environment and language has demonstrated particular strengths for interactive development of statistical algorithms, as well as data modelling and visualisation. Its current implementation has an interpreter at its core, which may result in a performance penalty in comparison to directly executing user algorithms in the native machine code of the host CPU. In contrast, the C++ language has no built-in visualisation capabilities, handling of linear algebra or even basic statistical algorithms; however, user programs are converted to high-performance machine code ahead of execution. A new method avoids possible speed penalties in R by using the Rcpp extension package in conjunction with the Armadillo C++ matrix library. In addition to the inherent performance advantages of compiled code, Armadillo provides an easy-to-use, template-based meta-programming framework, allowing the automatic pooling of several linear algebra operations into one, which in turn can lead to further speedups. With the aid of Rcpp and Armadillo, conversion of linear algebra-centred algorithms from R to C++ becomes straightforward. The algorithms retain their overall structure as well as readability, all while maintaining a bidirectional link with the host R environment. Empirical timing comparisons of R and C++ implementations of a Kalman filtering algorithm indicate a speedup of several orders of magnitude.
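As a minimal sketch of the kind of conversion described above (assuming the Rcpp and RcppArmadillo packages are installed; the file and function names are illustrative only), an R least-squares expression maps almost line for line onto Armadillo:

```cpp
// Minimal sketch, assuming Rcpp and RcppArmadillo are installed; compiled from R
// with Rcpp::sourceCpp("fast_lm.cpp"). The function name is illustrative only.
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>

// [[Rcpp::export]]
arma::vec fast_lm_coef(const arma::mat& X, const arma::vec& y) {
    // Armadillo's template meta-programming pools the chained operations below
    // into a small number of optimised BLAS/LAPACK calls before evaluation.
    return arma::solve(X.t() * X, X.t() * y);
}
```

From R, a call such as fast_lm_coef(X, y) mirrors solve(t(X) %*% X, t(X) %*% y), while Rcpp's bidirectional interface keeps the data and results in the host R session.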
Abstract:
Twenty-first century learners operate in organic, immersive environments. A pedagogy of student-centred learning is not a recipe for rooms. A contemporary learning environment is like a landscape that grows, morphs, and responds to the pressures of the context and micro-culture. There is no single adaptable solution, nor a suite of off-the-shelf answers; propositions must be customisable and infinitely variable. They must be indeterminate and changeable, based on the creation of learning places, not restrictive or constraining spaces. A sustainable solution will be un-fixed, responsive to the life cycle of the components and materials, and able to be manipulated by the users; it will create and construct its own history. Learning occurs as formal education with situational knowledge structures, but also as informal learning, active learning, blended learning, social learning, incidental learning, and unintended learning. These are not spatial concepts but socio-cultural patterns of discovery. Individual learning requirements must run free and need to be accommodated as the learner sees fit. The spatial solution must accommodate and enable a full array of learning situations. It is a system, not an object.
Three major components:
1. The determinate landscape: an in-situ concrete 'plate' that is permanent. It predates the other components of the system and remains as a remnant/imprint/fossil after the other components have been relocated. It is a functional learning landscape in its own right, enabling a variety of experiences and activities.
2. The indeterminate landscape: a kit of pre-fabricated 2-D panels assembled in a unique manner at each site to suit the client and context, and manufactured to the principles of design-for-disassembly. A symbiotic, barnacle-like system that attaches itself to the existing infrastructure through the determinate landscape, which acts as a fast-growth rhizome. A carapace of protective panels, infinitely variable to create enclosed, semi-enclosed, and open learning places.
3. The stations: pre-fabricated packages of highly-serviced space connected through the determinate landscape. There are four main types of stations: wet-room learning centres, dry-room learning centres, ablutions, and low-impact building services. They are entirely customised at the factory and delivered to site, and can be retro-fitted to suit a new context during relocation.
Principles of design for disassembly:
Material principles • use recycled and recyclable materials • minimise the number of types of materials • no toxic materials • use lightweight materials • avoid secondary finishes • provide identification of material types
Component principles • minimise/standardise the number of types of components • use mechanical not chemical connections • design for use of common tools and equipment • provide easy access to all components • make component sizes suit the means of handling • provide built-in means of handling • design to realistic tolerances • use a minimum number of connectors and a minimum number of types
System principles • design for durability and repeated use • use prefabrication and mass production • provide spare components on site • sustain all assembly and material information
Abstract:
Securing the IT infrastructures of our modern lives is a challenging task because of their increasing complexity, scale and agile nature. Monolithic approaches, such as using stand-alone firewalls and IDS devices to protect the perimeter, cannot cope with complex malware and multi-step attacks. Collaborative security emerges as a promising approach. However, research results in collaborative security are not yet mature and require continuous evaluation and testing. In this work, we present CIDE, a Collaborative Intrusion Detection Extension for the network security simulation platform (NeSSi2). Built-in functionalities include dynamic group formation based on node preferences, group-internal communication, group management and an approach for handling the infection process for malware-based attacks. The CIDE simulation environment provides functionalities for the easy implementation of collaborating nodes in large-scale setups. We evaluate the group communication mechanism, and we provide a case study evaluating our collaborative security platform in a signature exchange scenario.
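The abstract does not detail the grouping mechanism; the toy sketch below (not the NeSSi2/CIDE API, all names invented) illustrates one simple reading of preference-based group formation, where nodes announcing the same detection interest are placed in the same collaboration group.

```cpp
// Illustrative sketch only (invented types, not NeSSi2/CIDE): group detection
// nodes by a shared preference tag, a toy analogue of dynamic group formation.
#include <map>
#include <string>
#include <vector>

struct Node { int id; std::string preference; };   // e.g. "worm", "ddos"

std::map<std::string, std::vector<int>> form_groups(const std::vector<Node>& nodes) {
    std::map<std::string, std::vector<int>> groups;
    for (const auto& n : nodes)
        groups[n.preference].push_back(n.id);      // nodes with the same interest collaborate
    return groups;
}
```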
Abstract:
A technologically innovative study was undertaken across two suburbs in Brisbane, Australia, to assess socioeconomic differences in women's use of the local environment for work, recreation, and physical activity. Mothers from high and low socioeconomic suburbs were instructed to continue with their usual daily routines, and to use mobile phone applications (Facebook Places, Twitter, and Foursquare) to ‘check in’ at each location and destination they reached during a one-week period. These smartphone applications track travel logistics via built-in geographical information systems (GIS), which record participants’ points of latitude and longitude at each destination they reach. Location data were downloaded to Google Earth and Excel for analysis. Women provided additional qualitative data via text regarding the reasons and social contexts of their travel. We analysed 2183 ‘check-ins’ for 54 women in this pilot study to gain quantitative, qualitative, and spatial data on human-environment interactions. Data were gathered on distances travelled, mode of transport, reason for travel, and social context of travel, and categorised in terms of physical activity type: walking, running, sports, gym, cycling, or playing in the park. We found that the women in both suburbs had similar daily routines, with the exception of physical activity. We identified 15% of ‘check-ins’ in the lower socioeconomic group as qualifying for the physical activity category, compared with 23% in the higher socioeconomic group. This was explained by more daily walking for transport (1.7 km to 0.2 km) and less car travel each week (28.km to 48.4 km) in the higher socioeconomic suburb. We ascertained insights regarding the socio-cultural influences on these differences via additional qualitative data. We discuss the benefits and limitations of using new technologies and Google Earth, with implications for informing future physical and social aspects of urban design and health promotion in socioeconomically diverse cities.
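The abstract does not state how distances between check-in coordinates were derived; one common choice for straight-line distance from latitude/longitude pairs is the haversine formula, sketched below purely for illustration.

```cpp
// Illustrative only: great-circle distance between two check-in coordinates.
// The study does not specify its distance method; this is the haversine formula.
#include <cmath>

double haversine_km(double lat1, double lon1, double lat2, double lon2) {
    constexpr double kPi = 3.141592653589793;
    constexpr double kEarthRadiusKm = 6371.0;
    constexpr double kDegToRad = kPi / 180.0;
    const double dlat = (lat2 - lat1) * kDegToRad;
    const double dlon = (lon2 - lon1) * kDegToRad;
    const double a = std::sin(dlat / 2) * std::sin(dlat / 2) +
                     std::cos(lat1 * kDegToRad) * std::cos(lat2 * kDegToRad) *
                     std::sin(dlon / 2) * std::sin(dlon / 2);
    return 2.0 * kEarthRadiusKm * std::asin(std::sqrt(a));   // distance in kilometres
}
```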
Abstract:
This paper looks at the accuracy of using the built-in camera of smartphones together with free software as an economical way to quantify and analyse light exposure by producing luminance maps from High Dynamic Range (HDR) images. HDR images covering a wide variation of luminance within an indoor and an outdoor scene were captured with an Apple iPhone 4S. The HDR images were then processed using Photosphere software (Ward, 2010) to produce luminance maps, in which individual pixel values were compared with calibrated luminance meter readings. This comparison shows an average luminance error of ~8% between the HDR image pixel values and the luminance meter readings, when the range of luminances in the image is limited to approximately 1,500 cd/m2.
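As a small illustration of the comparison reported above (the function and variable names are assumptions, not Photosphere output), the average luminance error can be computed as the mean relative difference between HDR pixel luminances and the corresponding meter readings, expressed as a percentage:

```cpp
// Sketch of the error metric as we read it: mean relative difference (in %)
// between HDR-derived pixel luminances and calibrated meter readings (cd/m2).
// Assumes the two vectors are the same length and paired point by point.
#include <cmath>
#include <cstddef>
#include <vector>

double mean_luminance_error_pct(const std::vector<double>& hdr_cd_m2,
                                const std::vector<double>& meter_cd_m2) {
    double sum = 0.0;
    for (std::size_t i = 0; i < hdr_cd_m2.size(); ++i)
        sum += std::fabs(hdr_cd_m2[i] - meter_cd_m2[i]) / meter_cd_m2[i];
    return 100.0 * sum / hdr_cd_m2.size();
}
```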
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than a two-step process that provides confidentiality by encrypting the message and, in a separate pass, provides integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that analyses these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. The thesis analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence that is used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure from this type of algebraic attack. We conclude that using a key-dependent SBox in the NLF twice, and using two different SBoxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated.
It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model, namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
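To make the distinction between the two accumulation styles concrete, the toy sketch below contrasts direct injection of the message into the internal state with indirect, message-controlled accumulation of keystream. The register, filter and update rule are invented for illustration; this is not the design of SSS, NLSv2, SOBER-128, ZUC, Grain-128a or Sfinks.

```cpp
// Toy illustration of the two MAC accumulation styles; everything here is an
// invented stand-in, not any real cipher's internals.
#include <cstddef>
#include <cstdint>
#include <vector>

// A toy nonlinear filter over a 64-bit register.
static uint64_t nlf(uint64_t s) { return (s ^ (s >> 13)) + (s & (s << 7)); }

// Direct injection: each message word is XORed straight into the internal state
// before a (linear) state update, and the MAC is read out through the filter.
uint64_t mac_direct(const std::vector<uint64_t>& msg, uint64_t key_state) {
    uint64_t s = key_state;
    for (uint64_t m : msg) {
        s ^= m;                          // message enters the internal state
        s = (s << 1) | (s >> 63);        // toy linear state update (rotation)
    }
    return nlf(s);
}

// Indirect injection: message bits act as a controller deciding whether each
// keystream word is accumulated into a separate accumulation register.
uint64_t mac_indirect(const std::vector<uint64_t>& msg,
                      const std::vector<uint64_t>& keystream) {
    uint64_t acc = 0;
    for (std::size_t i = 0; i < msg.size() && i < keystream.size(); ++i)
        if (msg[i] & 1)                  // controller bit selects accumulation
            acc ^= keystream[i];
    return acc;
}
```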
Abstract:
Teaching introductory programming has challenged educators over the years. Although Intelligent Tutoring Systems that teach programming have been developed to try to reduce the problem, none have been developed to teach web programming. This paper describes the design and evaluation of the PHP Intelligent Tutoring System (PHP ITS), which addresses this problem. The evaluation process showed that students who used the PHP ITS achieved a significant improvement in test scores.
Abstract:
This paper introduces a parallel implementation of an agent-based model (ABM) applied to electricity distribution grids. A fine-grained shared-memory parallel implementation is presented, detailing the way the agents are grouped and executed on a multi-threaded machine, as well as the way the model is built (in a composable manner), which aids the parallelisation. Current results show a moderate speedup of 2.6, but improvements are expected by incorporating newer distributed or parallel ABM schedulers into this implementation. While domain-specific, this parallel algorithm can be applied to similarly structured ABMs (directed acyclic graphs).
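A minimal sketch of a fine-grained shared-memory scheme of this kind is given below; the Agent type and the grouping are invented for illustration. Agents are partitioned into groups, each group is stepped on its own thread, and joining the threads acts as the barrier between model ticks.

```cpp
// Illustrative sketch only: agents partitioned into groups, each group stepped
// on its own thread, as one reading of the shared-memory scheme described above.
#include <thread>
#include <vector>

struct Agent { double state = 0.0; void step() { state += 1.0; } };  // toy agent

void step_all(std::vector<std::vector<Agent>>& groups) {
    std::vector<std::thread> workers;
    for (auto& group : groups)
        workers.emplace_back([&group] {
            for (auto& a : group) a.step();   // agents within a group run sequentially
        });
    for (auto& t : workers) t.join();         // synchronise before the next tick
}
```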
Abstract:
We introduce Kamouflage: a new architecture for building theft-resistant password managers. An attacker who steals a laptop or cell phone with a Kamouflage-based password manager is forced to carry out a considerable amount of online work before obtaining any user credentials. We implemented our proposal as a replacement for the built-in Firefox password manager, and provide performance measurements and the results from experiments with large real-world password sets to evaluate the feasibility and effectiveness of our approach. Kamouflage is well suited to become a standard architecture for password managers on mobile devices.
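The abstract does not spell out the mechanism; as a rough, heavily hedged sketch of the general decoy idea behind theft-resistant password vaults (not Kamouflage's actual construction, and all names below are invented), the real credential set can sit at a position derived from the master password among many plausible decoy sets, so that an attacker holding only the stolen vault must test candidates online to learn which set is real.

```cpp
// Heavily hedged sketch of the general decoy idea (not Kamouflage's actual
// construction; all names are invented): the vault stores many complete
// password sets, only one of which is real, and the real set's position is
// recomputed from the master password rather than stored.
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

struct DecoyVault {
    std::vector<std::vector<std::string>> sets;  // 1 real set among N-1 decoys

    std::size_t real_index(const std::string& master_password) const {
        // The owner recomputes the index; it never appears in the stored vault.
        return std::hash<std::string>{}(master_password) % sets.size();
    }
};
```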
Abstract:
Uanda house is of historical importance to Queensland both in terms of its architectural design and its social history. Uanda is a low-set, single-storey house built in 1928, located in the inner-city Brisbane suburb of Wilston. Architecturally, the house has a number of features that distinguish it from the surrounding bungalow-influenced inter-war houses. The house has been described as a Queensland-style house with neo-Georgian influences. Historically, it is associated with the entry of women into the profession of architecture in Queensland. Uanda is the only remaining intact work of architect/draftswoman Nellie McCredie and one of very few examples of works by pioneering women architects in Queensland. The house was entered into the Queensland Heritage Register in 2000, after an appeal against Brisbane City Council’s refusal of an application to demolish the house was heard in the Queensland Planning and Environment Court in 1998/1999. In the court’s report, Judge Robin QC, DCJ, stated that, “The importance of preserving women's history and heritage, often previously marginalised or lost, is now accepted at government level, recognising that role models are vital for bringing new generations of women into the professions and public life.” While acknowledging women’s contribution to the profession of architecture is an important endeavour, it also has the potential to isolate women architects as separate from a mainstream history of architecture. As Julie Willis writes, it can imply an atypical, feminine style of architecture. What is the impact, and what are the potential implications, of recognising heritage buildings designed by women architects? The judge also highlights the absence of a recorded history of unique Brisbane houses and questions the authority of the heritage register. This research looks at these points of difference through a case study of Uanda house. The paper investigates the process of adding the house to the heritage register, the court case, and existing research on Nellie McCredie and Uanda House.
Abstract:
This paper presents a modulation and controller design method for paralleled Z-source inverter systems applicable to alternative energy sources such as solar cells, fuel cells, or variable-speed wind turbines with front-end diode rectifiers. A modulation scheme is designed based on the simple shoot-through principle with interleaved carriers to give enhanced ripple reduction in the system. Subsequently, a control method is proposed to equalize the amount of power injected by the inverters in the grid-connected mode and also to provide a reliable supply to sensitive loads onsite in the islanding mode. The modulation and control methods are proposed to have modular independence so that redundancy, maintainability, and improved reliability of supply can be achieved. The performance of the proposed paralleled Z-source inverter configuration is validated with simulations carried out using Matlab/Simulink/Powersim. Moreover, a prototype is built in the laboratory to obtain experimental verification.
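Carrier interleaving in general (independently of the paper's specific shoot-through scheme) works by phase-shifting the triangular carrier of each paralleled unit by a fraction of the carrier period, which staggers the switching instants and reduces the combined ripple. The sketch below is a generic illustration of that idea only, with invented names; it is not the modulation scheme proposed in the paper.

```cpp
// Generic illustration of interleaved triangular carriers (invented names,
// not the paper's scheme): inverter k of n uses a carrier shifted by k/n of
// the carrier period.
#include <cmath>

// Triangular carrier in [-1, 1] for inverter k of n, at time t (seconds).
double interleaved_carrier(double t, double carrier_freq_hz, int k, int n) {
    const double period = 1.0 / carrier_freq_hz;
    const double shifted = t + (static_cast<double>(k) / n) * period;        // interleaving shift
    const double phase = shifted / period - std::floor(shifted / period);    // in [0, 1)
    return phase < 0.5 ? 4.0 * phase - 1.0 : 3.0 - 4.0 * phase;              // triangle wave
}
```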
Abstract:
The present study explores reproducing the closest possible geometry of a high pressure ratio, single-stage radial-inflow turbine used in the Sundstrand Power Systems T-100 Multipurpose Small Power Unit. The commercial software ANSYS-Vista RTD, along with a built-in module, BladeGen, is used to conduct a meanline design and create the 3D geometry of one flow passage. After carefully examining the proposed design against the geometrical and experimental data, ANSYS-TurboGrid is applied to generate the computational mesh. CFD simulations are performed with ANSYS-CFX, in which the three-dimensional Reynolds-Averaged Navier-Stokes equations are solved subject to appropriate boundary conditions. Results are compared with numerical and experimental data published in the literature in order to generate the exact geometry of the existing turbine and validate the numerical results against the experimental ones.
Abstract:
Descriptions of a patient's injuries are recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for the mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different classification families, such as decision tree, probabilistic, neural network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, the quality of the classification outcome. Records with a null entry in the injury description are removed. Misspelling correction is carried out by finding and replacing each misspelt word with a sound-alike word. Meaningful phrases are identified and kept, rather than removing parts of phrases as stop words. Abbreviations appearing in many forms of entry are manually identified, and only one form of each abbreviation is used. Clustering is utilised to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5,000. The medical narrative text injury dataset under consideration is composed of many short documents. The data can be characterised as high-dimensional and sparse, i.e., few features are irrelevant, but features are correlated with one another. Therefore, matrix factorization techniques such as Singular Value Decomposition (SVD) and Non-Negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space, and classifiers have been built on these reduced feature spaces. In experiments, a set of tests is conducted to determine which classification method is best for medical text classification. The Non-Negative Matrix Factorization with Support Vector Machine method achieves 93% precision, which is higher than all of the tested traditional classifiers. We also found that TF/IDF weighting, which works well for long text classification, is inferior to binary weighting in short document classification. Another finding is that the top-n terms should be removed in consultation with medical experts, as this affects the classification performance.
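A minimal sketch of the TF/IDF versus binary weighting comparison is given below; the tokenised-document representation and example functions are assumptions for illustration only. The intuition behind the finding is that in very short narratives most terms occur at most once, so a binary "term present" weight discards little information relative to TF/IDF.

```cpp
// Sketch of the two term-weighting schemes compared above; the representation
// of a tokenised injury narrative as a vector of strings is an assumption.
#include <cmath>
#include <string>
#include <vector>

using Doc = std::vector<std::string>;   // one tokenised injury narrative

// Binary weight: 1 if the term occurs in the document, 0 otherwise.
double binary_weight(const Doc& d, const std::string& term) {
    for (const auto& t : d) if (t == term) return 1.0;
    return 0.0;
}

// TF/IDF weight: term frequency scaled by (smoothed) inverse document frequency.
double tfidf_weight(const std::vector<Doc>& corpus, const Doc& d, const std::string& term) {
    double tf = 0.0;
    for (const auto& t : d) if (t == term) tf += 1.0;
    double df = 0.0;
    for (const auto& doc : corpus) if (binary_weight(doc, term) > 0.0) df += 1.0;
    return tf * std::log((1.0 + corpus.size()) / (1.0 + df));
}
```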
Abstract:
The creation of a commercially viable, large-scale purification process for plasmid DNA (pDNA) production requires a whole-systems, continuous or semi-continuous purification strategy employing optimised stationary adsorption phase(s) without the use of expensive and toxic chemicals, avian/bovine-derived enzymes or numerous built-in unit processes, which otherwise affect overall plasmid recovery, processing time and economics. Continuous stationary phases are known to offer fast separation due to their large pore diameters, which make large pDNA molecules easily accessible with limited mass transfer resistance even at high flow rates. A monolithic stationary sorbent was synthesised via free-radical liquid porogenic polymerisation of ethylene glycol dimethacrylate (EDMA) and glycidyl methacrylate (GMA), with surface and pore characteristics tailored specifically for plasmid binding, retention and elution. The polymer was functionalised with an amine active group for anion-exchange purification of pDNA from cleared lysate obtained from E. coli DH5α-pUC19 pellets in an RNase/protease-free process. Characterisation of the resin showed a unique porous material with 70% of the pore sizes above 300 nm. The final product, isolated by anion-exchange purification in only 5 min, was pure and homogeneous supercoiled pDNA with no gDNA, RNA or protein contamination, as confirmed by DNA electrophoresis, restriction analysis and SDS-PAGE. The resin showed a maximum binding capacity of 15.2 mg/mL, and this capacity persisted after several applications of the resin. This technique is cGMP-compatible and commercially viable for rapid isolation of pDNA.
Abstract:
At CRYPTO 2006, Halevi and Krawczyk proposed two randomized hash function modes and analyzed the security of digital signature algorithms based on these constructions. They showed that the security of signature schemes based on the two randomized hash function modes relies on properties similar to second preimage resistance rather than on the collision resistance property of the hash functions. One of the randomized hash function modes was named the RMX hash function mode and was recommended for practical purposes. The National Institute of Standards and Technology (NIST), USA, standardized a variant of the RMX hash function mode and published this standard in Special Publication (SP) 800-106. In this article, we first discuss a generic online birthday existential forgery attack of Dang and Perlner on RMX-hash-then-sign schemes. We show that a variant of this attack can be applied to forge other randomize-hash-then-sign schemes. We point out practical limitations of the generic forgery attack on the RMX-hash-then-sign schemes. We then show that these limitations can be overcome for the RMX-hash-then-sign schemes if it is easy to find fixed points for the underlying compression functions, as is the case for the Davies-Meyer construction used in popular hash functions such as MD5, designed by Rivest, and the SHA family of hash functions, designed by the National Security Agency (NSA), USA, and published by NIST in the Federal Information Processing Standards (FIPS). We show an online birthday forgery attack on this class of signatures by using a variant of Dean’s method of finding fixed-point expandable messages for hash functions based on the Davies-Meyer construction. This forgery attack is also applicable to signature schemes based on the variant of RMX standardized by NIST in SP 800-106. We discuss some important applications of our attacks and their applicability to signature schemes based on hash functions with ‘built-in’ randomization. Finally, we compare our attacks on randomize-hash-then-sign schemes with the generic forgery attacks on the standard hash-based message authentication code (HMAC).
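As a toy illustration of why fixed points are easy to find for Davies-Meyer compression functions (the 64-bit "block cipher" below is an invented rotate-and-XOR toy, not MD5 or SHA): with f(h, m) = E_m(h) XOR h, choosing h* = E_m^{-1}(0) gives E_m(h*) = 0 and hence f(h*, m) = h*, which is the observation underlying Dean's expandable-message method.

```cpp
// Toy Davies-Meyer fixed point: f(h, m) = E_m(h) ^ h, so h* = D_m(0) satisfies
// f(h*, m) = h*. The "block cipher" here is an invented rotate-and-XOR toy,
// used only to demonstrate the fixed-point property.
#include <cassert>
#include <cstdint>

static uint64_t rotl(uint64_t x, int r) { return (x << r) | (x >> (64 - r)); }
static uint64_t rotr(uint64_t x, int r) { return (x >> r) | (x << (64 - r)); }

static uint64_t E(uint64_t key, uint64_t x) { return rotl(x, 7) ^ key; }   // toy cipher
static uint64_t D(uint64_t key, uint64_t y) { return rotr(y ^ key, 7); }   // its inverse

static uint64_t davies_meyer(uint64_t h, uint64_t m) { return E(m, h) ^ h; }

int main() {
    const uint64_t m = 0x0123456789abcdefULL;   // any message block
    const uint64_t h_star = D(m, 0);            // fixed point for this block
    assert(davies_meyer(h_star, m) == h_star);  // the chaining value never changes
    return 0;
}
```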