54 results for Built-in test


Relevance:

80.00%

Publisher:

Abstract:

Description of a patient's injuries is recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different classification families, such as decision-tree, probabilistic, neural-network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, the quality of the classification outcome. Records with a null entry in the injury description are removed. Misspellings are corrected by finding and replacing each misspelt word with a sound-alike word. Meaningful phrases are identified and kept intact, rather than having part of a phrase removed as a stop word. Abbreviations that appear in many forms of entry are manually identified and normalised to a single form. Clustering is utilised to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5,000. The medical narrative-text injury dataset under consideration is composed of many short documents. The data can be characterised as high-dimensional and sparse, i.e., few features are irrelevant, but the features are correlated with one another. Therefore, matrix factorisation techniques such as Singular Value Decomposition (SVD) and Non-Negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space. Classifiers have then been built on these reduced feature spaces. In experiments, a set of tests is conducted to establish which classification method is best for medical text classification.
The Non-Negative Matrix Factorization with Support Vector Machine method achieves 93% precision, which is higher than all the tested traditional classifiers. We also found that TF/IDF weighting, which works well for long-text classification, is inferior to binary weighting for short-document classification. Another finding is that the top-n terms should only be removed in consultation with medical experts, as this affects the classification performance.
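The best-performing combination reported above can be sketched as a pipeline: binary (not TF/IDF) bag-of-words features, NMF dimensionality reduction, then a linear SVM. This is a minimal illustration using scikit-learn; the tiny corpus, labels and hyperparameters below are invented for the example, not taken from the paper.

```python
# Sketch of a binary-weighted NMF + linear-SVM text classification
# pipeline, in the spirit of the paper's best result. Corpus is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = [
    "fell from ladder fractured left wrist",
    "fell off ladder broken right arm",
    "burn to hand from hot oil",
    "scald to forearm from boiling water",
]
labels = ["fall", "fall", "burn", "burn"]

pipeline = make_pipeline(
    CountVectorizer(binary=True),       # binary weighting, not TF/IDF
    NMF(n_components=2, max_iter=500),  # map sparse features to 2 factors
    LinearSVC(),                        # linear SVM on the reduced space
)
pipeline.fit(texts, labels)
print(pipeline.predict(["fell down stairs fractured ankle"]))
```

On real narrative-text data the factor count would of course be far larger (the paper reduces ~28,000 features to ~5,000 terms before factorisation).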

Relevance:

80.00%

Publisher:

Abstract:

The creation of a commercially viable, large-scale purification process for plasmid DNA (pDNA) production requires a whole-systems continuous or semi-continuous purification strategy employing optimised stationary adsorption phase(s) without the use of expensive and toxic chemicals, avian/bovine-derived enzymes or multiple built-in unit processes, all of which affect overall plasmid recovery, processing time and economics. Continuous stationary phases are known to offer fast separation because their large pore diameter makes the large pDNA molecule easily accessible, with limited mass-transfer resistance even at high flow rates. A monolithic stationary sorbent was synthesised via free-radical liquid porogenic polymerisation of ethylene glycol dimethacrylate (EDMA) and glycidyl methacrylate (GMA), with surface and pore characteristics tailored specifically for plasmid binding, retention and elution. The polymer was functionalised with an amine active group for anion-exchange purification of pDNA from cleared lysate obtained from E. coli DH5α-pUC19 pellets in an RNase/protease-free process. Characterisation of the resin showed a unique porous material with 70% of the pore sizes above 300 nm. The final product, isolated by anion-exchange purification in only 5 min, was pure and homogeneous supercoiled pDNA with no gDNA, RNA or protein contamination, as confirmed by DNA electrophoresis, restriction analysis and SDS-PAGE. The resin showed a maximum binding capacity of 15.2 mg/mL, and this capacity persisted after several applications of the resin. This technique is cGMP-compatible and commercially viable for rapid isolation of pDNA.

Relevance:

80.00%

Publisher:

Abstract:

At CRYPTO 2006, Halevi and Krawczyk proposed two randomized hash function modes and analyzed the security of digital signature algorithms based on these constructions. They showed that the security of signature schemes based on the two randomized hash function modes relies on properties similar to the second preimage resistance rather than on the collision resistance property of the hash functions. One of the randomized hash function modes was named the RMX hash function mode and was recommended for practical purposes. The National Institute of Standards and Technology (NIST), USA standardized a variant of the RMX hash function mode and published this standard in Special Publication (SP) 800-106. In this article, we first discuss the generic online birthday existential forgery attack of Dang and Perlner on RMX-hash-then-sign schemes. We show that a variant of this attack can be applied to forge signatures in the other randomize-hash-then-sign schemes. We point out practical limitations of the generic forgery attack on RMX-hash-then-sign schemes. We then show that these limitations can be overcome for RMX-hash-then-sign schemes if it is easy to find fixed points for the underlying compression functions, as is the case for the Davies-Meyer construction used in popular hash functions such as MD5, designed by Rivest, and the SHA family of hash functions, designed by the National Security Agency (NSA), USA and published by NIST in the Federal Information Processing Standards (FIPS). We show an online birthday forgery attack on this class of signatures by using a variant of Dean's method of finding fixed-point expandable messages for hash functions based on the Davies-Meyer construction. This forgery attack is also applicable to signature schemes based on the variant of RMX standardized by NIST in SP 800-106. We discuss some important applications of our attacks and their applicability to signature schemes based on hash functions with 'built-in' randomization.
Finally, we compare our attacks on randomize-hash-then-sign schemes with the generic forgery attacks on the standard hash-based message authentication code (HMAC).
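The fixed-point property the attack exploits is easy to state: a Davies-Meyer compression function computes h' = E_m(h) XOR h, so choosing h = D_m(0) (the decryption of zero under the message block) makes E_m(h) = 0 and hence h' = h. The toy sketch below makes the algebra concrete; the modular-addition "block cipher" is invented purely for illustration and has none of the properties of a real cipher.

```python
# Toy illustration of Davies-Meyer fixed points: h' = E_m(h) XOR h
# equals h whenever E_m(h) = 0, i.e. whenever h = D_m(0).
# The modular-addition "cipher" below is a stand-in, not a real cipher.
N = 2 ** 32

def encrypt(key, block):           # toy invertible "block cipher" E_k
    return (block + key) % N

def decrypt(key, block):           # its inverse D_k
    return (block - key) % N

def davies_meyer(h, m):            # compression: h' = E_m(h) XOR h
    return encrypt(m, h) ^ h

m = 0xDEADBEEF                     # arbitrary message block
h = decrypt(m, 0)                  # fixed point: E_m(h) = 0
assert davies_meyer(h, m) == h     # compressing m from state h loops
print(hex(h))
```

Repeating the block m from the state h leaves the chaining value unchanged, which is what makes Dean-style expandable messages cheap to build for this construction.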

Relevance:

80.00%

Publisher:

Abstract:

It is commonplace to use digital video cameras in robotic applications. These cameras have built-in exposure control, but they have no knowledge of the environment, the lens being used, or the important areas of the image, and so do not always produce optimal image exposure. Therefore, it is desirable and often necessary to control the exposure outside the camera. In this paper we present a scheme for exposure control which enables the user application to determine the area of interest. The proposed scheme introduces an intermediate transparent layer between the camera and the user application, which combines information from both to produce the optimal exposure. We present results from indoor and outdoor scenarios, using directional and fish-eye lenses, showing the performance and advantages of this framework.
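The core idea of application-driven exposure control can be sketched as a simple feedback loop: the application names a region of interest, and the intermediate layer adjusts exposure until that region's mean brightness hits a target. Everything below (the linear sensor model, the proportional gain, the target level) is an invented illustration, not the paper's actual controller.

```python
# Minimal region-of-interest exposure control loop: nudge exposure
# until the ROI's mean grey level reaches TARGET. Sensor model and
# controller gain are invented for illustration.

TARGET = 128          # desired mean grey level in the ROI (0-255)
GAIN = 0.002          # proportional controller gain (assumed)

def roi_mean(image, roi):
    x0, y0, x1, y1 = roi
    vals = [p for row in image[y0:y1] for p in row[x0:x1]]
    return sum(vals) / len(vals)

def adjust_exposure(exposure_ms, image, roi):
    error = TARGET - roi_mean(image, roi)      # positive -> too dark
    return max(0.1, exposure_ms * (1 + GAIN * error))

def capture(exposure_ms):
    # Simulated sensor: brightness proportional to exposure, clipped.
    level = min(255, int(25 * exposure_ms))
    return [[level] * 8 for _ in range(8)]

exposure = 1.0
for _ in range(50):                            # the control loop
    frame = capture(exposure)
    exposure = adjust_exposure(exposure, frame, (2, 2, 6, 6))
print(roi_mean(capture(exposure), (2, 2, 6, 6)))   # settles near TARGET
```

Because the layer only needs the ROI from the application and frames from the camera, it can sit transparently between the two, which is the design choice the paper advocates.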

Relevance:

80.00%

Publisher:

Abstract:

The Wechsler and Stanford-Binet scales are among the most commonly used tests of intelligence. In clinical practice, they often seem to be used interchangeably. This paper reports the results of two studies that compared the most recent editions of two Wechsler scales (WPPSI-III and WISC-IV) with the Stanford-Binet Fifth Edition (SB5). The participants in the first study were 36 typically developing 4-year-old children who completed the WPPSI-III and SB5 in counter-balanced order. Although correlations of composite scores ranged from r = .59 to r = .82 and were similar to those reported for earlier versions of the two instruments, more than half the sample had a score discrepancy greater than 10 points across the two instruments. In the second study, the WISC-IV and SB5 were administered to 30 children aged 12-14 years. There was a significant difference between Full Scale IQs on the two measures, with scores being higher on the WISC-IV. Differences between the two verbal scales were also significant and favoured the WISC-IV. There were moderate correlations of Full Scale IQs (r = .58) and Nonverbal IQs (r = .54), but the relationship between the two Verbal scales was not significant. For some children, notable score differences led to different categorisations of their level of intellectual ability. The findings suggest that the Wechsler and Stanford-Binet scales cannot be presumed to be interchangeable. The discussion focuses on how psychologists might reconcile large differences in test scores and the need for caution when interpreting and comparing test results.
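The two statistics the studies lean on, the Pearson correlation between paired scores and the proportion of children with a discrepancy greater than 10 points, are worth seeing side by side, since a high r can coexist with many large individual discrepancies. The paired scores below are invented for illustration only.

```python
# Pearson correlation and >10-point discrepancy count for paired IQ
# scores, the two summary statistics the studies report. Data invented.
from math import sqrt

wechsler = [102, 95, 110, 88, 121, 99, 105, 93]
stanford_binet = [96, 101, 104, 90, 108, 95, 118, 99]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(wechsler, stanford_binet)
discrepant = sum(abs(a - b) > 10 for a, b in zip(wechsler, stanford_binet))
print(round(r, 2), discrepant)
```

A correlation measures agreement in rank ordering, not in absolute level, which is why both numbers are needed before treating the tests as interchangeable.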

Relevance:

80.00%

Publisher:

Abstract:

The use of capacitors for electrical energy storage actually predates the invention of the battery. Alessandro Volta is credited with the invention of the battery in 1800, when he first described a battery as an assembly of plates of two different materials (such as copper and zinc) placed in an alternating stack and separated by paper soaked in brine or vinegar [1]. Accordingly, this device was referred to as Volta's pile and formed the basis of subsequent revolutionary research and discoveries on the chemical origin of electricity. Before the advent of Volta's pile, however, eighteenth-century researchers relied on the use of Leyden jars as a source of electrical energy. Built in the mid-1700s at the University of Leyden in Holland, a Leyden jar is an early capacitor consisting of a glass jar coated inside and outside with a thin layer of silver foil [2, 3]. With the outer foil grounded, the inner foil could be charged with an electrostatic generator, or a source of static electricity, and could produce a strong electrical discharge from a small and comparatively simple device.

Relevance:

80.00%

Publisher:

Abstract:

The number of international and local students whose first language is not English and who are studying in English-medium universities has increased significantly in the past decade. Many of these students aim to start working in the country in which they studied; however, some employers have suggested that graduates seeking employment have insufficient language skills. This study provides a detailed insight into the changing writing demands from the last year of university study to the first year in the workforce of engineering and accounting professionals (our two case study professions). It relates these to the demands of the writing component of IELTS, which is increasingly used for exit or professional entry testing, although not expressly designed for this purpose. Data include interviews with final-year students, lecturers, employers and new graduates in their first few years in the workforce, as well as professional board members. Employers also reviewed final-year assignments as well as IELTS writing samples at different levels. Most stakeholders agreed that graduates entering the workforce are underprepared for the writing demands of their professions. When compared with the university writing tasks, the workplace writing expected of new graduates was perceived as different in terms of genre, the tailoring of a text for a specific audience, and the processes of review and editing involved. Stakeholders expressed a range of views on the suitability of academic proficiency tests (such as IELTS) as university exit tests and for entry into the professions.
With regard to IELTS, while some saw the relevance of the two writing tasks, particularly in relation to academic writing, others questioned the extent to which two timed tasks representing limited genres could elicit a representative sample of the professional writing required, particularly in the context of engineering. The findings are discussed in relation to different test purposes, the intersection between academic and specific purpose testing and the role of domain experts in test validation.

Relevance:

80.00%

Publisher:

Abstract:

Multi-agent systems involve a high degree of concurrency at both the inter- and intra-agent levels. The Scalable fault-tolerant Agent Grooming Environment (SAGE), a second-generation, FIPA-compliant MAS, requires a built-in mechanism to achieve both inter- and intra-agent concurrency. This paper describes an attempt to provide a reliable, efficient and lightweight solution for intra-agent concurrency within the internal agent architecture of SAGE. It addresses the issues involved in using the Java threading model to provide this level of concurrency to the agent, and provides an alternative approach based on an event-driven, concurrent and user-scalable multi-tasking model for the agent's internal model. The findings of this paper show that our proposed approach provides an efficient and lightweight concurrent task model for SAGE and considerably outperforms the Java-based multithreaded tasking model in terms of throughput and efficiency. This has been illustrated through a practical implementation and evaluation of both models. © 2004 IEEE.
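The contrast the paper draws, one OS thread per agent task versus many short event-handler tasks multiplexed onto a single scheduler, can be sketched in a few lines. The class and method names below are illustrative only, not SAGE's actual API.

```python
# Sketch of an event-driven, single-threaded agent task model: tasks
# are queued events handled cooperatively by one scheduler, so each
# concurrent task costs a queue entry rather than a thread stack.
# Names are illustrative, not SAGE's actual API.
from collections import deque

class AgentTaskScheduler:
    def __init__(self):
        self.queue = deque()
        self.log = []

    def post(self, agent, event, payload):
        self.queue.append((agent, event, payload))   # enqueue, don't spawn

    def run(self):
        while self.queue:                            # cooperative loop
            agent, event, payload = self.queue.popleft()
            getattr(agent, "on_" + event)(self, payload)

class EchoAgent:
    def __init__(self, name):
        self.name = name

    def on_message(self, scheduler, payload):
        scheduler.log.append((self.name, payload))
        if payload > 0:                              # post a follow-up task
            scheduler.post(self, "message", payload - 1)

sched = AgentTaskScheduler()
sched.post(EchoAgent("a1"), "message", 2)
sched.run()
print(sched.log)   # [('a1', 2), ('a1', 1), ('a1', 0)]
```

Because handlers run to completion and re-post follow-up work instead of blocking, the model avoids per-task thread creation and context-switch overhead, which is the source of the throughput gains the paper reports.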

Relevance:

80.00%

Publisher:

Abstract:

Multi-agent systems (MAS) advocate an agent-based approach to software engineering based on decomposing problems in terms of decentralized, autonomous agents that can engage in flexible, high-level interactions. This chapter introduces the Scalable fault-tolerant Agent Grooming Environment (SAGE), a second-generation Foundation for Intelligent Physical Agents (FIPA)-compliant multi-agent system developed at NIIT-Comtec, which provides an environment for creating distributed, intelligent and autonomous entities that are encapsulated as agents. The chapter focuses on the highlight of SAGE: its decentralized fault-tolerant architecture, which can be used to develop applications in areas such as e-health, e-government and e-science. In addition, the SAGE architecture provides tools for runtime agent management, directory facilitation, monitoring, and editing of message exchanges between agents. SAGE also provides a built-in mechanism to program agent behaviour and capabilities with the help of its autonomous agent architecture, which is the other major highlight of this chapter. The authors believe that the market for agent-based applications is growing rapidly and that SAGE can play a crucial role in future intelligent application development. © 2007, IGI Global.