878 results for computer-assisted system


Relevance: 90.00%

Abstract:

This paper presents a corpus-based descriptive analysis of the most prevalent transfer effects and connected speech processes observed in a comparison of 11 Vietnamese English speakers (6 females, 5 males) and 12 Australian English speakers (6 males, 6 females) over 24 grammatical paraphrase items. The phonetic processes are segmentally labelled in terms of IPA diacritic features using the EMU speech database system, with the aim of labelling departures from native-speaker pronunciation. Prosodic features were analysed using the ToBI framework. The results show many phonetic and prosodic processes that make non-native speakers' speech distinct from that of native speakers. This corpus-based methodology for analysing foreign accent may have implications for the evaluation of non-native accent, accented speech recognition and computer-assisted pronunciation learning.
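
As a purely illustrative sketch of the labelling methodology, the fragment below tallies IPA-diacritic deviation labels per speaker group; the tuple layout and label names (e.g. "unreleased") are assumptions for illustration, not the paper's actual EMU annotation scheme.

    # Hypothetical sketch: rate of labelled departures from the native target,
    # per speaker group and deviation label, from EMU-style segment tuples.
    from collections import Counter

    # (speaker_group, segment, deviation_label); label is None when the
    # segment matches the native-speaker target.
    segments = [
        ("vietnamese", "t", "unreleased"),
        ("vietnamese", "s", "final_deletion"),
        ("australian", "t", None),
        ("vietnamese", "th", "stopping"),
    ]

    def deviation_rates(segments):
        """Per-group rate of each labelled departure from the target."""
        totals, deviations = Counter(), Counter()
        for group, _seg, label in segments:
            totals[group] += 1
            if label is not None:
                deviations[(group, label)] += 1
        return {key: n / totals[key[0]] for key, n in deviations.items()}

    print(deviation_rates(segments))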

Relevance: 90.00%

Abstract:

This study presents a detailed contrastive description of the textual functioning of connectives in English and Arabic. Particular emphasis is placed on the organisational force of connectives and their role in sustaining cohesion. The description is intended as a contribution to a better understanding of the variations in the dominant tendencies for text organisation in each language. The findings are expected to be utilised for pedagogical purposes, particularly in improving EFL teaching of writing at the undergraduate level. The study is based on an empirical investigation of the phenomenon of connectivity and, for optimal efficiency, employs computer-aided procedures, particularly those adopted in corpus linguistics. One important methodological requirement was the establishment of two comparable and statistically adequate corpora, together with the design of software and the use of existing packages to carry out the basic analysis. Each corpus comprises ca 250,000 words of newspaper material, sampled in accordance with a specific set of criteria and assembled in machine-readable form prior to the computer-assisted analysis. A suite of programs was written in SPITBOL to accomplish a variety of analytical tasks, in particular to perform a battery of measurements intended to quantify the textual functioning of connectives in each corpus. Concordances and word lists were produced using OCP. The results confirm the existence of fundamental differences in text organisation between Arabic and English. These manifest themselves in the way textual operations of grouping and sequencing are performed, and in the intensity of the textual role of connectives in imposing linearity and continuity and in maintaining overall stability. Furthermore, computation of connective functionality and range of operationality has identified fundamental differences in the way favoured choices for text organisation are made and implemented.
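
The SPITBOL measurement programs themselves are not reproduced here, but the hedged Python sketch below illustrates one of the simpler quantifications described: the frequency of a connective inventory per 10,000 words of a corpus. The connective list and corpus snippet are placeholders.

    # Illustrative re-creation (not the thesis's SPITBOL code) of a basic
    # connective-frequency measurement over a tokenized corpus.
    import re
    from collections import Counter

    ENGLISH_CONNECTIVES = {"however", "therefore", "moreover", "thus"}

    def connective_profile(text, inventory, per=10_000):
        """Occurrences of each connective, normalised per `per` words."""
        tokens = re.findall(r"\w+", text.lower())
        counts = Counter(t for t in tokens if t in inventory)
        scale = per / max(len(tokens), 1)
        return {conn: n * scale for conn, n in counts.items()}

    corpus = "However, the markets fell. Therefore the bank intervened."
    print(connective_profile(corpus, ENGLISH_CONNECTIVES))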

Relevance: 90.00%

Abstract:

Background: We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that they would help students gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. We therefore investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a special-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. Objectives: The main research question is whether the second VLE design improved learning efficiency compared with the standard type of VLE design commonly used in education. As a secondary objective, we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subordinating educational content to technological convenience. Methods: Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects concerning the experiment's internal validity are explained extensively. Results: The design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a content-based design outperforms the traditional VLE-based design. © 2011 Wessa et al.
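
As a minimal sketch of the kind of covariate-adjusted comparison a nonequivalent group design calls for, the fragment below estimates a treatment effect ANCOVA-style on simulated data; it is an illustration under assumed inputs, not the paper's Reliable Nonequivalent Group Design procedure.

    # Adjust post-test scores for a pre-test covariate before comparing a
    # control cohort (c) with a treated cohort (t).
    import numpy as np

    def adjusted_effect(pre_c, post_c, pre_t, post_t):
        """Difference of post-test means after removing a pooled linear
        pre-test trend (simple ANCOVA-style adjustment)."""
        pre = np.concatenate([pre_c, pre_t])
        post = np.concatenate([post_c, post_t])
        slope = np.polyfit(pre, post, 1)[0]      # pooled pre -> post slope
        adj_c = post_c - slope * (pre_c - pre.mean())
        adj_t = post_t - slope * (pre_t - pre.mean())
        return adj_t.mean() - adj_c.mean()

    rng = np.random.default_rng(0)
    pre_c = rng.normal(60, 10, 200); post_c = pre_c + rng.normal(5, 8, 200)
    pre_t = rng.normal(58, 10, 200); post_t = pre_t + rng.normal(9, 8, 200)
    print(adjusted_effect(pre_c, post_c, pre_t, post_t))  # roughly 4 points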

Relevance: 90.00%

Abstract:

Summary writing is an important part of many English language examinations. Since grading students' summaries is a very time-consuming task, computer-assisted assessment can help teachers carry out the grading more effectively. Several techniques, such as latent semantic analysis (LSA), n-gram co-occurrence and BLEU, have been proposed to support automatic evaluation of summaries, but their performance in assessing summary writing is not satisfactory. To improve on this, the paper proposes an ensemble approach that integrates LSA and n-gram co-occurrence. The proposed ensemble approach achieves high accuracy and improves performance quite substantially compared with existing techniques. A summary assessment system based on the proposed approach has also been developed.
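
A hedged sketch of the ensemble idea follows: an LSA cosine similarity is combined with an n-gram co-occurrence score through a simple weighted sum. The equal weighting, the preprocessing and the use of scikit-learn are assumptions, not the paper's tuned configuration.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    def ngram_overlap(cand, ref, n=2):
        """Fraction of the reference's n-grams also found in the candidate."""
        grams = lambda s: set(zip(*[s.split()[i:] for i in range(n)]))
        c, r = grams(cand), grams(ref)
        return len(c & r) / max(len(r), 1)

    def ensemble_score(candidate, reference, corpus, weight=0.5, dims=50):
        """Weighted sum of an LSA cosine similarity and n-gram overlap."""
        vec = TfidfVectorizer().fit(corpus)
        X = vec.transform(corpus)
        k = max(1, min(dims, min(X.shape) - 1))  # rank must fit the corpus
        svd = TruncatedSVD(n_components=k).fit(X)
        zc, zr = svd.transform(vec.transform([candidate, reference]))
        lsa = float(zc @ zr / (np.linalg.norm(zc) * np.linalg.norm(zr) + 1e-12))
        return weight * lsa + (1 - weight) * ngram_overlap(candidate, reference)

    docs = ["the cat sat on the mat", "a dog sat on a log", "cats and dogs bark"]
    print(ensemble_score("the cat sat on a mat", "the cat sat on the mat", docs))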

Relevance: 90.00%

Abstract:

The design of castings entails knowledge of various interacting factors that are unique to the casting process, and quite often product designers do not have the required foundry-specific knowledge. Casting designers normally have to liaise with casting experts to ensure that the product designed is castable and that the optimum casting method is selected. This two-way communication results in long design lead times, and a lack of it can easily lead to incorrect casting design. A computer-based system at the disposal of the design engineer can, however, alleviate this problem and enhance the prospect of casting design for manufacture. This paper proposes a knowledge-based expert system approach to assist casting product designers in selecting the most suitable casting process for specified casting design requirements during the design phase of product manufacture. A prototype expert system has been developed, based on the production-rules knowledge representation technique. The proposed system consists of a number of autonomous but interconnected levels, each dealing with a specific group of factors: casting alloy, shape and complexity parameters, accuracy requirements, and comparative costs based on production quantity. The user interface is designed to give the user a clear view of how casting design parameters affect the selection of the various casting processes at each level; if necessary, appropriate design changes can be made to facilitate the castability of the product being designed, or to suit the design to a preferred casting method.
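
The toy sketch below illustrates the production-rule flavour of such a system: each rule inspects the stated design requirements and votes for or against a casting process. The rule contents and requirement fields are invented for illustration and do not reproduce the prototype's knowledge base.

    # Toy production-rule selection: a process survives unless a fired rule
    # rules it out.
    RULES = [
        # (condition on requirements, process, verdict)
        (lambda r: r["alloy"] == "aluminium" and r["quantity"] > 10_000,
         "die casting", True),
        (lambda r: r["min_wall_mm"] < 1.0, "sand casting", False),
        (lambda r: r["tolerance_mm"] <= 0.1, "investment casting", True),
    ]

    def candidate_processes(requirements, processes):
        verdicts = {p: set() for p in processes}
        for cond, process, ok in RULES:
            if cond(requirements) and process in verdicts:
                verdicts[process].add(ok)
        return [p for p, v in verdicts.items() if False not in v]

    reqs = {"alloy": "aluminium", "quantity": 50_000,
            "min_wall_mm": 0.8, "tolerance_mm": 0.1}
    print(candidate_processes(reqs, ["sand casting", "die casting",
                                     "investment casting"]))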

Relevance: 90.00%

Abstract:

In this paper we consider a computer information system and a way to secure the data within it using digital watermarking. A technique for spread-spectrum watermarking is presented, and its realization in MATLAB 6.5 is shown.
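
A minimal spread-spectrum sketch in the same spirit (NumPy rather than the paper's MATLAB 6.5 realization): one watermark bit is spread over a keyed pseudo-noise sequence added to the host data and recovered by correlation. The embedding strength and signal sizes are arbitrary assumptions.

    import numpy as np

    def embed(signal, bit, key, alpha=0.05):
        """Add a keyed pseudo-noise carrier, signed by the watermark bit."""
        rng = np.random.default_rng(key)
        pn = rng.choice([-1.0, 1.0], size=signal.shape)
        return signal + alpha * (1 if bit else -1) * pn

    def detect(watermarked, key):
        """Recover the bit as the sign of the correlation with the carrier."""
        rng = np.random.default_rng(key)
        pn = rng.choice([-1.0, 1.0], size=watermarked.shape)
        return float(watermarked @ pn) > 0.0

    host = np.random.default_rng(1).normal(size=4096)
    assert detect(embed(host, True, key=42), key=42)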

Relevance: 90.00%

Abstract:

This work is devoted to the development of a computer-aided system for the semantic analysis of the text of a technical specification. Its purpose is to increase the efficiency of software engineering by automating the semantic analysis of technical specification texts. A technique for analysing such texts is proposed and investigated; an extended fuzzy attribute grammar of the technical specification, intended to formalize a restricted subset of Russian for the purpose of analysing the sentences of a technical specification, is constructed; the stylistic features of the technical specification as a class of documents are considered; and recommendations on preparing the text of a technical specification for automated processing are formulated. The computer-aided system itself consists of the following subsystems: preliminary text processing, syntactic and semantic analysis with construction of software models, document storage, and the user interface.
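
The skeleton below mirrors the described pipeline stages (preliminary processing, analysis, model construction) with deliberately simplistic placeholder logic; the system's fuzzy attribute grammar for restricted Russian is, of course, far richer than this English keyword matcher.

    import re

    def preprocess(text):
        """Preliminary text processing: split into trimmed sentences."""
        return [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]

    def analyse(sentence):
        """Stand-in for syntactic/semantic analysis: extract actor-action
        pairs from 'The <actor> shall <action> ...' requirement sentences."""
        m = re.match(r"The (\w+) shall (\w+)", sentence, re.IGNORECASE)
        return {"actor": m.group(1), "action": m.group(2)} if m else None

    def build_model(text):
        """Construct a (toy) software model from the analysed sentences."""
        return [r for r in map(analyse, preprocess(text)) if r]

    spec = "The system shall log every request. Responses are cached."
    print(build_model(spec))  # [{'actor': 'system', 'action': 'log'}]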

Relevance: 90.00%

Abstract:

Studies of framing in the EU political system are still rare, and they suffer from a lack of systematic empirical analysis. Addressing this gap, we ask whether institutional and policy contexts, intertwined with the strategic side of framing, can explain the number and types of frames employed by different stakeholders. We use computer-assisted manual content analysis and develop a fourfold typology of frames to study the frames that were prevalent in the debates on four EU policy proposals within financial market regulation and environmental policy, at the EU level and in Germany, Sweden, the Netherlands and the United Kingdom. The main empirical finding is that both contexts and strategies exert a significant impact on the number and types of frames in EU policy debates. In conceptual terms, the article contributes to developing more fine-grained tools for studying frames and their underlying dimensions.
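
As a trivial illustration of the computer-assisted side of such a coding exercise, the fragment below tallies manually coded statements by stakeholder and frame type. The fourfold category names and data layout are placeholders, not the article's codebook.

    from collections import Counter

    FRAME_TYPES = ("economic", "environmental", "legal", "moral")

    # Manually coded (stakeholder, frame_type) pairs from debate statements.
    codings = [("industry", "economic"), ("NGO", "environmental"),
               ("industry", "legal"), ("NGO", "moral"),
               ("industry", "economic")]

    tally = Counter(codings)
    for (stakeholder, frame), n in sorted(tally.items()):
        print(f"{stakeholder:10s} {frame:14s} {n}")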

Relevance: 90.00%

Abstract:

A computer code system for the simulation and estimation of branching processes is proposed. Using the system, samples for several models, with and without migration, are generated. On these samples we compare properties of various estimators.
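
As an illustration of what such a system does, the sketch below simulates Galton-Watson samples (without migration) and compares two classical estimators of the mean offspring number; the Poisson offspring law and parameter values are assumptions, not the paper's models.

    import numpy as np

    def simulate_gw(m, generations, z0=1, rng=None):
        """Generation sizes of a Galton-Watson process, Poisson(m) offspring."""
        rng = rng or np.random.default_rng()
        sizes = [z0]
        for _ in range(generations):
            sizes.append(int(rng.poisson(m, size=sizes[-1]).sum()))
        return np.array(sizes)

    def lotka_nagaev(z):   # Z_n / Z_{n-1} at the last generation
        return z[-1] / z[-2] if z[-2] > 0 else np.nan

    def harris(z):         # total offspring over total parents
        return z[1:].sum() / z[:-1].sum() if z[:-1].sum() > 0 else np.nan

    rng = np.random.default_rng(7)
    samples = [simulate_gw(1.2, 12, rng=rng) for _ in range(200)]
    print(np.nanmean([lotka_nagaev(z) for z in samples]),
          np.nanmean([harris(z) for z in samples]))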

Relevance: 90.00%

Abstract:

ACM Computing Classification System (1998): G.1.1, G.1.2.

Relevance: 90.00%

Abstract:

Respiratory gating in lung PET imaging to compensate for respiratory motion artifacts is a current research issue with broad potential impact on the quantitation, diagnosis and clinical management of lung tumors. However, PET images collected in discrete gated bins can be significantly affected by noise, as there are fewer activity counts in each bin unless the total PET acquisition time is prolonged; gating methods should therefore be combined with image-based motion correction and registration methods. The aim of this study was to develop and validate a fast and practical solution to the problem of respiratory motion for the detection and accurate quantitation of lung tumors in PET images. This included: (1) developing a computer-assisted algorithm for PET/CT images that automatically segments lung regions in the CT images and identifies and localizes lung tumors in the PET images; and (2) developing and comparing registration algorithms that process all the information within the entire respiratory cycle and integrate the tumor data from the different gated bins into a single reference bin. Four registration/integration algorithms were compared: Centroid Based, Intensity Based, Rigid Body and Optical Flow registration, along with two registration schemes: a Direct Scheme and a Successive Scheme. Validation was demonstrated by conducting experiments with the computerized 4D NCAT phantom and with a dynamic lung-chest phantom imaged using a GE PET/CT system. Iterations were conducted on simulated tumors of different sizes and at different noise levels. Static tumors without respiratory motion were used as the gold standard, and quantitative results were compared with respect to tumor activity concentration, cross-correlation coefficient, relative noise level and computation time. Comparing the tumors before and after correction, the corrected tumor activity values and tumor volumes were closer to those of the static (gold standard) tumors, and higher correlation values and lower noise were achieved after applying the correction algorithms. With this method, the compromise between short PET scan time and reduced image noise can be achieved, while quantification and clinical analysis become fast and precise.
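
A simplified sketch of the Centroid Based registration/integration step is given below: each gated bin is shifted so that its tumor centroid coincides with that of the reference bin, and the aligned bins are summed. Integer-grid interpolation and the assumed float-array inputs make this an illustration only, not the study's implementation.

    import numpy as np
    from scipy import ndimage

    def integrate_bins(bins, reference_index=0):
        """Align each gated bin to the reference bin by centroid shift,
        then sum the aligned activity. `bins` is a list of float arrays."""
        ref_centroid = np.array(ndimage.center_of_mass(bins[reference_index]))
        integrated = np.zeros_like(bins[reference_index])
        for b in bins:
            shift = ref_centroid - np.array(ndimage.center_of_mass(b))
            integrated += ndimage.shift(b, shift, order=1)
        return integrated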

Relevance: 90.00%

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially as energy consumption and chip area have become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of a software application can be identified using profiling tools, and hardware acceleration can yield significant performance improvements for mathematically intensive calculations or frequently repeated functions. The performance of an SoC system can thus be improved by applying hardware acceleration to the elements that incur performance overheads. The concepts presented in this study can readily be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop counts. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System characteristics such as performance, energy consumption and resource costs are measured and analyzed; the trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated against the system requirements. (4) A system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques applied for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
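
As a back-of-the-envelope illustration of why hotspot acceleration pays off, Amdahl's law bounds the overall speedup obtainable by accelerating only the profiled hotspot; the hotspot fraction and accelerator speedup below are invented numbers, not the thesis's measurements.

    def amdahl(hotspot_fraction, hotspot_speedup):
        """Overall speedup when only the hotspot fraction is accelerated."""
        return 1.0 / ((1.0 - hotspot_fraction)
                      + hotspot_fraction / hotspot_speedup)

    # e.g. a hotspot taking 70% of runtime, accelerated 20x in hardware:
    print(f"{amdahl(0.70, 20.0):.2f}x overall")  # ~2.99x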

Relevance: 90.00%

Abstract:

We quantified pigment biomarkers by high-performance liquid chromatography (HPLC) to obtain a broad taxonomic classification of the microphytobenthos (MPB), i.e. identification of the dominant taxa. Three replicate sediment cores were collected at 0, 50 and 100 m along transects 5-9 in the Heron Reef lagoon (n=15) (Fig. 1). Transects 1-4 could not be processed because the means to have those samples analysed by HPLC were not available at the time of field data collection. Cores were stored frozen; scrapes were taken from the top of each one and placed in cryovials immersed in dry ice. Samples were sent to the laboratory (CSIRO Marine and Atmospheric Research, Hobart, Australia), where pigments were extracted with 100% acetone for fifteen hours at 4°C after vortex mixing (30 seconds) and sonication (15 minutes). Samples were then centrifuged and filtered prior to analysis of pigment composition on a Waters Alliance HPLC system equipped with a photodiode array detector. Pigments were separated using a Zorbax Eclipse XDB-C8 stainless steel 150 mm x 4.6 mm ID column with 3.5 µm particle size (Agilent Technologies) and a binary gradient system with an elevated column temperature, following a modified version of the Van Heukelem and Thomas (2001) method. The separated pigments were detected at 436 nm and identified against standard spectra using Waters Empower software. Standards for HPLC system calibration were obtained from Sigma (USA) and DHI (Denmark).
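
As an illustrative sketch of the quantification step that follows such a run, the fragment below converts peak areas to pigment content using single-point response factors derived from calibration standards; all values and pigment names are placeholders, not the study's calibration data.

    # Hypothetical response factors (ng of pigment per unit peak area) from
    # single-point calibration against commercial standards.
    response_factors = {"chlorophyll_a": 0.012, "fucoxanthin": 0.018}

    def quantify(peak_areas, extract_volume_ml, sediment_g):
        """Return pigment content in ng per g of sediment."""
        return {pig: area * response_factors[pig] * extract_volume_ml
                     / sediment_g
                for pig, area in peak_areas.items()}

    print(quantify({"chlorophyll_a": 150_000, "fucoxanthin": 90_000},
                   extract_volume_ml=3.0, sediment_g=1.5))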