928 results for Parallel processing (Electronic computers) - Research


Relevance: 40.00%

Abstract:

Urquhart, C. (editor for JUSTEIS team), Spink, S., Thomas, R., Yeoman, A., Durbin, J., Turner, J., Armstrong, A., Lonsdale, R. & Fenton, R. (2003). JUSTEIS (JISC Usage Surveys: Trends in Electronic Information Services) Strand A: survey of end users of all electronic information services (HE and FE), with Action research report. Final report 2002/2003 Cycle Four. Aberystwyth: Department of Information Studies, University of Wales Aberystwyth with Information Automation Ltd (CIQM). Sponsorship: JISC

Relevance: 40.00%

Abstract:

A neural model of peripheral auditory processing is described and used to separate features of coarticulated vowels and consonants. After preprocessing of speech via a filterbank, the model splits into two parallel channels, a sustained channel and a transient channel. The sustained channel is sensitive to relatively stable parts of the speech waveform, notably synchronous properties of the vocalic portion of the stimulus; it extends the dynamic range of eighth nerve filters using coincidence detectors that combine operations of raising to a power, rectification, delay, multiplication, time averaging, and preemphasis. The transient channel is sensitive to critical features at the onsets and offsets of speech segments. It is built up from fast excitatory neurons that are modulated by slow inhibitory interneurons. These units are combined over high frequency and low frequency ranges using operations of rectification, normalization, multiplicative gating, and opponent processing. Detectors sensitive to frication and to onset or offset of stop consonants and vowels are described. Model properties are characterized by mathematical analysis and computer simulations. Neural analogs of model cells in the cochlear nucleus and inferior colliculus are noted, as are psychophysical data about perception of CV syllables that may be explained by the sustained-transient channel hypothesis. The proposed sustained and transient processing seems to be an auditory analog of the sustained and transient processing that is known to occur in vision.
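The sustained channel's chain of operations can be sketched numerically. The snippet below is a loose illustration of the listed operations (preemphasis, raising to a power, rectification, delay, multiplication, time averaging) applied to a synthetic tone; all coefficients and parameter values are invented for illustration and are not taken from the model.

```python
import numpy as np

def sustained_channel(x, delay=4, power=2.0, window=32):
    """Illustrative coincidence detection: preemphasize, raise to a
    power, half-wave rectify, multiply by a delayed copy, then
    time-average. Parameter values are arbitrary assumptions."""
    # Preemphasis via a simple first-order high-pass difference
    pre = np.append(x[0], x[1:] - 0.95 * x[:-1])
    # Raise to a power (sign-preserving), then half-wave rectify
    r = np.maximum(np.sign(pre) * np.abs(pre) ** power, 0.0)
    # Coincidence detection: multiply by a delayed copy of itself
    delayed = np.concatenate([np.zeros(delay), r[:-delay]])
    coincidence = r * delayed
    # Time averaging with a short moving-average window
    kernel = np.ones(window) / window
    return np.convolve(coincidence, kernel, mode="same")

# A synchronous (periodic) input produces a sustained response
t = np.arange(2000) / 8000.0
tone = np.sin(2 * np.pi * 200 * t)
response = sustained_channel(tone)
```

A periodic vocalic-like input keeps the signal and its delayed copy coincident, so the averaged product stays high; a brief transient would not.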

Relevance: 40.00%

Abstract:

Introduction: Electronic assistive technology (EAT) includes computers, environmental control systems and information technology systems and is widely considered to be an important part of present-day life. Method: Fifty-six Irish community occupational therapists completed a questionnaire on EAT. All surveyed were able to identify the benefits of EAT. Results: While respondents reported that they should be able to assess for and prescribe EATs, only a third (19) were able to do so, and half (28) had not been able to do so in the past. Community occupational therapists identified themselves as having a role in a multidisciplinary team to assess for and prescribe EAT. Conclusion: Results suggest that it is important for occupational therapists to have up-to-date knowledge and training in assistive and computer technologies in order to respond to the occupational needs of clients.

Relevance: 40.00%

Abstract:

Long reach passive optical networks (LR-PONs), which integrate fibre-to-the-home with metro networks, have been the subject of intensive research in recent years and are considered one of the most promising candidates for the next generation of optical access networks. Such systems ideally have reaches greater than 100 km and bit rates of at least 10 Gb/s per wavelength in the downstream and upstream directions. Due to the limited equipment sharing that is possible in access networks, the laser transmitters in the terminal units, which are usually the most expensive components, must be as cheap as possible. However, the requirement for low cost is generally incompatible with the need for a transmitter chirp characteristic that is optimised for such long reaches at 10 Gb/s, and hence dispersion compensation is required. In this thesis electronic dispersion compensation (EDC) techniques are employed to increase the chromatic dispersion tolerance and to enhance the system performance at the expense of moderate additional implementation complexity. In order to use such EDC in LR-PON architectures, a number of challenges associated with the burst-mode nature of the upstream link need to be overcome. In particular, the EDC must be made adaptive from one burst to the next (burst-mode EDC, or BM-EDC) in time scales on the order of tens to hundreds of nanoseconds. Burst-mode operation of EDC has received little attention to date. The main objective of this thesis is to demonstrate the feasibility of such a concept and to identify the key BM-EDC design parameters required for applications in a 10 Gb/s burst-mode link. This is achieved through a combination of simulations and transmission experiments utilising off-line data processing. The research shows that burst-to-burst adaptation can in principle be implemented efficiently, opening the possibility of low overhead, adaptive EDC-enabled burst-mode systems.
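The thesis's specific BM-EDC design is not reproduced in the abstract. As a generic illustration of electronic dispersion compensation, the sketch below trains a feed-forward equalizer with the LMS algorithm on a per-burst training sequence, mimicking burst-to-burst adaptation; the tap count, step size, and toy dispersive channel are all assumptions.

```python
import numpy as np

def lms_ffe(received, training, n_taps=7, mu=0.02):
    """Feed-forward equalizer adapted with LMS. Taps converge on a
    training sequence at the start of the burst (here the whole
    burst is known, purely for illustration)."""
    w = np.zeros(n_taps)
    d = n_taps // 2
    w[d] = 1.0                     # centre-spike initialisation
    out = np.zeros(len(received))
    for n in range(n_taps - 1, len(received)):
        x = received[n - n_taps + 1:n + 1][::-1]   # tap delay line
        y = w @ x
        k = n - d                  # decision delay of the centre tap
        out[k] = y
        if 0 <= k < len(training): # adapt while training is available
            w += mu * (training[k] - y) * x
    return out, w

# Toy burst: binary symbols smeared by a short dispersive channel
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=500)
channel = np.array([0.2, 1.0, 0.2])   # assumed toy channel response
rx = np.convolve(symbols, channel, mode="same")
eq, taps = lms_ffe(rx, symbols)
error_rate = np.mean(np.sign(eq[50:480]) != symbols[50:480])
```

In a real burst-mode receiver the training window would be the short preamble of each burst, which is what makes the nanosecond-scale adaptation requirement challenging.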

Relevance: 40.00%

Abstract:

BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
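The "errors per 10,000 fields" unit used above is straightforward to compute from raw audit counts. A minimal helper follows; the 143 errors in 100,000 fields are a made-up example chosen only to reproduce a rate of 14.3, not figures from the study.

```python
def error_rate_per_10k(errors: int, fields_inspected: int) -> float:
    """Express a database error count as errors per 10,000 fields,
    the unit used in the data-quality literature."""
    return errors * 10_000 / fields_inspected

# Hypothetical audit: 143 errors found across 100,000 fields
rate = error_rate_per_10k(143, 100_000)
print(rate)  # → 14.3
```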

Relevance: 40.00%

Abstract:

The paper considers the single machine due date assignment and scheduling problems with n jobs in which the due dates are to be obtained from the processing times by adding a positive slack q. A schedule is feasible if there are no tardy jobs and the job sequence respects given precedence constraints. The value of q is chosen so as to minimize a function ϕ(F, q) which is non-decreasing in each of its arguments, where F is a certain non-decreasing earliness penalty function. Once q is chosen or fixed, the corresponding scheduling problem is to find a feasible schedule with the minimum value of function F. In the case of arbitrary precedence constraints the problems under consideration are shown to be NP-hard in the strong sense even for F being total earliness. If the precedence constraints are defined by a series-parallel graph, both scheduling and due date assignment problems are proved solvable in time, provided that F is either the sum of linear functions or the sum of exponential functions. The running time of the algorithms can be reduced to if the jobs are independent.

Scope and purpose: We consider the single machine due date assignment and scheduling problems and design fast algorithms for their solution under a wide range of assumptions. The problems under consideration arise in production planning when the management is faced with a problem of setting the realistic due dates for a number of orders. The due dates of the orders are determined by increasing the time needed for their fulfillment by a common positive slack. If the slack is set to be large enough, the due dates can be easily maintained, thereby producing a good image of the firm. This, however, may result in the substantial holding cost of the finished products before they are brought to the customer. The objective is to explore the trade-off between the size of the slack and the arising holding costs for the early orders.
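The trade-off between the slack q and the earliness (holding) cost can be illustrated directly from the definitions. The sketch below is not the paper's series-parallel algorithm; it simply evaluates a fixed sequence of independent jobs, checking feasibility (no tardy jobs against due dates d_j = p_j + q) and computing total earliness F.

```python
from itertools import accumulate

def schedule_stats(processing_times, q):
    """For a fixed slack q and a given job sequence on one machine,
    compute due dates d_j = p_j + q, check feasibility (C_j <= d_j
    for every job), and return total earliness F = sum(d_j - C_j).
    Returns None if some job would be tardy."""
    completions = list(accumulate(processing_times))  # C_j
    due_dates = [p + q for p in processing_times]     # d_j = p_j + q
    if any(c > d for c, d in zip(completions, due_dates)):
        return None                                   # infeasible
    return sum(d - c for d, c in zip(due_dates, completions))

# Three independent jobs; a larger slack is easier to meet but
# inflates total earliness (the holding cost)
jobs = [5, 3, 2]
results = {q: schedule_stats(jobs, q) for q in (5, 8, 10)}
print(results)  # → {5: None, 8: 11, 10: 17}
```

For this instance q = 5 is infeasible (the second job finishes at 8 but is due at 8? no, at 3 + 5 = 8; the third finishes at 10 against a due date of 7), q = 8 is the smallest feasible slack with F = 11, and q = 10 is feasible but costlier with F = 17, which is exactly the trade-off the paper studies.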

Relevance: 40.00%

Abstract:

This volume tracks the impact processing instruction has made since its conception. It provides an overview of new research trends on measuring the relative effects of processing instruction. Firstly, the authors explain processing instruction, both its main theoretical underpinnings as well as the guidelines for developing structured input practices. Secondly, they review the empirical research conducted, to date, so that readers have an overview of new research carried out on the effects of processing instruction. The authors finally reflect on the generalizability and limits of the research on processing instruction and offer future directions for processing instruction research.

Relevance: 40.00%

Abstract:

Processing Instruction (PI) is an approach to grammar instruction for second language learning. It derives its name from the fact that the instruction (both the explicit explanation as well as the practices) attempt to influence, alter, and/or improve the way learners process input. PI contrasts with traditional grammar instruction in many ways, most principally in its focus on input whereas traditional grammar instruction focuses on learners' output. The greatest contribution of PI to both theory and practice is the concept of "structured input", a form of comprehensible input that has been manipulated to maximize learners' benefit of exposure to input. This volume focuses on a new issue for PI, the role of technology in language learning. It examines empirically the differential effects of delivering PI in classrooms with an instructor and students interacting (with each other and with the instructor) versus on computers to students working individually. It also contributes to the growing body of research on the effects of PI on different languages as well as different linguistic items: preterite/imperfect aspectual contrast and negative informal commands in Spanish, the subjunctive of doubt and opinion in Italian, and the subjunctive of doubt in French. Further research contributions are made by comparing PI with other types of instruction, specifically, with meaning-oriented output instruction.

Relevance: 40.00%

Abstract:

Remote sensing airborne hyperspectral data are routinely used for applications including algorithm development for satellite sensors, environmental monitoring and atmospheric studies. Single flight lines of airborne hyperspectral data are often in the region of tens of gigabytes in size. This means that a single aircraft can collect terabytes of remotely sensed hyperspectral data during a single year. Before these data can be used for scientific analyses, they need to be radiometrically calibrated, synchronised with the aircraft's position and attitude and then geocorrected. To enable efficient processing of these large datasets the UK Airborne Research and Survey Facility has recently developed a software suite, the Airborne Processing Library (APL), for processing airborne hyperspectral data acquired from the Specim AISA Eagle and Hawk instruments. The APL toolbox allows users to radiometrically calibrate, geocorrect, reproject and resample airborne data. Each stage of the toolbox outputs data in the common Band Interleaved Lines (BILs) format, which allows its integration with other standard remote sensing software packages. APL was developed to be user-friendly and suitable for use on a workstation PC as well as for the automated processing of the facility; to this end APL can be used under both Windows and Linux environments on a single desktop machine or through a Grid engine. A graphical user interface also exists. In this paper we describe the Airborne Processing Library software, its algorithms and approach. We present example results from using APL with an AISA Eagle sensor and we assess its spatial accuracy using data from multiple flight lines collected during a campaign in 2008 together with in situ surveyed ground control points.
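APL's actual I/O routines are not shown here; as a rough illustration of the band-interleaved-by-line (BIL) layout the toolbox passes between stages, a minimal NumPy reader might look like the following (the file path, dimensions, and dtype are made-up examples).

```python
import os
import tempfile

import numpy as np

def read_bil(path, lines, bands, samples, dtype=np.uint16):
    """Read band-interleaved-by-line data: each image line stores one
    row per band before the next line begins, so the natural array
    shape is (lines, bands, samples)."""
    raw = np.fromfile(path, dtype=dtype)
    return raw.reshape(lines, bands, samples)

# Round-trip a tiny synthetic cube to show the layout
path = os.path.join(tempfile.gettempdir(), "demo.bil")
cube = np.arange(2 * 3 * 4, dtype=np.uint16).reshape(2, 3, 4)
cube.tofile(path)
restored = read_bil(path, lines=2, bands=3, samples=4)
print(np.array_equal(cube, restored))  # → True
```

Because every stage writes this same flat binary layout, any tool that can parse the accompanying header dimensions can slot into the pipeline, which is what makes the intermediate outputs interoperable with other remote sensing packages.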


Relevance: 40.00%

Abstract:

Context: Electronic bibliographic databases are a key source for professional publications about social work and community care more generally. This article describes and evaluates a method of identifying relevant articles as part of a systematic review of research evidence. Decision making about institutional and home care services for older people is used as an example. Method: Four databases (Social Science Citation Index, Medline, CINAHL, and Caredata) that abstract publications relevant to health and social services were searched systematically to identify relevant research studies. The items retrieved were appraised independently using a standard form developed for the purpose. The searches were compared in terms of sensitivity, precision, overlap between databases, and inter-rater reliability. Results: The search retrieved 525 articles, of which 276 were relevant. The four databases retrieved 55%, 41%, 19%, and 1% of the relevant articles respectively, achieving these sensitivities with precision levels of 54%, 48%, 84% and 94%. The databases retrieved 116, 73, 24 and 15 unique relevant articles respectively, showing the need to use a range of databases. Discussion: A general approach to creating a search to retrieve relevant research has been developed. The development of an international, indexed database dedicated to literature relevant to social services is a priority to enable progress in evidence-based policy and practice in social work. Editors and researchers should consider using structured abstracts in order to improve the retrieval and dissemination of research.
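The sensitivity and precision figures above follow directly from raw retrieval counts. A minimal sketch of both measures follows; the per-database counts (152 relevant articles out of 280 retrieved) are hypothetical numbers chosen only to land near the reported 55% sensitivity and 54% precision for the first database.

```python
def sensitivity(relevant_retrieved: int, total_relevant: int) -> float:
    """Fraction of all relevant articles that the search retrieved."""
    return relevant_retrieved / total_relevant

def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Fraction of retrieved articles that were relevant."""
    return relevant_retrieved / total_retrieved

# 276 relevant articles existed overall (from the review); suppose
# one database returned 280 hits of which 152 were relevant
print(round(sensitivity(152, 276), 2))  # → 0.55
print(round(precision(152, 280), 2))    # → 0.54
```

High-precision, low-sensitivity databases (like the 94%-precision one above) miss many relevant items, which is why the article concludes that several databases must be searched in combination.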

Relevance: 40.00%

Abstract:

A novel application-specific instruction set processor (ASIP) for use in the construction of modern signal processing systems is presented. This is a flexible device that can be used in the construction of array processor systems for the real-time implementation of functions such as singular-value decomposition (SVD) and QR decomposition (QRD), as well as other important matrix computations. It uses a coordinate rotation digital computer (CORDIC) module to perform arithmetic operations and several approaches are adopted to achieve high performance including pipelining of the micro-rotations, the use of parallel instructions and a dual-bus architecture. In addition, a novel method for scale factor correction is presented which only needs to be applied once at the end of the computation. This also reduces computation time and enhances performance. Methods are described which allow this processor to be used in reduced dimension (i.e., folded) array processor structures that allow tradeoffs between hardware and performance. The net result is a flexible matrix computational processing element (PE) whose functionality can be changed under program control for use in a wider range of scenarios than previous work. Details are presented of the results of a design study, which considers the application of this decomposition PE architecture in a combined SVD/QRD system and demonstrates that a combination of high performance and efficient silicon implementation is achievable. © 2005 IEEE.
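The deferred scale-factor correction the abstract mentions can be illustrated in software. The sketch below is a floating-point model of CORDIC rotation mode, not the paper's fixed-point hardware: each micro-rotation uses only shift-like scalings and additions, and the constant gain K is applied once at the end rather than per iteration.

```python
import math

def cordic_rotate(x, y, theta, iterations=24):
    """Rotate (x, y) by theta using CORDIC micro-rotations.
    The scale factor K is applied once after the loop, mirroring
    the single end-of-computation correction described above."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # accumulated gain
    z = theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0             # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K                          # single correction

# Rotating (1, 0) by 30 degrees approximates (cos 30°, sin 30°)
cx, cy = cordic_rotate(1.0, 0.0, math.radians(30))
print(round(cx, 4), round(cy, 4))  # → 0.866 0.5
```

Deferring the correction keeps every micro-rotation multiplier-free, which is the property that makes CORDIC attractive for a compact silicon arithmetic module.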

Relevance: 40.00%

Abstract:

An application-specific programmable processor (ASIP) suitable for the real-time implementation of matrix computations such as Singular Value and QR Decomposition is presented. The processor incorporates facilities for the issue of parallel instructions and a dual-bus architecture that are designed to achieve high performance. Internally, it uses a CORDIC module to perform arithmetic operations, with pipelining of the internal recursive loop exploited to multiplex the two independent micro-rotations onto a single piece of hardware. The net result is a flexible processing element whose functionality can be changed under program control, which combines high performance with efficient silicon implementation. This is illustrated through the results of a detailed silicon design study and the application of the techniques to a combined SVD/QRD system.