870 results for parallel corpora
Abstract:
Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over short time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood estimation is used. Although the storage requirements scale only linearly with the number of observations in the dataset, the computational complexity scales quadratically in memory and cubically in speed. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics.
By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
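The likelihood approximation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exponential covariance, the coordinate-sorted block splitting, and all function names are assumptions. The point is that each block's Gaussian log-likelihood costs O(m^3) in the block size m rather than O(n^3) in the full dataset, and the independent block terms can be mapped across processor cores.

```python
import numpy as np

def exp_cov(coords, sill=1.0, rng_par=0.3, nugget=1e-6):
    # Exponential covariance between all pairs of points (an assumed model)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sill * np.exp(-d / rng_par) + nugget * np.eye(len(coords))

def gauss_loglik(y, K):
    # Exact zero-mean Gaussian log-likelihood: O(m^3) in the block size m
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L, y)
    return (-0.5 * alpha @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

def block_loglik(coords, y, n_blocks):
    # Independent block terms: this sum is trivially parallelisable,
    # e.g. with multiprocessing.Pool.map over the chunks.
    chunks = np.array_split(np.argsort(coords[:, 0]), n_blocks)
    return sum(gauss_loglik(y[i], exp_cov(coords[i])) for i in chunks)

rng = np.random.default_rng(0)
coords = rng.random((200, 2))
y = rng.standard_normal(200)
print(block_loglik(coords, y, 4))
```

With a single block the approximation reduces to the exact likelihood; with more blocks accuracy is traded for speed and parallelism.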
Abstract:
The transition of internally heated inclined plane parallel shear flows is examined numerically for the case of finite values of the Prandtl number Pr. We show that as the strength of the homogeneously distributed heat source is increased the basic flow loses stability to two-dimensional perturbations of the transverse roll type in a Hopf bifurcation for the vertical orientation of the fluid layer, whereas perturbations of the longitudinal roll type are most dangerous for a wide range of inclination angles. In the case of horizontal inclination, transverse roll and longitudinal roll perturbations share the responsibility for the prime instability. Following the linear stability analysis for the general inclination of the fluid layer, our attention is focused on a numerical study of the finite-amplitude secondary travelling-wave solutions (TW) that develop from the perturbations of the transverse roll type for the vertical inclination of the fluid layer. The stability of the secondary TW against three-dimensional perturbations is also examined, and our study shows that for Pr=0.71 the secondary instability sets in as a quasi-periodic mode, while for Pr=7 it is phase-locked to the secondary TW. The present study complements and extends the recent study by Nagata and Generalis (2002) in the case of vertical inclination for Pr=0.
Abstract:
Language learners ask a variety of questions about words and their meanings and uses: “What does X mean? What is the word for X in English? Can you say X? When do you use X and when do you use Y (e.g. synonyms, grammatical structures, prepositional choices, variant phrases, etc)?”
Abstract:
Creating an appropriate translation often means adapting the target text (TT) to the text-typological conventions of the target culture. Such knowledge can be gained by a comparative analysis of parallel texts, i.e. L2 and L1 texts of equal informativity which have been produced in similar communicative situations. Some problems related to (cross-cultural) text-typological conventions and the role of parallel texts for describing translation strategies are described, as well as implications for teaching translation. The discussion is supported with examples of parallel texts that are representative of various genres, such as instruction manuals, international treaties and tourist brochures.
Abstract:
Contrast masking from parallel grating surrounds (doughnuts) and superimposed orthogonal masks have different characteristics. However, it is not known whether the saturation of the underlying suppression that has been found for parallel doughnut masks depends on (i) relative mask and target orientation, (ii) stimulus eccentricity or (iii) surround suppression. We measured contrast-masking functions for target patches of grating in the fovea and in the periphery for cross-oriented superimposed and doughnut masks and parallel doughnut masks. When suppression was evident, the factor that determined whether it accelerated or saturated was whether the mask stimulus was crossed or parallel. There are at least two interpretations of the asymptotic behaviour of the parallel surround mask. (1) Suppression arises from pathways that saturate with (mask) contrast. (2) The target is processed by a mechanism that is subject to surround suppression at low target contrasts, but a less sensitive mechanism that is immune from surround suppression ‘breaks through’ at higher target contrasts. If the mask can be made less potent, then masking functions should shift downwards, and sideways for the two accounts, respectively. We manipulated the potency of the mask by varying the size of the hole in a parallel doughnut mask. The results provided strong evidence for the first account but not the second. On the view that response compression becomes more severe progressing up the visual pathway, our results suggest that superimposed cross-orientation suppression precedes orientation tuned surround suppression. These results also reveal a previously unrecognized similarity between surround suppression and crowding (Pelli, Palomares, & Majaj, 2004).
Abstract:
Corpora amylacea (CA) are spherical or ovoid bodies 50-50 microns in diameter. They have been described in normal elderly brain as well as in a number of neurodegenerative disorders. In this study, the incidence of CA in the optic nerves of Alzheimer's disease (AD) patients was compared with normal elderly controls. Samples of optic nerves (MRC Brain Bank, Institute of Psychiatry) were taken from 12 AD patients (age range 69-94 years) and 18 controls (43-82 years). Optic nerves were fixed in 2% buffered glutaraldehyde, post-fixed in osmium tetroxide, embedded in epoxy resin and then sectioned to a thickness of 2 microns. Sections were stained with toluidine blue. CA were present in all of the optic nerves examined. In addition, a number of similarly stained but more irregularly shaped bodies were present. Fewer CA were found in the optic nerves of AD patients compared with controls. By contrast, the number of irregularly shaped bodies was increased in AD. In AD, there may be a preferential decline in the large-diameter fibres which may mediate the M-cell pathway. Hence, the decline in the incidence of CA in AD may be associated with a reduction in these fibres. It is also possible that the irregularly shaped bodies are a degeneration product of the CA.
Abstract:
This study presents a detailed contrastive description of the textual functioning of connectives in English and Arabic. Particular emphasis is placed on the organisational force of connectives and their role in sustaining cohesion. The description is intended as a contribution towards a better understanding of the variations in the dominant tendencies for text organisation in each language. The findings are expected to be utilised for pedagogical purposes, particularly in improving EFL teaching of writing at the undergraduate level. The study is based on an empirical investigation of the phenomenon of connectivity and, for optimal efficiency, employs computer-aided procedures, particularly those adopted in corpus linguistics, for investigatory purposes. One important methodological requirement is the establishment of two comparable and statistically adequate corpora, together with the design of software and the use of existing packages to carry out the basic analysis. Each corpus comprises ca 250,000 words of newspaper material sampled in accordance with a specific set of criteria and assembled in machine-readable form prior to the computer-assisted analysis. A suite of programmes has been written in SPITBOL to accomplish a variety of analytical tasks, and in particular to perform a battery of measurements intended to quantify the textual functioning of connectives in each corpus. Concordances and some word lists are produced by using OCP. Results of this research confirm the existence of fundamental differences in text organisation in Arabic in comparison to English. This manifests itself in the way textual operations of grouping and sequencing are performed, in the intensity of the textual role of connectives in imposing linearity and continuity, and in maintaining overall stability.
Furthermore, computation of connective functionality and range of operationality has identified fundamental differences in the way favourable choices for text organisation are made and implemented.
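As a rough modern sketch of the kind of measurement described (the thesis itself used SPITBOL programmes and OCP), one might count connective occurrences and express them as a rate per thousand running words. The connective list, function name, and sample text below are illustrative assumptions, not the study's actual inventory or data.

```python
import re
from collections import Counter

# A tiny illustrative connective inventory; the real study's list is far larger.
CONNECTIVES = {"however", "therefore", "moreover", "thus", "furthermore"}

def connective_rate(text):
    # Count each connective, and compute a rate per 1,000 running words
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w in CONNECTIVES)
    rate = 1000 * sum(counts.values()) / max(len(words), 1)
    return counts, rate

sample = ("The results were clear. However, the method was slow; "
          "therefore a faster approach was adopted.")
counts, rate = connective_rate(sample)
print(counts, round(rate, 1))
```

Comparing such rates across two comparable corpora is one simple way to quantify differences in the intensity of connective use.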
Abstract:
Image segmentation is one of the most computationally intensive operations in image processing and computer vision. This is because a large volume of data is involved and many different features have to be extracted from the image data. This thesis is concerned with the investigation of practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of hardware architectures and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm for segmenting non-textured regions; and the Granlund method for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the use of the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions. For the Row-Column method the array architecture has been adopted, and for the Vector-Radix method, the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed. Many of the obtained speed-up and efficiency measures show values close to their respective theoretical maxima. Where appropriate, comparisons are drawn between different implementations.
The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications; and on the issues related to the engineering of concurrent image processing applications.
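The two convolution routes compared above, direct spatial convolution versus convolution via the two-dimensional Fourier transform, can be sketched as follows. This sequential NumPy version (the thesis used Occam on Transputer networks) only illustrates the equivalence that the parallel implementations exploit; all names and sizes are assumptions.

```python
import numpy as np

def conv2d_direct(img, k):
    # Direct sliding-window convolution ('valid' region): O(H*W*kh*kw).
    # In a parallel setting this loop is what gets distributed over the
    # array architecture; here it is sequential for clarity.
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def conv2d_fft(img, k):
    # The same result via the frequency domain: multiply 2-D FFTs,
    # invert, then crop to the 'valid' region. The kernel is flipped so
    # the FFT product matches the sliding-window form above.
    kh, kw = k.shape
    H, W = img.shape
    s = (H + kh - 1, W + kw - 1)
    full = np.real(np.fft.ifft2(np.fft.fft2(img, s) *
                                np.fft.fft2(k[::-1, ::-1], s)))
    return full[kh - 1:H, kw - 1:W]
```

The frequency-domain route trades the O(kh*kw) inner product per pixel for O(log N) FFT work per sample, which is why it wins for large kernels.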
Abstract:
The stability of internally heated inclined plane parallel shear flows is examined numerically for the case of a finite value of the Prandtl number, Pr. The transition in a vertical channel has already been studied for 0≤Pr≤100 with or without the application of an external pressure gradient, where the secondary flow takes the form of travelling waves (TWs) that are spanwise-independent (see works of Nagata and Generalis). In this work, in contrast to work already reported (J. Heat Trans. T. ASME 124 (2002) 635-642), we examine transition where the secondary flow takes the form of longitudinal rolls (LRs), which are independent of the streamwise direction, for Pr=7 and for a specific value of the angle of inclination of the fluid layer without the application of an external pressure gradient. We find possible bifurcation points of the secondary flow by performing a linear stability analysis that determines the neutral curve, where the basic flow, which can have two inflection points, loses stability. The linear stability of the secondary flow against three-dimensional perturbations is also examined numerically for the same value of the angle of inclination by employing Floquet theory. We identify possible bifurcation points for the tertiary flow and show that the bifurcation can be either monotone or oscillatory. © 2003 Académie des sciences. Published by Elsevier SAS. All rights reserved.
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
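As a toy illustration of the frequency-domain 'curve fitting' idea, one might linearise a single-mode FRF model and solve it by least squares; the actual Rational Fraction Polynomial method handles many modes using rational fractions of orthogonal polynomials. All names and the synthetic parameter values below are assumptions.

```python
import numpy as np

def fit_sdof_frf(w, H):
    # Linearise (k - m*w^2 + i*c*w) * H(w) = 1 and solve for (m, c, k)
    # by complex linear least squares.
    A = np.column_stack([-w**2 * H, 1j * w * H, H])
    x, *_ = np.linalg.lstsq(A, np.ones_like(H), rcond=None)
    m, c, k = x.real
    fn = np.sqrt(k / m) / (2 * np.pi)      # natural frequency in Hz
    zeta = c / (2 * np.sqrt(k * m))        # viscous damping ratio
    return fn, zeta

# Synthetic single-mode FRF with m=1, c=2, k=(2*pi*10)^2, i.e. fn = 10 Hz
w = 2 * np.pi * np.linspace(5.0, 15.0, 200)
m0, c0, k0 = 1.0, 2.0, (2 * np.pi * 10.0) ** 2
H = 1.0 / (k0 - m0 * w**2 + 1j * c0 * w)
fn, zeta = fit_sdof_frf(w, H)
print(round(fn, 2), round(zeta, 4))
```

The least-squares solve over all frequency lines is the kind of dense linear-algebra kernel that parallelises naturally across a processor network.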
Abstract:
We developed a parallel strategy for learning optimally specific realizable rules by perceptrons, in an online learning scenario. Our result is a generalization of the Caticha–Kinouchi (CK) algorithm developed for learning a perceptron with a synaptic vector drawn from a uniform distribution over the N-dimensional sphere, the so-called typical case. Our method outperforms the CK algorithm in almost all possible situations, failing only in a denumerable set of cases. The algorithm is optimal in the sense that it saturates Bayesian bounds when it succeeds.
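For context, the online teacher–student setting of the abstract can be sketched with the plain mistake-driven perceptron rule (not the CK algorithm or the optimised variant described above). The teacher vector plays the role of the realizable rule on the N-dimensional sphere; all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)      # the rule vector, on the unit sphere
student = np.zeros(N)

for _ in range(2000):                   # one fresh example per step (online)
    x = rng.standard_normal(N)
    label = np.sign(teacher @ x)        # the rule's classification
    if np.sign(student @ x) != label:   # classic mistake-driven update
        student += label * x

# Overlap between student and teacher directions: 1.0 means perfect learning
overlap = student @ teacher / np.linalg.norm(student)
print(overlap)
```

Optimised algorithms such as CK replace the fixed mistake-driven step with a learning amplitude chosen to maximise the expected gain in this overlap at each step.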