843 results for SCHEDULING OF GRID TASKS
Abstract:
Seismic numerical modeling is one of the foundations of exploration and academic seismology, and it remains a research field in great demand. The essence of seismic numerical modeling is to assume that the structure and parameters of the underground medium are known, simulate the wave field, and compute the synthetic seismic record that should be observed. Seismic numerical modeling is not only a means of understanding the seismic wave field in complex inhomogeneous media, but also a test of the practical effectiveness of the various methods. There are many seismic numerical modeling methods, each with its own merits and drawbacks. In forward modeling, computational precision and efficiency are the two pivotal criteria for evaluating the validity and superiority of a method. The goal of this dissertation is to find a new method that improves computational precision and efficiency, and to apply the new forward method to modeling the wave field in complex inhomogeneous media. The convolutional Forsyte polynomial differentiator (CFPD) approach developed in this dissertation is robust and efficient; it combines much of the high precision of generalized orthogonal polynomials with the high speed of short-operator finite differences. By adjusting the operator length and optimizing the operator coefficients, the method can incorporate both global and local information of the wave field. One main task of the dissertation is to develop an original, general and high-precision method. The author introduces the convolutional Forsyte polynomial differentiator to calculate the spatial derivatives of the seismic wave equation, and applies a time staggered-grid finite difference, which matches the high precision of the convolutional differentiator better than the conventional finite difference, to calculate the time derivatives, thereby creating a new forward method for modeling the wave field in complex inhomogeneous media. Compared with the Fourier pseudo-spectral method, the Chebyshev pseudo-spectral method, the staggered-grid finite-difference method and the finite element method, the convolutional Forsyte polynomial differentiator (CFPD) method has several advantages. 1. Compared with the Fourier pseudo-spectral method (FPS): the FPS differentiator is a global operator, so its results exhibit Gibbs effects where the medium parameters change, which introduces large errors; the Fourier pseudo-spectral method therefore cannot handle especially complex or randomly heterogeneous media. The convolutional Forsyte polynomial differentiator, by contrast, captures both global and local information, so for complex inhomogeneous media CFPD is more effective. 2. Compared with the staggered-grid high-order finite-difference method: CFPD requires fewer grid points per wavelength than FD, and that number does not increase as the study area widens. 3. Compared with the Chebyshev pseudo-spectral method (CPS): the computational domain of the Chebyshev pseudo-spectral method is fixed to [-1, 1], so for unchanged precision the growth in computation with domain size is unacceptable; the Chebyshev pseudo-spectral method is thus inapplicable to large areas, whereas the CFPD method remains applicable. 4. Compared with the finite element method (FE): CFPD can use larger grids. The other task of this dissertation is to study the 2.5-dimensional (2.5D) seismic wave field.
The author reviews the development and current state of the 2.5D problem, explains why studying it matters, and applies the CFPD method to simulate the seismic wave field in 2.5D inhomogeneous media. The results indicate that 2.5D numerical modeling efficiently simulates a single section of a 3D medium, that 2.5D calculation is much less time-consuming than 3D calculation, and that the numerical dispersion of 2.5D modeling is markedly smaller than that of 3D modeling. Applying the time staggered-grid convolutional differentiator based on CFPD to model 2.5D complex inhomogeneous media had not previously been studied by geophysicists; it is an entirely new contribution. Theory and experiments show that the new method can efficiently model the seismic wave field in complex media. Proposing and developing this new method provides additional options for studying seismic wave-field modeling, seismic migration, seismic inversion, and seismic imaging.
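As a rough illustration of the scheme just described, the sketch below steps a 1-D acoustic wave field with a time staggered-grid (leapfrog) update and takes spatial derivatives by convolution with a short differentiator stencil. Standard fourth-order staggered-grid coefficients stand in for the optimized CFPD coefficients, which the abstract does not give; the grid size, medium parameters and source wavelet are likewise invented.

```python
import numpy as np

nx, dx, dt, nt = 400, 5.0, 5e-4, 800      # grid points, spacing (m), time step (s), steps
rho = np.full(nx, 2000.0)                 # density (kg/m^3); homogeneous for the sketch
vp = np.full(nx, 2500.0)                  # P-wave velocity (m/s)
kappa = rho * vp ** 2                     # bulk modulus

# Short convolutional differentiator; CFPD would optimize these weights.
stencil = np.array([1.0, -27.0, 27.0, -1.0]) / (24.0 * dx)   # 4th-order staggered FD

def ddx(f):
    """Convolutional estimate of df/dx (staggered by half a grid cell)."""
    return np.correlate(f, stencil, mode="same")

p = np.zeros(nx)                          # pressure, integer time levels
v = np.zeros(nx)                          # particle velocity, half time levels
src, f0 = nx // 2, 25.0                   # source position and peak frequency (Hz)

for it in range(nt):
    v -= dt / rho * ddx(p)                # time staggered-grid (leapfrog) updates
    p -= dt * kappa * ddx(v)
    arg = (np.pi * f0 * (it * dt - 0.04)) ** 2
    p[src] += (1.0 - 2.0 * arg) * np.exp(-arg)   # Ricker wavelet source
```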
Abstract:
In this report, I discuss the use of vision to support concrete, everyday activity. I will argue that a variety of interesting tasks can be solved using simple and inexpensive vision systems. I will provide a number of working examples in the form of a state-of-the-art mobile robot, Polly, which uses vision to give primitive tours of the seventh floor of the MIT AI Laboratory. By current standards, the robot has a broad behavioral repertoire and is both simple and inexpensive (the complete robot was built for less than $20,000 using commercial board-level components). The approach I will use is to treat the structure of the agent's activity (its task and environment) as positive resources for the vision system designer. By performing a careful analysis of task and environment, the designer can determine a broad space of mechanisms which can perform the desired activity. My principal thesis is that for a broad range of activities, the space of applicable mechanisms will be broad enough to include a number of mechanisms which are simple and economical. The simplest mechanisms that solve a given problem will typically be quite specialized to that problem. One thus worries that building simple vision systems will require a great deal of ad hoc engineering that cannot be transferred to other problems. My second thesis is that specialized systems can be analyzed and understood in a principled manner, one that allows general lessons to be extracted from specialized systems. I will present a general approach to analyzing specialization through the use of transformations that provably improve performance. By demonstrating a sequence of transformations that derive a specialized system from a more general one, we can summarize the specialization of the former in a compact form that makes explicit the additional assumptions that it makes about its environment. The summary can be used to predict the performance of the system in novel environments. Individual transformations can be recycled in the design of future systems.
Abstract:
Load balancing is often used to ensure that nodes in a distributed system are equally loaded. In this paper, we show that for real-time systems, load balancing is not desirable. In particular, we propose a new load-profiling strategy that allows the nodes of a distributed system to be unequally loaded. Using load profiling, the system attempts to distribute the load amongst its nodes so as to maximize the chances of finding a node that would satisfy the computational needs of incoming real-time tasks. To that end, we describe and evaluate a distributed load-profiling protocol for dynamically scheduling time-constrained tasks in a loosely coupled distributed environment. When a task is submitted to a node, the scheduling software tries to schedule the task locally so as to meet its deadline. If that is not feasible, it tries to locate another node where this could be done with a high probability of success, while attempting to maintain an overall load profile for the system. Nodes in the system inform each other about their state using a combination of multicasting and gossiping. The performance of the proposed protocol is evaluated via simulation and is contrasted with other dynamic scheduling protocols for real-time distributed systems. Based on our findings, we argue that keeping a diverse availability profile and using passive bidding (through gossiping) are both advantageous to distributed scheduling for real-time systems.
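The placement decision the abstract outlines (try the local node first, otherwise look for a feasible peer while preserving the system's availability profile) can be sketched as follows. This is only an illustration under simplifying assumptions: single-processor nodes, exact execution times, and a best-fit rule standing in for the paper's profiling heuristic; the class and function names are invented.

```python
from dataclasses import dataclass

@dataclass
class Task:
    exec_time: float              # required CPU time (s)
    deadline: float               # absolute deadline (s)

@dataclass
class Node:
    name: str
    queued_work: float = 0.0      # admitted but unfinished work (s)

    def finish_time(self, task, now):
        return now + self.queued_work + task.exec_time

    def feasible(self, task, now):
        return self.finish_time(task, now) <= task.deadline

def place(task, local, peers, now):
    """Local-first placement; otherwise pick the feasible peer that leaves the
    least slack (best fit), so lightly loaded nodes stay available for later,
    tighter tasks -- one simple way to keep the availability profile diverse."""
    if local.feasible(task, now):
        local.queued_work += task.exec_time
        return local
    candidates = [n for n in peers if n.feasible(task, now)]
    if not candidates:
        return None               # the deadline cannot be met anywhere
    best = min(candidates, key=lambda n: task.deadline - n.finish_time(task, now))
    best.queued_work += task.exec_time
    return best

peers = [Node("n1", 0.4), Node("n2", 0.05)]
print(place(Task(exec_time=0.2, deadline=0.3), Node("local", 0.5), peers, now=0.0))
```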
Abstract:
The design of programs for broadcast disks which incorporate real-time and fault-tolerance requirements is considered. A generalized model for real-time fault-tolerant broadcast disks is defined. It is shown that designing programs for broadcast disks specified in this model is closely related to the scheduling of pinwheel task systems. Some new results in pinwheel scheduling theory are derived, which facilitate the efficient generation of real-time fault-tolerant broadcast disk programs.
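For context, a pinwheel task system asks for an infinite slot schedule in which an item with gap a_i appears at least once in every window of a_i consecutive slots. The sketch below shows a well-known sufficient test and construction (round each gap down to a power of two, check that the resulting density is at most 1, then assign offsets greedily by increasing period); it is a textbook-style illustration, not the new results derived in the paper, and the helper names are invented.

```python
def round_down_pow2(a):
    return 1 << (a.bit_length() - 1)

def pinwheel_schedule(gaps):
    """Return one hyperperiod of a schedule, or None if the sufficient test fails."""
    periods = [round_down_pow2(a) for a in gaps]
    if sum(1.0 / p for p in periods) > 1.0:
        return None                        # test is sufficient, not necessary
    hyper = max(periods)                   # lcm of powers of two is the maximum
    slots = [-1] * hyper                   # -1 marks an idle slot
    for item in sorted(range(len(periods)), key=lambda i: periods[i]):
        p = periods[item]
        for offset in range(p):            # find a residue class mod p that is free
            if all(slots[t] == -1 for t in range(offset, hyper, p)):
                for t in range(offset, hyper, p):
                    slots[t] = item
                break
        else:
            return None                    # cannot happen when density <= 1
    return slots                           # repeat this block forever

# Example: gaps (2, 4, 8) give periods (2, 4, 8), density 0.875, so schedulable.
print(pinwheel_schedule([2, 4, 8]))
```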
Abstract:
The measurement of users’ attitudes towards and confidence with using the Internet is an important yet poorly researched topic. Previous research has encountered issues that serve to obfuscate rather than clarify. Such issues include a lack of distinction between the terms ‘attitude’ and ‘self-efficacy’, the absence of a theoretical framework to measure each concept, and failure to follow well-established techniques for the measurement of both attitude and self-efficacy. Thus, the primary aim of this research was to develop two statistically reliable scales which independently measure attitudes towards the Internet and Internet self-efficacy. This research addressed the outlined issues by applying appropriate theoretical frameworks to each of the constructs under investigation. First, the well-known three-component (affect, behaviour, cognition) model of attitudes was applied to previous Internet attitude statements. The scale was distributed to four large samples of participants. Exploratory factor analyses revealed four underlying factors in the scale: Internet Affect, Internet Exhilaration, Social Benefit of the Internet and Internet Detriment. The final scale contains 21 items, demonstrates excellent reliability and achieved excellent model fit in the confirmatory factor analysis. Second, Bandura’s (1997) model of self-efficacy was followed to develop a reliable measure of Internet self-efficacy. Data collected as part of this research suggest that there are ten main activities which individuals can carry out on the Internet. Preliminary analyses suggested that self-efficacy is confounded with previous experience; thus, individuals were invited to indicate how frequently they performed the listed Internet tasks in addition to rating their feelings of self-efficacy for each task. The scale was distributed to a sample of 841 participants. Results from the analyses suggest that the more frequently an individual performs an activity on the Internet, the higher their self-efficacy score for that activity. This suggests that frequency of use ought to be taken into account in individuals’ self-efficacy scores to obtain a ‘true’ self-efficacy score for the individual. Thus, a formula was devised to incorporate participants’ previous experience of Internet tasks in their Internet self-efficacy scores. This formula was then used to obtain an overall Internet self-efficacy score for participants. Following the development of both scales, gender and age differences were explored in Internet attitudes and Internet self-efficacy scores. The analyses indicated that there were no gender differences in Internet attitude or Internet self-efficacy scores. However, age group differences were identified for both attitudes and self-efficacy. Individuals aged 25-34 years achieved the highest scores on both the Internet attitude and Internet self-efficacy measures. Internet attitude and self-efficacy scores tended to decrease with age, with older participants achieving lower scores on both measures than younger participants. It was also found that the more exposure individuals had to the Internet, the higher their Internet attitude and Internet self-efficacy scores. Examination of the relationship between attitude and self-efficacy found a significant positive relationship between the two measures, suggesting that the two constructs are related. Implications of these findings and directions for future research are outlined in detail in the Discussion section of this thesis.
Abstract:
The percentage of subjects recalling each unit in a list or prose passage is considered as a dependent measure. When the same units are recalled in different tasks, processing is assumed to be the same; when different units are recalled, processing is assumed to be different. Two collections of memory tasks are presented, one for lists and one for prose. The relations found in these two collections are supported by an extensive reanalysis of the existing prose memory literature. The same set of words was learned by 13 different groups of subjects under 13 different conditions. Included were intentional free-recall tasks, incidental free recall following lexical decision, and incidental free recall following ratings of orthographic distinctiveness and emotionality. Although the nine free-recall tasks varied widely with regard to the amount of recall, the relative probability of recall for the words was very similar among the tasks. Imagery encoding and recognition produced relative probabilities of recall that were different from each other and from the free-recall tasks. Similar results were obtained with a prose passage. A story was learned by 13 different groups of subjects under 13 different conditions. Eight free-recall tasks, which varied with respect to incidental or intentional learning, retention interval, and the age of the subjects, produced similar relative probabilities of recall, whereas recognition and prompted recall produced relative probabilities of recall that were different from each other and from the free-recall tasks. A review of the prose literature was undertaken to test the generality of these results. Analysis of variance is the most common statistical procedure in this literature. If the relative probability of recall of units varied across conditions, a units-by-condition interaction would be expected. For the 12 studies that manipulated retention interval, an average of 21% of the variance was accounted for by the main effect of retention interval, 17% by the main effect of units, and only 2% by the retention interval by units interaction. Similarly, for the 12 studies that varied the age of the subjects, 6% of the variance was accounted for by the main effect of age, 32% by the main effect of units, and only 1% by the interaction of age by units. (ABSTRACT TRUNCATED AT 400 WORDS)
Abstract:
Noise is one of the main factors degrading the quality of original multichannel remote sensing data, and its presence affects classification efficiency, object detection, etc. Thus, pre-filtering is often used to remove noise and improve the solution of the final tasks of multichannel remote sensing. Recent studies indicate that the classical additive-noise model is not adequate for images formed by modern multichannel sensors operating in the visible and infrared bands. However, this fact is often ignored by researchers designing noise-removal methods and algorithms. Because of this, we focus on the classification of multichannel remote sensing images when signal-dependent noise is present in the component images. Three approaches to the filtering of multichannel images under the considered noise model are analysed, all based on the discrete cosine transform (DCT) computed in blocks. The study is carried out not only in terms of the conventional efficiency metric used in filtering (MSE) but also in terms of multichannel data classification accuracy (probability of correct classification, confusion matrix). The proposed classification system combines a pre-processing stage, where a DCT-based filter processes the blocks of the multichannel remote sensing image, with the classification stage. Two modern classifiers are employed: a radial basis function neural network and support vector machines. Simulations are carried out for a three-channel image from the Landsat TM sensor. Different cases of learning are considered: using noise-free samples of the test multichannel image, the noisy multichannel image, and the pre-filtered one. It is shown that using the pre-filtered image for training produces better classification than learning from the noisy image. It is demonstrated that the best results for both groups of quantitative criteria are obtained when the proposed 3D discrete cosine transform filter equipped with a variance-stabilizing transform is applied. The classification results obtained for data pre-filtered in different ways agree for both considered classifiers. A comparison of classifier performance is carried out as well. The radial basis function neural network classifier is less sensitive to noise in the original images, but after pre-filtering the performance of both classifiers is approximately the same.
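A rough sketch of such a pre-filtering stage is given below: a variance-stabilizing transform (here the Anscombe transform, which assumes purely Poisson-like signal-dependent noise) followed by block-wise 2-D DCT hard thresholding applied per channel. The block size, threshold rule and per-channel processing are illustrative and differ from the paper's 3-D DCT filter, which processes the channels jointly.

```python
import numpy as np
from scipy.fft import dctn, idctn

def vst(x):
    return 2.0 * np.sqrt(np.maximum(x, 0.0) + 3.0 / 8.0)   # Anscombe transform

def inv_vst(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0                       # simple (biased) inverse

def dct_denoise_channel(img, block=8, sigma=1.0, k=2.7):
    """Hard-threshold the 2-D DCT of non-overlapping blocks of one channel.
    Assumes the image dimensions are multiples of the block size."""
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            coeffs = dctn(img[i:i + block, j:j + block], norm="ortho")
            dc = coeffs[0, 0]                                # preserve the block mean
            coeffs[np.abs(coeffs) < k * sigma] = 0.0         # hard thresholding
            coeffs[0, 0] = dc
            out[i:i + block, j:j + block] = idctn(coeffs, norm="ortho")
    return out

def prefilter(cube):
    """cube: (channels, H, W) array with signal-dependent noise."""
    return np.stack([inv_vst(dct_denoise_channel(vst(ch))) for ch in cube])
```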
Abstract:
The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA-caffeine-lactose tablets with a Raman microscope (λex = 785 nm, 3 μm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca. 40 μm in diameter. Spectra collected with a microscope from eight points on a 200 μm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, μr, was 1.19 with an unacceptably high standard deviation, σr, of 1.20. In contrast, with a conventional macro-Raman system (150 μm spot diameter), combined eight-grid-point data gave μr = 1.47 with σr = 0.16. A simple statistical model which could be used to predict σr under the various conditions used was developed. The model showed that the decrease in σr on moving to a 150 μm spot was too large to be due entirely to the increased spot diameter but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1-2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level. The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA-sorbitol (0-30% MDEA by mass). A simple univariate calibration model based on averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
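The sampling-error argument can be illustrated with a toy Monte Carlo: each probed spot sees only a finite number of discrete drug and diluent particles, so the measured intensity ratio fluctuates from point to point, and both probing a larger volume per point and co-adding more grid points shrink σr. All numbers below are invented; this is not the statistical model developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_sigma_r(n_points, particles_per_spot, frac_drug=0.5, n_tablets=2000):
    """Spread of the drug:diluent ratio when spectra from n_points grid points,
    each sampling particles_per_spot particles, are co-added per tablet."""
    ratios = []
    for _ in range(n_tablets):
        drug = rng.binomial(particles_per_spot, frac_drug, size=n_points)
        diluent = particles_per_spot - drug
        ratios.append(drug.sum() / max(diluent.sum(), 1))   # ratio of summed signals
    return np.std(ratios)

# Few particles per spot (micro) vs many (macro), and 8 vs 64 grid points.
for n_points, per_spot in [(8, 2), (8, 50), (64, 50)]:
    print(n_points, per_spot, round(simulated_sigma_r(n_points, per_spot), 3))
```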
Abstract:
Several studies have reported imitative deficits in autism spectrum disorder (ASD). However, it is still debated whether imitative deficits are specific to ASD or shared with clinical groups with similar mental impairment and motor difficulties. We investigated whether imitative tasks can be used to discriminate children with ASD from typically developing (TD) children and children with general developmental delay (GDD). We applied discriminant function analyses to the performance of these groups on three imitation tasks and on tests of dexterity, motor planning, verbal skills and theory of mind (ToM). The analyses revealed two significant dimensions. The first represented impairment of dexterity and verbal ability and discriminated TD from GDD children. Once these differences were accounted for, differences in ToM and the three imitation tasks accounted for a significant proportion of the remaining intergroup variance and discriminated the ASD group from the other groups. Further analyses revealed that inclusion of the imitative tasks increased the specificity and sensitivity of ASD classification and that the imitative tasks considered alone were able to reliably discriminate ASD, TD and GDD. The results suggest that imitation and theory of mind impairments in autism may stem from a common domain of origin separate from general cognitive and motor skills.
Abstract:
Introduction: Rhythm organises musical events into patterns and forms, and rhythm perception in music is usually studied by using metrical tasks. Metrical structure also plays an organisational function in the phonology of language, via speech prosody, and there is evidence for rhythmic perceptual difficulties in developmental dyslexia. Here we investigate the hypothesis that the accurate perception of musical metrical structure is related to basic auditory perception of rise time, and also to phonological and literacy development in children. Methods: A battery of behavioural tasks was devised to explore relations between musical metrical perception, auditory perception of amplitude envelope structure, phonological awareness (PA) and reading in a sample of 64 typically-developing children and children with developmental dyslexia. Results: We show that individual differences in the perception of amplitude envelope rise time are linked to musical metrical sensitivity, and that musical metrical sensitivity predicts PA and reading development, accounting for over 60% of variance in reading along with age and I.Q. Even the simplest metrical task, based on a duple metrical structure, was performed significantly more poorly by the children with dyslexia. Conclusions: The accurate perception of metrical structure may be critical for phonological development and consequently for the development of literacy. Difficulties in metrical processing are associated with basic auditory rise time processing difficulties, suggesting a primary sensory impairment in developmental dyslexia in tracking the lower-frequency modulations in the speech envelope.
Abstract:
The sensory abnormalities associated with disorders such as dyslexia, autism and schizophrenia have often been attributed to a generalized deficit in the visual magnocellular-dorsal stream and its auditory homologue. To probe magnocellular function, various psychophysical tasks are often employed that require the processing of rapidly changing stimuli. But is performance on these several tasks supported by a common substrate? To answer this question, we tested a cohort of 1060 individuals on four 'magnocellular tasks': detection of low-spatial-frequency gratings reversing in contrast at a high temporal frequency (so-called frequency-doubled gratings); detection of pulsed low-spatial-frequency gratings on a steady luminance pedestal; detection of coherent motion; and auditory discrimination of temporal order. Although all tasks showed test-retest reliability, only one pair shared more than 4 per cent of variance. Correlations within the set of 'magnocellular tasks' were similar to the correlations between those tasks and a 'non-magnocellular task', and there was little consistency between 'magnocellular deficit' groups comprising individuals with the lowest sensitivity for each task. Our results suggest that different 'magnocellular tasks' reflect different sources of variance, and thus are not general measures of 'magnocellular function'.
Abstract:
Task dataflow languages simplify the specification of parallel programs by dynamically detecting and enforcing dependencies between tasks. These languages are, however, often restricted to a single level of parallelism. This language design is reflected in the runtime system, where a master thread explicitly generates a task graph and worker threads execute ready tasks and wake up their dependents. Such an approach is incompatible with state-of-the-art schedulers such as the Cilk scheduler, which minimize the creation of idle tasks (work-first principle) and place all task creation and scheduling off the critical path. This paper proposes an extension to the Cilk scheduler in order to reconcile task dependencies with the work-first principle. We discuss the impact of task dependencies on the properties of the Cilk scheduler. Furthermore, we propose a low-overhead ticket-based technique for dependency tracking and enforcement at the object level. Our scheduler also supports renaming of objects in order to increase task-level parallelism. Renaming is implemented using versioned objects, a new type of hyperobject. Experimental evaluation shows that the unified scheduler is as efficient as the Cilk scheduler when tasks have no dependencies. Moreover, the unified scheduler is more efficient than SMPSS, a particular implementation of a task dataflow language.
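The ticket idea can be caricatured in a few lines: every spawned task takes one ticket per object it accesses and becomes ready only when each of those objects is "serving" its ticket; completing a task advances the counters and thereby wakes its dependents. The toy below serializes all accesses to an object and omits reader/writer distinctions, renaming and the Cilk work-stealing machinery, so it sketches only the bookkeeping, not the paper's runtime.

```python
from collections import deque

class VersionedObject:
    def __init__(self, name):
        self.name = name
        self.next_ticket = 0      # ticket handed to the next spawned task
        self.serving = 0          # ticket currently allowed to proceed

    def take_ticket(self):
        t = self.next_ticket
        self.next_ticket += 1
        return t

    def release(self):
        self.serving += 1         # wake the next dependent in program order

class Task:
    def __init__(self, fn, objects):
        self.fn = fn
        self.objects = objects
        self.tickets = [obj.take_ticket() for obj in objects]   # taken at spawn time

    def ready(self):
        return all(t == obj.serving for t, obj in zip(self.tickets, self.objects))

    def run(self):
        self.fn()
        for obj in self.objects:
            obj.release()

def execute(tasks):
    pending = deque(tasks)        # a real scheduler would use per-worker deques
    while pending:                # assumes every ticket is eventually served
        task = pending.popleft()
        if task.ready():
            task.run()
        else:
            pending.append(task)  # not ready yet; retry later

a = VersionedObject("a")
execute([Task(lambda: print("write a"), [a]), Task(lambda: print("read a"), [a])])
```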
Abstract:
Preparing social work students for the demands of changing social environments and promoting student mobility and interest in overseas employment opportunities have resulted in an increasing demand for international social work placements. The literature describes numerous examples of social work programmes that offer a wide variety of international placements. However, research into the actual benefit of undertaking an overseas placement is scant, with limited empirical evidence on the profile of the students participating, their experience of the tasks offered, the supervisory practice, and the outcomes for students' professional learning and careers. This study contributes to the existing body of literature by exploring the relevance of international field placements for students. It is unique in that it draws its sample from students who have already graduated, and so provides a distinctive perspective from which to compare their international placement with their other placement(s) as well as to evaluate the benefits and drawbacks for them in terms of their careers, employment opportunities and current professional practice.
Abstract:
In this paper a model of grid computation that supports both heterogeneity and dynamicity is presented. The model presupposes that user sites contain software components awaiting execution on the grid. User sites and grid sites interact by means of managers which control dynamic behaviour. The orchestration language ORC [9,10] offers an abstract means of specifying operations for resource acquisition and execution monitoring while allowing for the possibility of non-responsive hardware. It is demonstrated that ORC is sufficiently expressive to model typical kinds of grid interactions.
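Orc itself is not shown here; as a plain-Python stand-in, the asyncio sketch below captures the flavour of the interactions the model requires: a manager asks several grid sites for a resource in parallel, takes the first response, and treats sites that miss a deadline as non-responsive. Site names and delays are invented.

```python
import asyncio
import random

async def offer(site):
    # A non-responsive site simply fails to answer within the deadline.
    await asyncio.sleep(random.uniform(0.1, 3.0))
    return site

async def acquire_resource(sites, deadline=1.0):
    tasks = [asyncio.create_task(offer(s)) for s in sites]
    done, pending = await asyncio.wait(
        tasks, timeout=deadline, return_when=asyncio.FIRST_COMPLETED
    )
    for t in pending:
        t.cancel()                       # drop slow or dead sites
    return done.pop().result() if done else None

async def main():
    chosen = await acquire_resource(["siteA", "siteB", "siteC"])
    print("acquired:", chosen or "no responsive site")

asyncio.run(main())
```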
Abstract:
A PSS/E 32 model of a real section of the Northern Ireland electrical grid was dynamically controlled with Python 2.5, and in this manner data from a proposed wide area monitoring system were simulated. The area is of interest because it is a weakly coupled distribution grid with significant distributed generation. The data were used to create an optimization and protection metric that reflected reactive power flow, voltage profile, thermal overload and voltage excursions. Step changes in the metric were introduced upon the operation of special protection systems and on voltage excursions. A wide variety of grid conditions were simulated while tap changer positions and switched capacitor banks were iterated through, with the most desirable state returning the lowest optimization and protection metric. The optimized metric was compared against the metric generated from the standard system state returned by PSS/E. Various grid scenarios were explored involving an intact network and compromised networks (line loss) under summer maximum, summer minimum and winter maximum conditions. In each instance the output from the installed distributed generation was varied between 0 MW and 80 MW (120% of installed capacity). It is shown that in the grid models the triggering of special protection systems is delayed by between 1 MW and 6 MW (1.5% to 9% of capacity), with 3.5 MW being the average. The optimization and protection metric gives a quantitative value for system health and demonstrates the potential efficacy of wide area monitoring for protection and control.
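The search loop described above amounts to evaluating a composite metric for every candidate control setting and keeping the cheapest one. The sketch below shows only that structure: the weights, penalty terms and the simulate() stub are invented, whereas the study itself drives a PSS/E 32 model through its Python interface.

```python
from itertools import product

def metric(state):
    """Composite score: reactive flow + voltage deviation + overload and
    voltage-excursion penalties (illustrative weights)."""
    return (1.0 * abs(state["q_flow_mvar"])
            + 50.0 * abs(state["voltage_pu"] - 1.0)
            + 100.0 * max(0.0, state["thermal_loading"] - 1.0)
            + 1000.0 * state["voltage_excursion"])

def simulate(tap, caps_on):
    """Stub standing in for a load-flow run at one tap / capacitor setting."""
    return {"q_flow_mvar": 5.0 - 0.4 * tap - 2.0 * caps_on,
            "voltage_pu": 0.97 + 0.004 * tap + 0.01 * caps_on,
            "thermal_loading": 0.9 + 0.02 * caps_on,
            "voltage_excursion": 0}

# Iterate over tap steps and number of capacitor banks in service.
best = min(product(range(-8, 9), range(0, 3)),
           key=lambda setting: metric(simulate(*setting)))
print("best tap / capacitor setting:", best)
```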