774 results for GPGPU Parallel Computing
Optimal Methodology for Synchronized Scheduling of Parallel Station Assembly with Air Transportation
Abstract:
We present an optimal methodology for synchronized scheduling of production assembly with air transportation to achieve accurate delivery at minimized cost in a consumer electronics supply chain (CESC). The problem was motivated by a major PC manufacturer in the consumer electronics industry, which must schedule deliveries to meet customer needs in different parts of South East Asia. The overall problem is decomposed into two sub-problems: an air transportation allocation problem and an assembly scheduling problem. The air transportation allocation problem is formulated as a linear programming problem with earliness and tardiness penalties for job orders. The assembly scheduling problem requires sequencing the job orders on the assembly stations to minimize their waiting times before they are shipped by flights to their destinations. Hence the second sub-problem is modelled as a scheduling problem with earliness penalties, which are assumed to be independent of the job orders.
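As a rough illustration of the first sub-problem only, the sketch below sets up a tiny job-to-flight allocation as a linear program with earliness and tardiness penalties and solves it with scipy.optimize.linprog; the due dates, flight arrival times, penalty weights, and capacity figures are invented for the example and are not taken from the paper's model.

```python
# Illustrative sketch (not the paper's exact model): assign job orders to
# flights so that earliness/tardiness penalties are minimized, solved as an
# LP relaxation with scipy.optimize.linprog. All data below are invented.
import numpy as np
from scipy.optimize import linprog

due = np.array([10.0, 12.0, 15.0])        # job due dates (hypothetical)
arrive = np.array([9.0, 13.0, 16.0])      # flight arrival times (hypothetical)
alpha, beta = 1.0, 3.0                    # earliness / tardiness weights
n_jobs, n_flights = len(due), len(arrive)

# Cost of delivering job j on flight f: alpha*earliness + beta*tardiness.
cost = np.array([[alpha * max(0.0, d - a) + beta * max(0.0, a - d)
                  for a in arrive] for d in due]).ravel()

# Each job must be assigned to exactly one flight (its x's sum to 1).
A_eq = np.zeros((n_jobs, n_jobs * n_flights))
for j in range(n_jobs):
    A_eq[j, j * n_flights:(j + 1) * n_flights] = 1.0
b_eq = np.ones(n_jobs)

# Each flight carries at most 2 job orders (toy capacity constraint).
A_ub = np.zeros((n_flights, n_jobs * n_flights))
for f in range(n_flights):
    A_ub[f, f::n_flights] = 1.0
b_ub = np.full(n_flights, 2.0)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (n_jobs * n_flights))
print(res.x.reshape(n_jobs, n_flights))   # fractional assignment matrix
```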
Abstract:
Memory errors are a common cause of incorrect software execution and security vulnerabilities. We have developed two new techniques that help software continue to execute successfully through memory errors: failure-oblivious computing and boundless memory blocks. The foundation of both techniques is a compiler that generates code that checks accesses via pointers to detect out-of-bounds accesses. Instead of terminating or throwing an exception, the generated code takes another action that keeps the program executing without memory corruption. Failure-oblivious code simply discards invalid writes and manufactures values to return for invalid reads, enabling the program to continue its normal execution path. Code that implements boundless memory blocks stores invalid writes away in a hash table to return as the values for corresponding out-of-bounds reads. The net effect is to (conceptually) give each allocated memory block unbounded size and to eliminate out-of-bounds accesses as a programming error. We have implemented both techniques and acquired several widely used open source servers (Apache, Sendmail, Pine, Mutt, and Midnight Commander). With standard compilers, all of these servers are vulnerable to buffer overflow attacks as documented at security tracking web sites. Both failure-oblivious computing and boundless memory blocks eliminate these security vulnerabilities (as well as other memory errors). Our results show that our compiler enables the servers to execute successfully through buffer overflow attacks and continue to correctly service user requests without security vulnerabilities.
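The boundless-memory-block idea described above can be sketched conceptually in a few lines: out-of-bounds writes are stashed in a hash table keyed by offset, and out-of-bounds reads return the stashed value or a manufactured default. The toy Python class below is only an illustration of that behaviour, not the compiler instrumentation the authors implemented.

```python
class BoundlessBlock:
    """Toy illustration of a boundless memory block: in-bounds accesses use
    the real buffer; out-of-bounds writes go to a hash table and out-of-bounds
    reads return the stashed value or a manufactured default (0)."""

    def __init__(self, size):
        self._data = bytearray(size)
        self._overflow = {}                 # hash table for out-of-bounds offsets

    def write(self, index, value):
        if 0 <= index < len(self._data):
            self._data[index] = value
        else:
            self._overflow[index] = value   # store instead of corrupting memory

    def read(self, index):
        if 0 <= index < len(self._data):
            return self._data[index]
        return self._overflow.get(index, 0) # manufactured value if never written

block = BoundlessBlock(8)
block.write(100, 42)       # would be a buffer overflow with a plain array
print(block.read(100))     # 42: the out-of-bounds write is still visible
print(block.read(200))     # 0: manufactured value for an unwritten offset
```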
Abstract:
The accuracy of a 3D reconstruction using laser scanners is significantly determined by the detection of the laser stripe. Since the energy pattern of such a stripe corresponds to a Gaussian profile, it makes sense to detect the point of maximum light intensity (or peak) by computing the zero-crossing point of the first derivative of that Gaussian profile. However, because noise is present in every physical process, such as electronic image formation, it is not advisable to compute the derivative of the stripe image directly in almost any situation unless a prior filtering stage is applied. Considering that stripe scanning is an inherently row-parallel process, every row of a given image must be processed independently in order to compute the corresponding peak position in that row. This paper reports on the use of digital filtering techniques to cope with the scanning of different surfaces with different optical properties and different noise levels, leading to the proposal of a more accurate numerical peak detector, even at very low signal-to-noise ratios.
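A minimal sketch of the underlying peak-detection idea, assuming a derivative-of-Gaussian filter as the combined smoothing-and-differentiation stage (the paper's actual filter design may differ): each image row is convolved with the filter and the sub-pixel zero crossing of the result is taken as the stripe peak. The Gaussian width and the synthetic row are placeholders for illustration.

```python
import numpy as np

def stripe_peak(row, sigma=2.0):
    """Estimate the sub-pixel peak position of a laser stripe in one image row
    by convolving with a derivative-of-Gaussian filter and locating the
    positive-to-negative zero crossing of the filtered derivative."""
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    dgauss = -x * np.exp(-x**2 / (2 * sigma**2))      # derivative of Gaussian
    d = np.convolve(row.astype(float), dgauss, mode='same')
    # Candidate zero crossings where the smoothed derivative changes sign.
    idx = np.where((d[:-1] > 0) & (d[1:] <= 0))[0]
    if idx.size == 0:
        return None
    i = idx[np.argmax(row[idx])]                      # strongest candidate
    # Linear interpolation between samples i and i+1 for sub-pixel accuracy.
    return i + d[i] / (d[i] - d[i + 1])

# Synthetic noisy Gaussian stripe centred at pixel 37.5 (illustration only).
cols = np.arange(100)
row = np.exp(-(cols - 37.5)**2 / 8.0) + 0.05 * np.random.randn(100)
print(stripe_peak(row))
```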
Abstract:
This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low-level stage where the algorithms deal with a large amount of data. In a motion estimation algorithm, correspondences between two images have to be solved at this low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Due to its regular processing scheme, a parallel implementation of the correspondence problem is an adequate approach to reduce the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach using a parallel organisation of every processor in the architecture is proposed.
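As an illustration of the matching criterion mentioned above, the sketch below computes zero-mean normalised cross-correlation between a template window and every candidate position in a search region; the window sizes and synthetic images are placeholders, and the processor-array organisation proposed in the paper is not modelled here, only the serial criterion whose inner loop such an architecture would distribute.

```python
import numpy as np

def zncc(patch, window):
    """Zero-mean normalised cross-correlation between two equally sized
    patches; tolerant of uniform gain/offset changes in illumination."""
    a = patch - patch.mean()
    b = window - window.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(template, search, step=1):
    """Slide `template` over `search` and return the offset with the highest
    correlation score; this inner loop is the regular, data-parallel part."""
    th, tw = template.shape
    best, best_off = -2.0, (0, 0)
    for y in range(0, search.shape[0] - th + 1, step):
        for x in range(0, search.shape[1] - tw + 1, step):
            score = zncc(template, search[y:y + th, x:x + tw])
            if score > best:
                best, best_off = score, (y, x)
    return best_off, best

# Toy example: find an 8x8 template inside a 32x32 search region.
rng = np.random.default_rng(0)
search = rng.random((32, 32))
template = search[10:18, 12:20].copy()
print(best_match(template, search))   # expected offset (10, 12)
```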
Abstract:
Guide for computing in the School of Mathematics. Intended for new staff and PG students. Originally written by Anton Prowse from a number of earlier documents.
Abstract:
Reference List for UK Computing Law
Abstract:
Group Poster for UK Computing Law
Abstract:
Zip file containing source code and database dump for the resource
Abstract:
Collection of the poster, reference list, resource source code, and database dump
Abstract:
The Social Computing Data Repository hosts data from a collection of many different social media sites, most of which have blogging capacity. Some of the prominent social media sites included in this repository are BlogCatalog, Twitter, MyBlogLog, Digg, StumbleUpon, del.icio.us, MySpace, LiveJournal, The Unofficial Apple Weblog (TUAW), Reddit, etc. The repository contains various facets of blog data, including blog site metadata such as user-defined tags, predefined categories, and blog site description; blog-post-level metadata such as user-defined tags and the date and time of posting; blog posts; blog post mood (defined as the blogger's emotions when (s)he wrote the post); blogger name; blog post comments; and the blogger's social network.
Abstract:
This is actually a timeline of Learning Technology, but it contains all the important dates.
Abstract:
Background reading for coursework to prepare a technical report as part of the orientation phase. These items are business documents (i.e. grey literature) which might be read as a prelude or complement to finding information in peer-reviewed academic publications. Grey literature links and articles to be used in preparing the technical report. See also the overview guidance document for this assignment: http://www.edshare.soton.ac.uk/8017/
Abstract:
By antipirates