999 results for Automatic weight assignment


Relevance: 20.00%

Publisher:

Abstract:

Ubiquitous Computing is an emerging paradigm that enables users to access preferred services wherever they are, whenever they want, and in the way they need, with zero administration. While moving from one place to another, the user does not need to specify and configure the surrounding environment; the system initiates the necessary adaptation by itself to cope with the changing environment. In this paper we propose a system that provides context-aware ubiquitous multimedia services without user intervention. We analyze the context of the user based on weights, identify the UMMS (Ubiquitous Multimedia Service) based on the collected context information and the user profile, search for the optimal server to provide the required service, and then adapt the service according to the user's local environment and preferences. The experiment was conducted several times with different context parameters, weights, and user preferences. The results are quite encouraging.
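The weight-based context analysis described above can be sketched as a simple weighted-sum scoring rule. This is a hypothetical illustration only: the names (`CONTEXT_WEIGHTS`, `score_service`, `pick_service`) and all numbers are invented, not taken from the paper.

```python
# Hypothetical weighted-sum context scoring; weights and match values invented.
CONTEXT_WEIGHTS = {"location": 0.4, "time": 0.2, "device": 0.2, "network": 0.2}

def score_service(context_match):
    """Combine per-parameter match values (0..1) into one weighted score."""
    return sum(CONTEXT_WEIGHTS[p] * v for p, v in context_match.items())

def pick_service(candidates):
    """Return the candidate service whose weighted context score is highest."""
    return max(candidates, key=lambda c: score_service(c["match"]))

candidates = [
    {"name": "video_hd",  "match": {"location": 1.0, "time": 0.5, "device": 0.2, "network": 0.3}},
    {"name": "video_low", "match": {"location": 1.0, "time": 0.5, "device": 0.9, "network": 0.9}},
]
print(pick_service(candidates)["name"])  # video_low: better device/network match
```

Changing the weights changes which service wins, which is the sense in which the selection is "based on weights".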

Relevance: 20.00%

Publisher:

Abstract:

Channel-aware assignment of sub-channels to users in the downlink of an OFDMA system demands extensive feedback of channel state information (CSI) to the base station. Since the feedback bandwidth is often very scarce, schemes that limit feedback are necessary. We develop a novel, low feedback splitting-based algorithm for assigning each sub-channel to its best user, i.e., the user with the highest gain for that sub-channel among all users. The key idea behind the algorithm is that, at any time, each user contends for the sub-channel on which it has the largest channel gain among the unallocated sub-channels. Unlike other existing schemes, the algorithm explicitly handles multiple access control aspects associated with the feedback of CSI. A tractable asymptotic analysis of a system with a large number of users helps design the algorithm. It yields 50% to 65% throughput gains compared to an asymptotically optimal one-bit feedback scheme, when the number of users is as small as 10 or as large as 1000. The algorithm is fast and distributed, and scales with the number of users.
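The contention idea (each user bids for the sub-channel on which it has the largest gain among the unallocated ones) can be sketched as a toy centralized simulation. This greedy version only illustrates the bidding step; it is not the paper's distributed splitting/feedback protocol, and all names and gain values are invented.

```python
# Toy centralized simulation of the per-sub-channel contention idea.
def assign_subchannels(gains):
    """gains[u][s] = channel gain of user u on sub-channel s."""
    n_users, n_sub = len(gains), len(gains[0])
    unallocated = set(range(n_sub))
    assignment = {}                          # sub-channel -> winning user
    while unallocated:
        bids = {}                            # sub-channel -> [(gain, user), ...]
        for u in range(n_users):
            # Each user contends for its best unallocated sub-channel.
            s = max(unallocated, key=lambda s: gains[u][s])
            bids.setdefault(s, []).append((gains[u][s], u))
        for s, contenders in bids.items():
            _, winner = max(contenders)      # highest-gain bidder wins
            assignment[s] = winner
            unallocated.discard(s)
    return assignment

gains = [[0.9, 0.2], [0.8, 0.7]]
print(assign_subchannels(gains))  # {0: 0, 1: 1}
```

At least one sub-channel is allocated per round, so the loop terminates after at most as many rounds as there are sub-channels.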

Relevance: 20.00%

Publisher:

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Thus, our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
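A minimal sketch of the CPU/GPU placement trade-off described above, under a deliberately simple cost model: a kernel goes to the GPU only when its estimated GPU time plus transfer cost beats its CPU time. This is not MEGHA's actual heuristic; all names and constants are illustrative.

```python
# Illustrative placement heuristic (not MEGHA's algorithm); constants invented.
def map_kernels(kernels, gpu_speedup=10.0, transfer_cost_per_mb=0.5):
    placement = {}
    for k in kernels:
        cpu_time = k["cpu_time"]
        # GPU pays a data-transfer cost on top of its faster compute time.
        gpu_time = cpu_time / gpu_speedup + k["data_mb"] * transfer_cost_per_mb
        placement[k["name"]] = "gpu" if gpu_time < cpu_time else "cpu"
    return placement

kernels = [
    {"name": "matmul",      "cpu_time": 100.0, "data_mb": 8.0},  # data-parallel
    {"name": "scalar_loop", "cpu_time": 2.0,   "data_mb": 4.0},  # transfer-bound
]
print(map_kernels(kernels))  # {'matmul': 'gpu', 'scalar_loop': 'cpu'}
```

The example captures why control-flow-dominated scalar regions stay on the CPU: the transfer cost swamps any compute speedup.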

Relevance: 20.00%

Publisher:

Abstract:

Analysis of high-resolution satellite images has been an important research topic for urban analysis, and automatic road network extraction is one of its key tasks. Two approaches for road extraction, based on Level Set and Mean Shift methods, are proposed. Extracting roads directly from an original image is difficult and computationally expensive due to the presence of other road-like features with straight edges. The image is therefore preprocessed to reduce noise (buildings, parking lots, vegetation regions, and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (exploiting the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. 1 m resolution IKONOS data were used for the experiments.
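The median-filtering step for suppressing small nonlinear noise segments can be sketched with a minimal pure-Python 3x3 filter; the input grid below is invented for illustration.

```python
# Minimal 3x3 median filter: isolated specks are replaced by the local median,
# while extended (road-like) structures survive. Border pixels are left as-is.
def median_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]        # median of the 9 neighborhood values
    return out

# An isolated bright speck in a dark region is removed:
img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(median_filter_3x3(img)[1][1])  # 0
```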

Relevance: 20.00%

Publisher:

Abstract:

The goal of optimization in vehicle design is often blurred by the myriad requirements belonging to attributes that may not be closely related. If solutions are sought by separately optimizing attribute performance-related objectives, starting from a common baseline design configuration as in a traditional design environment, it becomes an arduous task to integrate the potentially conflicting solutions into one satisfactory design. It may thus be more desirable to carry out a combined multi-disciplinary design optimization (MDO) with vehicle weight as the objective function and cross-functional attribute performance targets as constraints. For the particular case of vehicle body structure design, the initial design is likely to be arrived at by taking into account styling, packaging, and market-driven requirements. The problem with performing a combined cross-functional optimization is the time required to run CAE algorithms that can provide a single optimal solution for heterogeneous areas such as NVH and crash safety. In the present paper, a practical MDO methodology is suggested that can be applied to weight optimization of automotive body structures by specifying constraints on frequency and crash performance. Because of the reduced number of cases to be analyzed for crash safety in comparison with other MDO approaches, the present methodology can generate a single size-optimized solution without recourse to empirical techniques such as response-surface-based prediction of crash performance and the associated successive response-surface updating for convergence. An example of weight optimization of the spaceframe-based BIW of an aluminum-intensive vehicle is given to illustrate the steps involved in the current optimization process.

Relevance: 20.00%

Publisher:

Abstract:

The blending of perfluorinated bile ester derivatives with the gelator 2,3-didecyloxyanthracene (DDOA) yields a new class of hybrid organo- and aerogels displaying a combination of optical and mechanical properties that differ from those of the pure gels. Indeed, the nanofibers constituting the hybrid organogels emit polarized blue light and display dichroic near-UV absorption via the achiral DDOA molecules, thanks to their association with a chiral bile ester. Moreover, the thermal stability and the mechanical yield stress of the mixed organogels in DMSO are enhanced for blends of DDOA with the deoxycholic gelator (DC11) having a C-11 chain, as compared to the pure components' gels. When the chain length of the ester is increased to C-13 (DC13), a novel compound for aerogel formation directly in scCO2 is obtained under the studied conditions. A mixture of this compound with DDOA is also able to gelate scCO2, leading to novel composite aerogel materials. As revealed by SAXS measurements, the hybrid and the pure DDOA and DC13 aerogels display very similar cell parameters. These SAXS experiments suggest that crystallographic conditions are very favorable for the growth of hybrid molecular arrangements in which DDOA and DC13 units could be interchanged. Specific molecular interactions between the two components are not always a prerequisite for the formation of a hybrid nanostructured material in which the components mutually induce properties.

Relevance: 20.00%

Publisher:

Abstract:

Managing the heat produced by computer processors is an important issue today, especially as processor sizes decrease rapidly while transistor counts increase rapidly. This poster describes a preliminary study of adding carbon nanotubes (CNTs) to a standard silicon paste covering a CPU. Measurements were made in two rounds of tests to compare the rate of cool-down with and without CNTs present. The silicon paste acts as an interface between the CPU and the heat sink, increasing the rate of heat transfer away from the CPU. CNTs were added to the silicon paste at 0.05% by weight; they were not aligned. A series of K-type thermocouples was used to measure the temperature as a function of time in the vicinity of the CPU following its shut-off. An Omega data acquisition system was attached to the thermocouples. The CPU temperature was not measured directly because attaching a thermocouple would have prevented the automatic shut-off. A thermocouple in the paste containing the CNTs actually reached a higher temperature than in the standard paste, an effect easily explained. But the rate of cooling with the CNTs was about 4.55% better.
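A cool-down comparison of this kind can be sketched under the common assumption of Newton's law of cooling, T(t) = T_amb + (T0 - T_amb)e^(-kt), where the fitted constant k is the "rate of cooling" being compared. All temperatures below are invented for illustration and do not reproduce the poster's 4.55% figure.

```python
import math

# Estimate the Newton cooling constant k from two (time, temperature) samples,
# assuming exponential decay toward ambient temperature t_amb.
def cooling_constant(t0, temp0, t1, temp1, t_amb):
    return math.log((temp0 - t_amb) / (temp1 - t_amb)) / (t1 - t0)

# Invented readings: both pastes start at 80 C, ambient 25 C, sampled at 60 s.
k_plain = cooling_constant(0, 80.0, 60, 50.0, 25.0)   # without CNTs
k_cnt   = cooling_constant(0, 80.0, 60, 48.9, 25.0)   # with CNTs (cools faster)
improvement = (k_cnt - k_plain) / k_plain * 100
print(f"{improvement:.1f}% faster cooling")
```

A larger k means the temperature approaches ambient faster, which is how a percentage improvement in cooling rate can be quantified from thermocouple logs.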

Relevance: 20.00%

Publisher:

Abstract:

We analyze the utility of the edge-cracked semicircular disk (ECSD) for rapid assessment of fracture toughness using compressive loading. Continuing our earlier work on the ECSD, a theoretical examination here leads to a novel way of synthesizing weight functions using two distinct form factors. The efficacy of the ECSD mode-I weight function synthesized using the displacement and form-factor methods is demonstrated by comparison with finite element results. The theory of elasticity, in conjunction with the finite element method, is utilized to analyze the crack-opening potency of the ECSD under eccentric compression and to explore newer configurations of the ECSD for fracture testing.

Relevance: 20.00%

Publisher:

Abstract:

Latent variable methods, such as PLCA (Probabilistic Latent Component Analysis), have been successfully used for the analysis of non-negative signal representations. In this paper, we formulate PLCS (Probabilistic Latent Component Segmentation), which models each time frame of a spectrogram as a spectral distribution. Given the signal spectrogram, the segmentation boundaries are estimated using a maximum-likelihood approach. For an efficient solution, the algorithm imposes a hard constraint that each segment is modelled by a single latent component. This hard constraint makes ML boundary estimation solvable by dynamic programming. Unlike earlier ML segmentation techniques, the PLCS framework does not impose a parametric assumption. PLCS can be naturally extended to model coarticulation between successive phones. Experiments on the TIMIT corpus show that the proposed technique is promising compared to state-of-the-art speech segmentation algorithms.
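The dynamic-programming boundary estimation can be sketched with a simpler stand-in criterion: within-segment squared error instead of the paper's likelihood. Structure aside, the recurrence is the same shape (best cost of the first j frames using s segments); all names and data are illustrative.

```python
# DP segmentation: split seq into k contiguous segments minimizing total SSE.
def segment(seq, k):
    n = len(seq)
    prefix, prefix_sq = [0.0], [0.0]
    for v in seq:                        # prefix sums for O(1) segment cost
        prefix.append(prefix[-1] + v)
        prefix_sq.append(prefix_sq[-1] + v * v)

    def sse(i, j):                       # squared error of segment seq[i:j]
        m = j - i
        mean = (prefix[j] - prefix[i]) / m
        return (prefix_sq[j] - prefix_sq[i]) - m * mean * mean

    INF = float("inf")
    cost = [[INF] * (k + 1) for _ in range(n + 1)]
    back = [[0] * (k + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for j in range(1, n + 1):
        for s in range(1, min(j, k) + 1):
            for i in range(s - 1, j):    # last segment is seq[i:j]
                c = cost[i][s - 1] + sse(i, j)
                if c < cost[j][s]:
                    cost[j][s], back[j][s] = c, i
    bounds, j = [], n                    # recover boundaries by backtracking
    for s in range(k, 0, -1):
        bounds.append(j)
        j = back[j][s]
    return sorted(bounds[1:])            # interior boundaries only

print(segment([1, 1, 1, 9, 9, 9], 2))   # [3]
```

In PLCS the per-segment cost would be the negative log-likelihood of the frames under a single latent component, but the DP over boundaries is unchanged.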

Relevance: 20.00%

Publisher:

Abstract:

This paper presents the classification, representation, and extraction of deformation features in sheet-metal parts. The thickness is constant for these shape features, and hence they are also referred to as constant-thickness features. A deformation feature is represented as a set of faces with a characteristic arrangement among them. Deformation of the base-sheet or forming of material creates Bends and Walls with respect to a base-sheet or a reference plane. These are referred to as Basic Deformation Features (BDFs). Compound deformation features having two or more BDFs are defined as characteristic combinations of Bends and Walls and represented as a graph called the Basic Deformation Features Graph (BDFG). The graph therefore represents a compound deformation feature uniquely. The characteristic arrangement of the faces and the types of bends belonging to the feature decide the type and nature of the deformation feature. Algorithms have been developed to extract and identify deformation features from a CAD model of sheet-metal parts. The proposed algorithm does not require folding and unfolding of the part as intermediate steps to recognize deformation features. Representations of typical features are illustrated, and results of extracting these deformation features from typical sheet-metal parts are presented and discussed. (C) 2013 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Publisher:

Abstract:

The problem of finding a satisfying assignment that minimizes the number of variables set to 1 is NP-complete even for a satisfiable 2-SAT formula. We call this problem MIN ONES 2-SAT. It generalizes the well-studied problem of finding the smallest vertex cover of a graph, which can be modeled using a 2-SAT formula with no negative literals. The natural parameterized version of the problem asks for a satisfying assignment of weight at most k. In this paper, we present a polynomial-time reduction from MIN ONES 2-SAT to VERTEX COVER that does not increase the parameter and ensures that the number of vertices in the reduced instance equals the number of variables of the input formula. Consequently, we conclude that this problem also has a simple 2-approximation algorithm and a (2k - c log k)-variable kernel, subsuming (or, in the case of kernels, improving) the results known earlier. Further, the problem admits algorithms for the parameterized and optimization versions whose runtimes will always match the runtimes of the best-known algorithms for the corresponding versions of VERTEX COVER. Finally, we show that the optimum value of the LP relaxation of MIN ONES 2-SAT and that of the corresponding VERTEX COVER are the same. This implies that the (recent) results on the VERTEX COVER version parameterized above the optimum value of its LP relaxation carry over to the MIN ONES 2-SAT version parameterized above the optimum of its LP relaxation. (C) 2013 Elsevier B.V. All rights reserved.
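The connection to vertex cover can be illustrated directly: each edge {u, v} becomes the positive clause (u OR v), so a minimum-weight satisfying assignment is exactly a minimum vertex cover. The brute-force search below is for illustration only; it is not the paper's polynomial-time reduction.

```python
from itertools import product

# MIN ONES 2-SAT by exhaustive search (exponential; illustration only).
# clauses: list of (u, v) positive-literal pairs over variables 0..n_vars-1.
def min_ones_2sat(n_vars, clauses):
    best = None
    for assign in product((0, 1), repeat=n_vars):
        # A positive clause (u OR v) is satisfied if either variable is 1.
        if all(assign[u] or assign[v] for u, v in clauses):
            if best is None or sum(assign) < sum(best):
                best = assign
    return best

# Path graph 0-1-2: edges become clauses; {1} is a minimum vertex cover.
edges = [(0, 1), (1, 2)]
cover = min_ones_2sat(3, edges)
print(cover)  # (0, 1, 0)
```

Because every clause here is positive, the minimum-weight satisfying assignments are precisely the minimum vertex covers of the underlying graph.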

Relevance: 20.00%

Publisher:

Abstract:

Exploiting the performance potential of GPUs requires managing the data transfers to and from them efficiently, which is an error-prone and tedious task. In this paper, we develop a software coherence mechanism to fully automate all data transfers between the CPU and GPU without any assistance from the programmer. Our mechanism uses compiler analysis to identify potential stale accesses and uses a runtime to initiate transfers as necessary. This allows us to avoid the redundant transfers exhibited by all other existing automatic memory management proposals. We integrate our automatic memory manager into the X10 compiler and runtime, and find that it not only results in smaller and simpler programs but also eliminates redundant memory transfers. Tested on eight programs ported from the Rodinia benchmark suite, it achieves (i) a 1.06x speedup over hand-tuned manual memory management and (ii) a 1.29x speedup over another recently proposed compiler-runtime automatic memory management system. Compared to other existing runtime-only and compiler-only proposals, it also transfers 2.2x to 13.3x less data on average.
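The stale-access idea can be sketched as a validity-flag protocol: each array tracks which copies (CPU, GPU) are up to date, and a transfer is issued only when the requested copy is stale. The `Array` class below is a hypothetical toy model, not the X10 runtime API.

```python
# Toy software-coherence model: transfer only on access to a stale copy.
class Array:
    def __init__(self):
        self.valid = {"cpu": True, "gpu": False}  # data starts on the CPU
        self.transfers = 0

    def access(self, device):
        if not self.valid[device]:      # stale copy: transfer, then mark valid
            self.transfers += 1
            self.valid[device] = True
        return self

    def write(self, device):
        self.access(device)
        other = "gpu" if device == "cpu" else "cpu"
        self.valid[other] = False       # a write invalidates the other copy
        return self

a = Array()
a.write("gpu")      # 1 transfer (CPU -> GPU)
a.access("gpu")     # no transfer: GPU copy still valid
a.access("gpu")     # no transfer
a.access("cpu")     # 1 transfer back (GPU write made the CPU copy stale)
print(a.transfers)  # 2
```

A naive scheme that transfers on every kernel launch would have moved data four times here; tracking staleness cuts that to two, which is the redundancy the paper's mechanism eliminates.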

Relevance: 20.00%

Publisher:

Abstract:

The study demonstrates the utility of a ternary ion-pair complex formed among BINOL (1,1'-Bi-2-naphthol), a carboxylic acid, and an organic base, such as 4-dimethylaminopyridine (DMAP) or 1,4-diazabicyclo[2.2.2]octane (DABCO), as a versatile chiral solvating agent (CSA) for the enantiodiscrimination of carboxylic acids, the measurement of enantiomeric excess (ee), and the assignment of the absolute configuration of hydroxy acids. The proposed ternary-complex mechanism has wider application for testing enantiopurity, owing to the fact that a binary mixture using BINOL alone does not serve as a solvating agent for their discrimination. In addition, the developed protocol has excellent utility for the assignment of the absolute configurations of hydroxy acids.

Relevance: 20.00%

Publisher:

Abstract:

Three-component chiral derivatization protocols are proposed for the assignment of the absolute configurations of chiral primary amines and chiral hydroxy acids using 1H NMR. The protocols involve simple mixing of the ternary components in CDCl3, followed by stirring for 15 min. The spectra can be recorded directly, without invoking any separation method, unlike with many other chiral derivatizing agents. The protocols permit the analysis in less than 15 min, making them convenient and effective for the assignment of the absolute configurations of primary amines and hydroxy acids.

Relevance: 20.00%

Publisher:

Abstract:

Multi-GPU machines are being increasingly used in high-performance computing. Each GPU in such a machine has its own memory and does not share an address space with either the host CPU or the other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. Existing works that propose to automate data allocation for GPUs have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs, and scalability. We propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, called the Bounding-Box-based Memory Manager (BBMM). At runtime, BBMM performs standard set operations such as union, intersection, and difference, and finds subset and superset relations, on hyperrectangular regions of array data (bounding boxes). It uses these operations, along with some compiler assistance, to identify, allocate, and manage the data required by applications in terms of disjoint bounding boxes. This allows it to (1) allocate exactly or nearly as much data as is required by the computations running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, maximize utilization of the combined memory on multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling schemes, whether static or dynamic. Experiments run on a four-GPU machine with various scientific programs showed that BBMM reduces data allocations on each GPU by up to 75% compared to current allocation schemes, yields performance of at least 88% of manually written code, and allows excellent weak scaling.
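The bounding-box set operations BBMM relies on can be sketched as follows. Boxes are represented as lists of half-open (lo, hi) intervals, one per dimension; all names and data are illustrative, not BBMM's implementation.

```python
# Minimal hyperrectangle (bounding box) operations on half-open intervals.
def intersect(a, b):
    """Intersection of two boxes, or None if they are disjoint."""
    out = []
    for (alo, ahi), (blo, bhi) in zip(a, b):
        lo, hi = max(alo, blo), min(ahi, bhi)
        if lo >= hi:                     # empty in this dimension -> disjoint
            return None
        out.append((lo, hi))
    return out

def contains(a, b):
    """True if box a is a superset of box b (per-dimension containment)."""
    return all(alo <= blo and bhi <= ahi for (alo, ahi), (blo, bhi) in zip(a, b))

def volume(a):
    """Number of array elements covered by the box."""
    v = 1
    for lo, hi in a:
        v *= hi - lo
    return v

a = [(0, 10), (0, 10)]                   # a 10x10 region of a 2-D array
b = [(5, 15), (2, 8)]
print(intersect(a, b))                   # [(5, 10), (2, 8)]
print(contains(a, b))                    # False
print(volume(intersect(a, b)))           # 30
```

Subset/superset tests of this kind are what let a runtime detect that a needed region is already resident in an existing buffer, so data can be reused instead of re-transferred.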