123 results for set based design


Relevance:

40.00%

Publisher:

Abstract:

The current paper suggests a new procedure for designing helmets for head impact protection for users such as motorcycle riders. In the approach followed here, a helmet is mounted on a featureless Hybrid III headform of the type used in assessing vehicles for compliance with the FMVSS 201 regulation in the USA for upper interior head impact safety. The requirement adopted in that standard, i.e., not exceeding a threshold HIC(d) limit of 1000, is applied in the present study as a likely criterion for judging the efficacy of helmets. An impact velocity of 6 m/s (approximately 13.4 mph) for the helmet-headform system striking a rigid target is likely to be acceptable for ascertaining a helmet's effectiveness as a countermeasure for minimizing the risk of severe head injury. The proposed procedure is demonstrated with the help of a validated LS-DYNA model of a featureless Hybrid III headform in conjunction with a helmet model comprising an outer polypropylene shell, to the inner surface of which is bonded protective polyurethane foam padding of a given thickness. Based on simulation results of impact on a rigid surface, it appears that a minimum foam padding thickness of 40 mm is necessary for obtaining an acceptable value of HIC(d).
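As a rough illustration of the acceptance criterion used above, the sketch below (not from the paper) computes HIC from a resultant headform acceleration trace and converts it to HIC(d) using the FMVSS 201 free-motion-headform relation HIC(d) = 0.75446 HIC + 166.4; the half-sine test pulse and the 36 ms window length are assumptions chosen only for illustration. In a design loop of the kind described above, the acceleration trace would instead come from the LS-DYNA helmet-headform simulation for each candidate foam thickness.

import numpy as np

def hic(t, a_g, max_window=0.036):
    # Head Injury Criterion from a resultant acceleration trace.
    # t: time in seconds, a_g: resultant acceleration in g's,
    # max_window: maximum averaging interval (36 ms assumed here; check the standard).
    v = np.concatenate(([0.0], np.cumsum(0.5 * (a_g[1:] + a_g[:-1]) * np.diff(t))))
    best, n = 0.0, len(t)
    for i in range(n - 1):
        for j in range(i + 1, n):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            a_mean = (v[j] - v[i]) / dt
            best = max(best, a_mean ** 2.5 * dt)
    return best

def hic_d(hic_value):
    # FMVSS 201 free-motion-headform conversion
    return 0.75446 * hic_value + 166.4

# synthetic half-sine pulse, purely illustrative (150 g peak over 15 ms)
t = np.linspace(0.0, 0.015, 301)
a = 150.0 * np.sin(np.pi * t / 0.015)
print(f"HIC(d) = {hic_d(hic(t, a)):.0f}")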

Relevance:

40.00%

Publisher:

Abstract:

This paper presents the design and implementation of a learning controller for Automatic Generation Control (AGC) in power systems based on a reinforcement learning (RL) framework. In contrast to the recent RL scheme for AGC proposed by us, the present method permits handling of power system variables such as the Area Control Error (ACE) and deviations from scheduled frequency and tie-line flows as continuous variables. (In the earlier scheme, these variables had to be quantized into finitely many levels.) The optimal control law is arrived at in the RL framework by making use of the Q-learning strategy. Since the state variables are continuous, we propose the use of Radial Basis Function (RBF) neural networks to compute the Q-values for a given input state. Since in this application we cannot provide training data appropriate for the standard supervised learning framework, a reinforcement learning algorithm is employed to train the RBF network. We also employ a novel exploration strategy, based on a Learning Automata algorithm, for generating training samples during Q-learning. The proposed scheme, in addition to being simple to implement, inherits all the attractive features of an RL scheme, such as model-independent design, flexibility in control objective specification, and robustness. Two implementations of the proposed approach are presented. Through simulation studies, the attractiveness of this approach is demonstrated.
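A minimal sketch of the Q-learning-with-RBF idea described above is given below; the state normalisation, RBF centres and widths, the discrete action set, the reward, and the epsilon-greedy exploration are all placeholder assumptions (the paper itself uses a learning-automata-based exploration strategy and power-system-specific states and rewards).

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: normalised state = (ACE, delta_f, delta_Ptie), a few discrete actions.
centers = rng.uniform(-1.0, 1.0, size=(25, 3))    # RBF centres over the state space (assumed)
width = 0.5                                       # common RBF width (assumed)
actions = np.linspace(-0.1, 0.1, 5)               # candidate changes in generation set-point
weights = np.zeros((len(actions), len(centers)))  # linear Q(s, a) = w_a . phi(s)

def phi(state):
    # RBF feature vector for a continuous state
    d = np.linalg.norm(centers - state, axis=1)
    return np.exp(-(d / width) ** 2)

def q_values(state):
    return weights @ phi(state)

alpha, gamma, eps = 0.1, 0.9, 0.1

def q_update(s, a_idx, reward, s_next):
    # One Q-learning step with RBF function approximation
    target = reward + gamma * np.max(q_values(s_next))
    td_error = target - q_values(s)[a_idx]
    weights[a_idx] += alpha * td_error * phi(s)

def select_action(s):
    # epsilon-greedy stand-in for the learning-automata exploration used in the paper
    if rng.random() < eps:
        return int(rng.integers(len(actions)))
    return int(np.argmax(q_values(s)))

# single illustrative update with made-up state and reward
s, s_next = np.array([0.2, -0.1, 0.05]), np.array([0.15, -0.08, 0.04])
a = select_action(s)
q_update(s, a, reward=-abs(s_next[0]), s_next=s_next)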

Relevance:

40.00%

Publisher:

Abstract:

Tool design is a key parameter in friction stir welding (FSW), because the tool exerts close control over the quality of the weld. To optimize tool design and selection, it is essential to understand the mechanisms governing the formation of the weld. In this study, a set of experiments was conducted to systematically analyze the intrinsic mechanisms governing weld formation and to use that analysis to establish a logical basis for tool design. For this purpose, the experiments were conducted with different shoulder and pin geometries of the rotating tool in such a way that tool geometry was the only factor influencing formation of the weld. The results revealed that for a particular pin diameter there is an optimum shoulder diameter. Below this optimum shoulder diameter the weld does not form, while above it the overall symmetry of the weld is lost. Based on the experimental results, a mechanism for the formation of the friction stir weld is proposed. Combining the experimental results with the proposed mechanism helps establish the set of welding parameters for a given material.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents the design and performance analysis of a detector based on suprathreshold stochastic resonance (SSR) for the detection of deterministic signals in heavy-tailed non-Gaussian noise. The detector consists of a matched filter preceded by an SSR system that acts as a preprocessor. The SSR system is composed of an array of 2-level quantizers with independent and identically distributed (i.i.d.) noise added to the input of each quantizer. The standard deviation sigma of the quantizer noise is chosen to maximize the detection probability for a given false alarm probability. In the case of a weak signal, the optimum sigma also minimizes the mean-square difference between the output of the quantizer array and the output of the nonlinear transformation of the locally optimum detector. For weak signals, the optimum sigma depends only on the probability density functions (pdfs) of the input noise and the quantizer noise; for non-weak signals, it also depends on the signal amplitude and the false alarm probability. Improvement in detector performance stems primarily from quantization and to a lesser extent from the optimization of quantizer noise. For most input noise pdfs, the performance of the SSR detector is very close to that of the optimum detector. (C) 2012 Elsevier B.V. All rights reserved.
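The sketch below illustrates, under assumed parameters, the structure of such a detector: a common input passes through an array of 2-level quantizers with independent Gaussian noise of standard deviation sigma added to each, and the summed output feeds a matched filter; the quantizer count, sigma, and the Laplacian input noise are illustrative choices, not the optimized values from the paper.

import numpy as np

rng = np.random.default_rng(1)

def ssr_frontend(x, n_quantizers=63, sigma=1.0):
    # Suprathreshold stochastic resonance preprocessor: an array of binary (2-level)
    # quantizers, each with its own i.i.d. Gaussian noise added to the common input,
    # outputs summed and normalised.
    noise = sigma * rng.standard_normal((n_quantizers, len(x)))
    y = np.sign(x[None, :] + noise)
    return y.sum(axis=0) / n_quantizers

def matched_filter_statistic(y, s):
    # correlate the preprocessor output with the known signal template
    return float(np.dot(y, s))

# illustrative use: weak known signal in heavy-tailed (Laplacian) noise
n = 256
s = 0.2 * np.sin(2 * np.pi * 5 * np.arange(n) / n)
noise = rng.laplace(scale=1.0, size=n)
stat = matched_filter_statistic(ssr_frontend(s + noise, sigma=0.8), s)
print(stat)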

Relevance:

40.00%

Publisher:

Abstract:

Today's SoCs are complex designs with multiple embedded processors, memory subsystems, and application-specific peripherals. The memory architecture of embedded SoCs strongly influences the power and performance of the entire system. Further, the memory subsystem constitutes a major part (typically up to 70%) of the silicon area of a current-day SoC. In this article, we address on-chip memory architecture exploration for DSP processors whose memory is organized as multiple banks, where banks can be single- or dual-ported with non-uniform bank sizes. We propose two different methods for physical memory architecture exploration and identify the strengths and applicability of these methods in a systematic way. Both methods address memory architecture exploration for a given target application by considering the application's data access characteristics, and both generate a set of Pareto-optimal design points that are interesting from a power, performance, and VLSI area perspective. To the best of our knowledge, this is the first comprehensive work on memory space exploration at the physical memory level that integrates data layout and memory exploration to address the system objectives from both the hardware design and the application software development perspectives. Further, we propose an automatic framework that explores the design space, identifying hundreds of Pareto-optimal design points within a few hours on a standard desktop configuration.
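To make the notion of Pareto-optimal design points concrete, the sketch below filters a set of hypothetical (power, cycles, area) tuples down to the non-dominated ones; the candidate values are invented for illustration, whereas the real exploration framework described above generates them from the application's data access characteristics.

def pareto_front(points):
    # Return the non-dominated design points.
    # Each point is (power, cycles, area); lower is better for all three objectives.
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(3)) and
                        any(q[i] < p[i] for i in range(3))
                        for q in points if q is not p)
        if not dominated:
            front.append(p)
    return front

# hypothetical exploration results: (power mW, cycles, area mm^2) per memory architecture
candidates = [(120, 1.0e6, 2.1), (150, 0.8e6, 2.0), (110, 1.3e6, 1.6), (160, 1.2e6, 2.4)]
print(pareto_front(candidates))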

Relevance:

40.00%

Publisher:

Abstract:

Several research groups have attempted to optimize photopolymerization parameters to increase the throughput of scanning-based microstereolithography (MSL) systems through modified beam scanning techniques. Efforts to reduce the curing line width have relied on high numerical aperture (NA) optical setups. However, the intensity contour symmetry and the depth of field of focus have led to grossly non-vertical and non-uniform curing profiles. This work revisits the photopolymerization process in a scanning-based MSL system from the standpoint of material functionality and optical design. The focus has been to exploit the rich potential of the photoreactor scanning system in achieving the desired fabrication modalities (minimum curing width, uniform depth profile, and vertical curing profile) even with a reduced-NA optical setup and a single movable stage. The present study exploits the effect of an optimized, lower photoinitiator (PI) concentration in reducing the minimum curing width to ~10-20 µm even with a larger spot size (~21.36 µm) through a judiciously chosen "monomer-PI" system. Optimizing the scan rate to increase E_max (the maximum laser exposure energy at the surface) provides enough time for the monomer or resin to cure across the entire resist thickness (surface to substrate, ~10-100 µm), leading to uniform depth profiles along the entire scan lengths. (C) 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4750975
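As background for the trade-off described above, the sketch below evaluates the standard Jacobs working-curve relation C_d = D_p ln(E_max/E_c) together with the usual expression for the peak exposure of a scanned Gaussian spot, E_max = sqrt(2/pi) P / (w0 v); the resin constants, laser power, and scan speed used here are assumed values, and the paper's own optimization is not reproduced.

import math

def cure_depth(E_max, E_c=10.0, D_p=25.0):
    # Jacobs working-curve relation: C_d = D_p * ln(E_max / E_c)
    # E_max : peak exposure at the resin surface (mJ/cm^2)
    # E_c   : critical exposure of the monomer-PI system (mJ/cm^2), assumed value
    # D_p   : resin penetration depth (um), assumed value
    return D_p * math.log(E_max / E_c)

def peak_exposure(P_laser_W, w0_cm, scan_speed_cm_s):
    # Peak exposure of a scanned Gaussian spot: E_max = sqrt(2/pi) * P / (w0 * v), in J/cm^2
    return math.sqrt(2.0 / math.pi) * P_laser_W / (w0_cm * scan_speed_cm_s)

# Illustrative numbers only: a 5 mW laser, ~21.4 um spot (w0 ~ 10.7 um), 20 cm/s scan rate;
# slowing the scan raises E_max and hence the cure depth.
E = peak_exposure(5e-3, 10.7e-4, 20.0) * 1e3   # J/cm^2 -> mJ/cm^2
print(f"E_max ~ {E:.0f} mJ/cm^2, cure depth ~ {cure_depth(E):.0f} um")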

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a decentralized, peer-to-peer architecture-based parallel version of the vector evaluated particle swarm optimization (VEPSO) algorithm for multi-objective design optimization of laminated composite plates using the Message Passing Interface (MPI). The design optimization of laminated composite plates, being a combinatorially explosive constrained non-linear optimization problem (CNOP) with many design variables and a vast solution space, warrants the use of non-parametric, heuristic optimization algorithms such as PSO. Optimization requires minimizing both the weight and the cost of these composite plates simultaneously, which renders the problem multi-objective. Hence VEPSO, a multi-objective variant of the PSO algorithm, is used. Despite the use of such a heuristic, the application problem is computationally intensive and suffers from long execution times under sequential computation. Hence, a parallel version of the PSO algorithm for the problem has been developed to run on several nodes of an IBM P720 cluster. The proposed parallel algorithm, using MPI's collective communication directives, establishes a peer-to-peer relationship between the constituent parallel processes, deviating from the more common master-slave approach, and achieves a reduction in computation time by a factor of up to 10. Finally, we show the effectiveness of the proposed parallel algorithm by comparing it with a serial implementation of VEPSO and a parallel implementation of the vector evaluated genetic algorithm (VEGA) for the same design problem. (c) 2012 Elsevier Ltd. All rights reserved.
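The sketch below captures the core VEPSO mechanism on a toy two-objective problem: each swarm minimizes one objective and uses the best particle of the other swarm as its social guide. The objective functions, bounds, and PSO coefficients are placeholders standing in for the laminate weight and cost models; in the parallel version described above, each swarm would run as a separate MPI process exchanging its best position through collective communication.

import numpy as np

rng = np.random.default_rng(2)

# Illustrative two-objective problem standing in for laminate weight and cost;
# the real design variables (ply angles, thicknesses) and constraints are omitted.
def f_weight(x): return np.sum(x ** 2)
def f_cost(x):   return np.sum((x - 2.0) ** 2)

dim, n_particles, iters = 4, 20, 200
w, c1, c2 = 0.7, 1.5, 1.5

def init_swarm():
    x = rng.uniform(-5, 5, (n_particles, dim))
    return {"x": x, "v": np.zeros_like(x), "pbest": x.copy(), "pbest_f": None}

swarms = [init_swarm(), init_swarm()]
objectives = [f_weight, f_cost]

for s, f in zip(swarms, objectives):
    s["pbest_f"] = np.array([f(p) for p in s["x"]])

for _ in range(iters):
    # VEPSO coupling: swarm k uses the best particle of the *other* swarm as its social guide
    gbest = [s["pbest"][np.argmin(s["pbest_f"])] for s in swarms]
    for k, (s, f) in enumerate(zip(swarms, objectives)):
        other = gbest[1 - k]
        r1, r2 = rng.random(s["x"].shape), rng.random(s["x"].shape)
        s["v"] = w * s["v"] + c1 * r1 * (s["pbest"] - s["x"]) + c2 * r2 * (other - s["x"])
        s["x"] = s["x"] + s["v"]
        fx = np.array([f(p) for p in s["x"]])
        improved = fx < s["pbest_f"]
        s["pbest"][improved] = s["x"][improved]
        s["pbest_f"][improved] = fx[improved]

print([s["pbest"][np.argmin(s["pbest_f"])] for s in swarms])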

Relevance:

40.00%

Publisher:

Abstract:

Clock synchronisation is an important requirement for various applications in wireless sensor networks (WSNs). Most existing clock synchronisation protocols for WSNs use some hierarchical structure, which introduces extra overhead due to the dynamic nature of WSNs. Moreover, it is difficult to integrate these clock synchronisation protocols with sleep scheduling schemes, a major technique for conserving energy. In this paper, we propose a fully distributed, peer-to-peer clock synchronisation protocol, named the Distributed Clock Synchronisation Protocol (DCSP), using a novel pullback technique for complete sensor networks. The pullback technique ensures that the synchronisation phases of any pair of clocks always overlap. We derive an exact expression for a bound on the maximum synchronisation error in the DCSP protocol, and a simulation study verifies that the error is indeed less than the computed upper bound. An experimental study using a few TelosB motes also verifies that the pullback occurs as predicted.

Relevance:

40.00%

Publisher:

Abstract:

The effectiveness of the last-level shared cache is crucial to the performance of a multi-core system. In this paper, we observe and make use of the DelinquentPC - Next-Use characteristic to improve shared cache performance. We propose a new PC-centric cache organization, NUcache, for the shared last-level cache of multi-cores. NUcache logically partitions the associative ways of a cache set into MainWays and DeliWays. While all lines have access to the MainWays, only lines brought in by a subset of delinquent PCs, selected by a PC selection mechanism, are allowed to enter the DeliWays. The PC selection mechanism is an intelligent cost-benefit analysis-based algorithm that utilizes Next-Use information to select the set of PCs that maximizes the hits experienced in the DeliWays. Performance evaluation reveals that NUcache improves performance over a baseline design by 9.6%, 30%, and 33%, respectively, for dual-, quad-, and eight-core workloads comprising SPEC benchmarks. We also show that NUcache is more effective than other well-known cache-partitioning algorithms.
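A much-simplified sketch of the MainWays/DeliWays idea is shown below for a single cache set: ordinary fills use the MainWays, while only lines fetched by PCs marked as delinquent are placed in the DeliWays. The cost-benefit, Next-Use-based PC-selection machinery of NUcache is not modeled, and the way counts, fill policy, and LRU replacement are assumptions made for illustration.

from collections import OrderedDict

class NUcacheSet:
    # Simplified single cache set with a MainWays/DeliWays partition, loosely following
    # the NUcache idea: only lines fetched by selected delinquent PCs occupy DeliWays.
    def __init__(self, main_ways=4, deli_ways=4, delinquent_pcs=()):
        self.main = OrderedDict()            # tag -> pc, kept in LRU order
        self.deli = OrderedDict()
        self.main_ways = main_ways
        self.deli_ways = deli_ways
        self.delinquent_pcs = set(delinquent_pcs)

    def access(self, tag, pc):
        for part in (self.main, self.deli):
            if tag in part:
                part.move_to_end(tag)        # hit: refresh LRU position
                return True
        self.fill(tag, pc)                   # miss: allocate a line
        return False

    def fill(self, tag, pc):
        if pc in self.delinquent_pcs:
            part, cap = self.deli, self.deli_ways
        else:
            part, cap = self.main, self.main_ways
        if len(part) >= cap:
            part.popitem(last=False)         # evict the LRU line from that partition
        part[tag] = pc

# toy usage: PC 0x40 is treated as delinquent, so its lines get the DeliWays
cache_set = NUcacheSet(delinquent_pcs={0x40})
hits = sum(cache_set.access(tag, pc) for tag, pc in
           [(1, 0x40), (2, 0x80), (1, 0x40), (3, 0x80), (2, 0x80)])
print(hits)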

Relevance:

40.00%

Publisher:

Abstract:

Analysis of high resolution satellite images has been an important research topic for urban analysis. One of the important features of urban areas in urban analysis is automatic road network extraction. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. Extracting roads directly from an original image is difficult and computationally expensive due to the presence of other road-like features with straight edges. The image is therefore preprocessed to reduce this noise (buildings, parking lots, vegetation regions, and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (exploiting the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. IKONOS data with 1 m resolution were used for the experiments.
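The sketch below illustrates only the noise-suppression step mentioned above, using a median filter followed by removal of small connected components so that elongated road-like regions survive; the filter size and component-size threshold are assumed values, and the Level Set and Mean Shift extraction stages themselves are not shown.

import numpy as np
from scipy import ndimage

def suppress_small_segments(binary_mask, median_size=5, min_pixels=200):
    # A median filter removes short, isolated noise segments, then small connected
    # components are discarded so that only elongated road-like regions survive.
    cleaned = ndimage.median_filter(binary_mask.astype(np.uint8), size=median_size)
    labels, n = ndimage.label(cleaned)
    sizes = ndimage.sum(cleaned, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))
    return keep

# toy example on a synthetic mask; real input would be a segmented IKONOS image
mask = np.zeros((100, 100), dtype=bool)
mask[48:52, :] = True            # a long horizontal "road"
mask[10, 10:14] = True           # a short noise segment
print(suppress_small_segments(mask).sum())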

Relevance:

40.00%

Publisher:

Abstract:

The objective of this paper is to empirically evaluate a framework for designing, GEMS of SAPPhIRE as req-sol, to check whether it supports design for variety and novelty. A set of observational studies is designed in which three teams of two designers each solve three different design problems in the following order: without any support, using the framework, and using a combination of the framework and a catalogue. Results from the studies reveal that both the variety and the novelty of the concept space increase with the use of the framework or of the framework and the catalogue. However, the number of concepts and the time taken by the designers decrease with the use of the framework and of the framework and the catalogue. Based on the results and interview sessions with the designers, an interactive, computer-supported framework for designing is proposed as future work.

Relevance:

40.00%

Publisher:

Abstract:

A common-mode (CM) filter based on the LCL filter topology is proposed in this paper, which provides a parallel path for ground currents and also restricts the magnitude of the EMI noise injected into the grid. The CM filter makes use of the components of a line-to-line LCL filter, which is modified to address the CM voltage with minimal additional components. This leads to a compact filtering solution. The CM voltage of an adjustable speed drive using a PWM rectifier is analyzed for this purpose. The filter design is based on the CM equivalent circuit of the drive system. The filter addresses the adverse effects of the PWM rectifier in an adjustable speed drive. Guidelines are provided on the selection of the filter components. Different variants of the filter topology are evaluated to establish the effectiveness of the proposed circuit. Experimental results based on EMI measurements on the grid side and CM current measurements on the motor side are presented. These results validate the effectiveness of the filter.
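As a generic illustration of the kind of component-selection guideline mentioned above (not the paper's procedure), the sketch below computes the corner frequency of an L-C branch, 1/(2*pi*sqrt(LC)); the inductance and capacitance values are assumptions, chosen only to show that placing the corner well below the switching frequency gives strong second-order attenuation of the switching-frequency CM voltage.

import math

def lc_corner_frequency(L, C):
    # Corner (resonant) frequency of an L-C branch: f = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values only (not taken from the paper): an assumed CM inductance and
# capacitance; above the corner the second-order filter rolls off at 40 dB/decade,
# attenuating switching-frequency CM voltage components.
L_cm = 5e-3      # H
C_cm = 2e-6      # F
print(f"CM corner frequency ~ {lc_corner_frequency(L_cm, C_cm):.0f} Hz")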

Relevance:

40.00%

Publisher:

Abstract:

Functions are important in designing. However, several issues hinder progress with the understanding and usage of functions: the lack of a clear and overarching definition of function, the lack of overall justification for the inevitability of the multiple views of function, and the scarcity of systematic attempts to relate these views to one another. To help resolve these, the objectives of this research are to propose a common definition of function that underlies the multiple views in the literature and to identify and validate the views of function that are logically justified to be present in designing. Function is defined as a change intended by designers between two scenarios: before and after the introduction of the design. A framework is proposed that comprises the above definition of function and an empirically validated model of designing, the extended Generate, Evaluate, Modify, and Select model of State-change together with the Action, Part, Phenomenon, Input, Organ, and Effect model of causality (known as GEMS of SAPPhIRE), comprising the views of activity, outcome, requirement-solution-information, and system-environment. The framework is used to identify the logically possible views of function in the context of designing and is validated by comparing these with the views of function in the literature. Describing the different views of function using the proposed framework should enable comparisons and help determine relationships among the various views, leading to better understanding and usage of functions in designing.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents the design and development of a novel optical vehicle classifier system, based on the interruption of laser beams, that is suitable for use in places with poor transportation infrastructure. The system can estimate the speed, axle count, wheelbase, tire diameter, and lane of motion of a vehicle. The design of the system eliminates the need for careful optical alignment, while the proposed estimation strategies render the estimates insensitive to angular mounting errors and to unevenness of the road. Strategies to estimate vehicular parameters are described, along with the optimization of the geometry of the system to minimize estimation errors due to quantization. The system is subsequently fabricated, and the proposed features of the system are experimentally demonstrated. The relative errors in the estimation of velocity and tire diameter are shown to be within 0.5% and to change by less than 17% for angular mounting errors of up to 30 degrees. In the field, the classifier demonstrates accuracy better than 97.5% and 94%, respectively, in the estimation of the wheelbase and lane of motion, and it can classify vehicles with an average accuracy of over 89.5%.
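The estimation strategies themselves are not detailed in the abstract, so the sketch below only illustrates the basic timing arithmetic behind a beam-interruption classifier: speed from the crossing time between two beams a known distance apart, and wheelbase from the interval between successive axles on the same beam; the 0.5 m beam separation and the timestamps are hypothetical values, not the paper's configuration.

def estimate_speed(beam_separation_m, t_beam1, t_beam2):
    # Speed from the time a wheel takes to cross two parallel beams a known distance apart
    return beam_separation_m / (t_beam2 - t_beam1)

def estimate_wheelbase(speed_mps, t_axle1, t_axle2):
    # Wheelbase from the interval between successive axles interrupting the same beam
    return speed_mps * (t_axle2 - t_axle1)

# hypothetical timestamps (s) from two beams 0.5 m apart
v = estimate_speed(0.5, t_beam1=1.000, t_beam2=1.036)     # ~13.9 m/s (~50 km/h)
wb = estimate_wheelbase(v, t_axle1=1.000, t_axle2=1.190)  # ~2.6 m
print(round(v, 1), round(wb, 2))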

Relevance:

40.00%

Publisher:

Abstract:

Purpose - In the present work, a numerical method based on the well-established enthalpy technique is developed to simulate the growth of binary alloy equiaxed dendrites in the presence of melt convection. The paper aims to discuss these issues.

Design/methodology/approach - The principle of volume averaging is used to formulate the governing equations (mass, momentum, energy, and species conservation), which are solved using a coupled explicit-implicit method. The velocity and pressure fields are obtained using a fully implicit finite volume approach, whereas the energy and species conservation equations are solved explicitly to obtain the enthalpy and solute concentration fields. As a model problem, simulation of the growth of a single crystal in a two-dimensional cavity filled with an undercooled melt is performed.

Findings - Comparison of the simulation results with available solutions obtained using the level set method and the phase field method shows good agreement. The effects of melt flow on the dendrite growth rate and the solute distribution along the solid-liquid interface are studied. A faster growth rate of the upstream dendrite arm is observed for binary alloys, which can be attributed to enhanced heat transfer due to convection as well as lower solute pile-up at the solid-liquid interface. Subsequently, the influence of the thermal and solutal Peclet numbers and of the undercooling on the dendrite tip velocity is investigated.

Originality/value - As the present enthalpy-based microscopic solidification model with melt convection is built on a framework similar to the enthalpy models popularly used at the macroscopic scale, it lays the foundation for developing effective multiscale solidification models.
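For readers unfamiliar with the enthalpy technique referred to above, the sketch below applies it to a one-dimensional solidification (Stefan) problem without convection: the enthalpy field is updated explicitly and temperature and liquid fraction are recovered from it. All material properties, boundary temperatures, and grid parameters are illustrative assumptions; the binary-alloy, volume-averaged, convecting model of the paper is far richer.

import numpy as np

# Minimal 1-D explicit enthalpy-method sketch (pure material, no melt convection):
# rho * dH/dt = k * d2T/dx2, with temperature and liquid fraction recovered from H.
nx, dx, dt, steps = 200, 1e-4, 2e-3, 5000
rho, c, k, L, Tm = 1000.0, 1000.0, 1.0, 1.0e5, 0.0

T = np.full(nx, 10.0)            # liquid, 10 K above the melting point
H = c * T + L                    # enthalpy per unit mass of fully liquid material
T[0] = -20.0                     # chilled wall that drives solidification
H[0] = c * T[0]                  # wall node is solid

for _ in range(steps):
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    H[1:-1] += dt * k * lap / rho                      # explicit enthalpy update
    # recover temperature from enthalpy (solid / mushy / liquid branches)
    T[1:-1] = np.where(H[1:-1] < c * Tm, H[1:-1] / c,
              np.where(H[1:-1] > c * Tm + L, (H[1:-1] - L) / c, Tm))

liquid_fraction = np.clip((H - c * Tm) / L, 0.0, 1.0)
front = np.argmax(liquid_fraction > 0.5) * dx          # approximate interface position
print(f"solid front after {steps * dt:.0f} s at x = {front * 1e3:.2f} mm")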