89 results for software creation methodology
Abstract:
Precision, sophistication and economic factors in many areas of scientific research now demand very large amounts of compute power, making advanced research in high performance computing inevitable. The basic principle of sharing and collaborative work among geographically separated computers has been known by several names, such as metacomputing, scalable computing, cluster computing and internet computing, and has today metamorphosed into what is known as grid computing. This paper gives an overview of grid computing and compares various grid architectures. We show the role that patterns can play in architecting complex systems, and provide a very pragmatic reference to a set of well-engineered patterns that the practicing developer can apply to crafting his or her own specific applications. We are not aware of a pattern-oriented approach having been applied to develop and deploy a grid. Many grid frameworks have been built or are in the process of becoming functional. All of these grids differ in some functionality or other, though the basic principle on which they are built is the same; despite this, there are no standard requirements listed for building a grid. Since the grid is a very complex system, a standard Software Architecture Specification (SAS) is essential, and we attempt to develop one for use by any grid user or developer. Specifically, we analyze the grid using an object-oriented approach and present the architecture using UML. This paper proposes the use of patterns at all levels (analysis, design and architectural) of grid development.
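As an illustration of the pattern-oriented style the abstract refers to (a minimal sketch, not taken from the paper; the class and attribute names are assumptions), a Strategy pattern lets a hypothetical grid resource broker swap scheduling policies without changing the broker itself:

    from abc import ABC, abstractmethod

    class SchedulingStrategy(ABC):
        """Design pattern: interchangeable job-placement policies."""
        @abstractmethod
        def select_node(self, nodes, job):
            ...

    class LeastLoadedStrategy(SchedulingStrategy):
        def select_node(self, nodes, job):
            # Pick the node with the lowest current load.
            return min(nodes, key=lambda n: n["load"])

    class DataLocalityStrategy(SchedulingStrategy):
        def select_node(self, nodes, job):
            # Prefer nodes that already hold the job's input data.
            local = [n for n in nodes if job["dataset"] in n["datasets"]]
            return min(local or nodes, key=lambda n: n["load"])

    class ResourceBroker:
        """The broker delegates placement decisions to a pluggable strategy."""
        def __init__(self, strategy: SchedulingStrategy):
            self.strategy = strategy

        def submit(self, nodes, job):
            return self.strategy.select_node(nodes, job)["name"]

    # Usage: swap policies without touching the broker code.
    nodes = [{"name": "n1", "load": 0.7, "datasets": {"d1"}},
             {"name": "n2", "load": 0.2, "datasets": set()}]
    broker = ResourceBroker(LeastLoadedStrategy())
    print(broker.submit(nodes, {"dataset": "d1"}))  # -> n2

Swapping LeastLoadedStrategy for DataLocalityStrategy changes the placement behaviour without modifying ResourceBroker, which is the kind of decoupling a pattern-oriented grid architecture aims for.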
Abstract:
This paper presents a method for minimizing the sum of the squares of voltage deviations by a least-squares minimization technique, thereby improving the voltage profile in a given system by adjusting control variables such as transformer tap positions, reactive power injection of VAR sources and generator excitations. The control variables and dependent variables are related by a matrix J whose elements are computed as the sensitivity matrix. Linear programming is used to calculate voltage increments that minimize transmission losses. The active and reactive power optimization sub-problems are solved separately, taking advantage of the loose coupling between the two problems. The proposed algorithm is applied to IEEE 14- and 30-bus systems and numerical results are presented. The method is computationally fast and promises to be suitable for implementation in real-time dispatch centres.
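A minimal numerical sketch (toy values, not the paper's system data) of the linearized least-squares correction step described above: given voltage deviations and a sensitivity matrix J relating control changes to voltage changes, solve for the control adjustments that best cancel the deviations.

    import numpy as np

    # Toy sensitivity matrix J (rows: monitored bus voltages, columns: controls
    # such as transformer tap, VAR injection, generator excitation) and toy
    # deviations of the bus voltages from their 1.0 p.u. targets.
    J = np.array([[0.8, 0.3, 0.1],
                  [0.2, 0.9, 0.2],
                  [0.1, 0.4, 0.7]])
    dV = np.array([-0.04, 0.03, -0.02])          # current deviations (p.u.)

    # Least-squares control adjustment du that best cancels the deviations,
    # i.e. minimizes || dV + J du ||^2.
    du, *_ = np.linalg.lstsq(J, -dV, rcond=None)

    print("control adjustments:", du)
    print("residual deviations:", dV + J @ du)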
Abstract:
We studied the feasibility of measuring Higgs pair creation at a photon linear collider. From the sensitivity to the anomalous self-coupling of the Higgs boson, the optimum γγ collision energy was found to be around 270 GeV for a Higgs mass of 120 GeV/c². We found that large backgrounds such as γγ → W⁺W⁻, ZZ, and bb̄bb̄ can be suppressed if correct assignment of tracks to parent partons is achieved, and that Higgs pair events can be observed with a statistical significance of about 5σ by operating the photon linear collider for 5 years.
Abstract:
A methodology using sensitivity analysis is proposed to measure the effective permeability, which includes the interaction of the resin and the reinforcement. Initially, mold-filling experiments were performed under isothermal conditions on the test specimen, and the positions of the flow front were tracked over time using a flow visualization method. Following this, the mold-filling experiments were simulated using commercial software to obtain the positions of the flow front over time at the process conditions used for the experiments. Several iterations were performed using different trial values of the permeability until the experimentally tracked and simulated positions of the flow front matched over time. Finally, the value of the permeability thus obtained was validated by comparing the positions obtained by performing the experiments at different process conditions with the positions obtained by simulating those experiments. In this study, woven roving and chopped strand mats of E-class glass fiber and unsaturated polyester resin were used for the experiments. From the results, it was found that the measured permeabilities were consistent across varying process conditions. POLYM. COMPOS., 2012. (c) 2012 Society of Plastics Engineers
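A small sketch of the iterative matching idea (not the paper's code; a textbook 1-D constant-pressure Darcy flow-front model and made-up measurements stand in for the commercial filling simulation): the trial permeability is adjusted until simulated and measured front positions agree in a least-squares sense.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy 1-D model of resin flow-front position under constant injection
    # pressure (Darcy flow): x(t) = sqrt(2*K*dP*t / (phi*mu)).
    dP, phi, mu = 1e5, 0.5, 0.2          # Pa, porosity, Pa.s (illustrative values)
    def front_position(t, K):
        return np.sqrt(2.0 * K * dP * t / (phi * mu))

    # "Measured" front positions vs time (would come from flow visualization).
    t_meas = np.array([5.0, 10.0, 20.0, 40.0])          # s
    x_meas = np.array([0.061, 0.088, 0.125, 0.176])      # m

    # Iterate on the trial permeability K until simulated and measured fronts match.
    def mismatch(K):
        return np.sum((front_position(t_meas, K) - x_meas) ** 2)

    res = minimize_scalar(mismatch, bounds=(1e-12, 1e-8), method="bounded")
    print(f"effective permeability K ~ {res.x:.2e} m^2")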
Abstract:
A three-dimensional digital model of a representative human kidney is needed for a surgical simulator that is capable of simulating a laparoscopic surgery involving the kidney. Buying a three-dimensional computer model of a representative human kidney, or reconstructing one from an image sequence using commercial software, both cost money (sometimes a significant amount). In this paper, the author shows that one can obtain a three-dimensional surface model of a human kidney by making use of images from the Visible Human Data Set and a few free software packages (ImageJ, ITK-SNAP and MeshLab in particular). Neither the images from the Visible Human Data Set nor the software packages used here cost anything. Hence, extracting the geometry of a representative human kidney as illustrated in the present work could be a free alternative to the use of expensive commercial software or to the purchase of a digital model.
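As a scripted analogue of the free workflow described (not the paper's ImageJ/ITK-SNAP/MeshLab procedure; the voxel spacing and the toy ellipsoidal mask are assumptions), a binary segmentation stack can be turned into a surface mesh that MeshLab can post-process:

    import numpy as np
    from skimage import measure

    # Assume a 3-D binary mask of the kidney, one slice per image in the stack
    # (here a toy ellipsoid stands in for a real segmentation).
    z, y, x = np.ogrid[-30:30, -40:40, -25:25]
    mask = (x / 20.0) ** 2 + (y / 35.0) ** 2 + (z / 25.0) ** 2 <= 1.0

    # Marching cubes extracts a triangulated iso-surface from the mask;
    # `spacing` carries the voxel size (mm) so the mesh has physical units.
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=(1.0, 0.33, 0.33))

    # Write a simple OBJ file that MeshLab can open for smoothing/decimation.
    with open("kidney_surface.obj", "w") as f:
        for v in verts:
            f.write(f"v {v[0]:.3f} {v[1]:.3f} {v[2]:.3f}\n")
        for tri in faces + 1:  # OBJ indices are 1-based
            f.write(f"f {tri[0]} {tri[1]} {tri[2]}\n")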
Abstract:
The experimental implementation of a quantum algorithm requires the decomposition of unitary operators. Here we treat unitary-operator decomposition as an optimization problem, and use a genetic algorithm (a global-optimization method inspired by nature's evolutionary process) for operator decomposition. We apply this method to NMR quantum information processing, and find a probabilistic way of performing universal quantum computation using global hard pulses. We also demonstrate the efficient creation of the singlet state (a special type of Bell state) directly from thermal equilibrium, using an optimum sequence of pulses. © 2012 American Physical Society.
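A minimal sketch of the idea (simplified to selection-plus-mutation, and using a single-qubit ZYZ decomposition rather than the paper's NMR pulse sequences): a genetic-style search over rotation angles that maximizes fidelity to a target unitary.

    import numpy as np

    rng = np.random.default_rng(0)

    def rz(a): return np.array([[np.exp(-1j*a/2), 0], [0, np.exp(1j*a/2)]])
    def ry(b): return np.array([[np.cos(b/2), -np.sin(b/2)],
                                [np.sin(b/2),  np.cos(b/2)]])

    target = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

    def build(params):                                  # Rz(a) Ry(b) Rz(c)
        a, b, c = params
        return rz(a) @ ry(b) @ rz(c)

    def fitness(params):                                # phase-free gate fidelity
        return abs(np.trace(target.conj().T @ build(params))) / 2.0

    pop = rng.uniform(-np.pi, np.pi, size=(50, 3))      # initial population
    for gen in range(200):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-10:]]         # keep the 10 fittest
        children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.1, (40, 3))
        pop = np.vstack([parents, children])            # elitism + mutation

    best = max(pop, key=fitness)
    print("fidelity:", fitness(best), "angles:", best)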
Abstract:
This work intends to demonstrate the importance of a geometrically nonlinear cross-sectional analysis of certain composite beam-based four-bar mechanisms in predicting system dynamic characteristics. All component bars of the mechanism are made of fiber-reinforced laminates and have thin rectangular cross-sections. They could, in general, be pre-twisted and/or possess initial curvature, either by design or by defect. They are linked to each other by means of revolute joints. We restrict ourselves to linear materials with small strains within each elastic body (beam). Each component of the mechanism is modeled as a beam based on geometrically non-linear 3-D elasticity theory. The component problems are thus split into 2-D analyses of reference beam cross-sections and non-linear 1-D analyses along the three beam reference curves. For the thin rectangular cross-sections considered here, the 2-D cross-sectional non-linearity is also overwhelming. This can be perceived from the fact that such sections constitute a limiting case between thin-walled open and closed sections, thus inviting the non-linear phenomena observed in both. The strong elastic couplings of anisotropic composite laminates complicate the model further. However, a powerful mathematical tool called the Variational Asymptotic Method (VAM) not only enables such a dimensional reduction, but also provides asymptotically correct analytical solutions to the non-linear cross-sectional analysis. Such closed-form solutions are used here in conjunction with numerical techniques for the rest of the problem to predict multi-body dynamic responses more quickly and accurately than would otherwise be possible. The analysis methodology can be viewed as a three-step procedure: First, the cross-sectional properties of each bar of the mechanism are determined analytically based on an asymptotic procedure, starting from Classical Laminated Shell Theory (CLST) and taking advantage of its thin strip geometry. Second, the dynamic response of the non-linear, flexible four-bar mechanism is simulated by treating each bar as a 1-D beam, discretized using finite elements, and employing energy-preserving and -decaying time integration schemes for unconditional stability. Finally, local 3-D deformations and stresses in the entire system are recovered, based on the 1-D responses predicted in the previous step. With the model, tools and procedure in place, we identify and investigate a few four-bar mechanism problems in which the cross-sectional non-linearities are significant for predicting critical system dynamic characteristics more accurately. This is carried out by varying stacking sequences (i.e. the arrangement of ply orientations within a laminate) and material properties, and speculating on the dominant diagonal and coupling terms in the closed-form non-linear beam stiffness matrix. A numerical example is presented which illustrates the importance of 2-D cross-sectional non-linearities, and the behavior of the system is also examined using commercial software (I-DEAS + NASTRAN + ADAMS). (C) 2012 Elsevier Ltd. All rights reserved.
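As a generic sketch of the dimensional reduction that VAM performs (standard beam-theory form, not the paper's specific closed-form expressions), the 3-D strain energy is replaced by a 1-D energy per unit length in the strain measures $\epsilon = (\gamma_{11},\ \kappa_1,\ \kappa_2,\ \kappa_3)^{T}$ (extension, twist and two bending curvatures of the reference line):

    U_{1D}(\epsilon) = \tfrac{1}{2}\,\epsilon^{T} S(\epsilon)\,\epsilon,
    \qquad F = \frac{\partial U_{1D}}{\partial \epsilon},

where S is the 4x4 cross-sectional stiffness matrix delivered by the 2-D analysis; for the thin anisotropic strips considered, S itself varies with the 1-D strain measures, which is the cross-sectional non-linearity the abstract emphasizes.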
Abstract:
The present work proposes a new sensing methodology which uses Fiber Bragg Gratings (FBGs) to measure, in vivo, the surface strain and strain rate on calf muscles while performing certain exercises. Two simple exercises, namely ankle dorsi-flexion and ankle plantar-flexion, have been considered, and the strain induced on the medial head of the gastrocnemius muscle while performing these exercises has been monitored. The real-time strain generated has been recorded and the results compared with those obtained using a commercial Color Doppler Ultrasound (CDU) system. It is found that the proposed sensing methodology is promising for surface strain measurements in biomechanical applications.
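A short sketch (not from the paper) of how a recorded Bragg-wavelength shift is converted to surface strain via the standard relation Δλ/λ₀ = (1 − pₑ)·ε, ignoring temperature compensation; the unstrained wavelength and photo-elastic coefficient below are typical assumed values.

    import numpy as np

    lambda_0 = 1550.0e-9   # unstrained Bragg wavelength (m), assumed
    p_e = 0.22             # effective photo-elastic coefficient of silica, typical value

    def wavelength_to_strain(lambda_shift_nm):
        """Strain (in microstrain) from a Bragg wavelength shift given in nm."""
        return (lambda_shift_nm * 1e-9) / (lambda_0 * (1.0 - p_e)) * 1e6

    # Example: a 1.2 nm shift would correspond to roughly 993 microstrain.
    shifts = np.array([0.0, 0.4, 0.9, 1.2])   # nm, made-up readings
    print(wavelength_to_strain(shifts))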
Abstract:
Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to atmospheric particle strikes poses a big threat to attaining the reliability required for various mission-critical applications. Various soft error mitigation methodologies exist to address this reliability challenge. A general solution to this problem is to arrive at a soft error mitigation methodology with an acceptable implementation overhead and error tolerance level. This implementation overhead can then be reduced by taking advantage of various derating effects, such as logical derating, electrical derating and timing window derating, and/or by making use of application redundancy, e.g. redundancy in the firmware/software executing on the robust hardware so designed. In this paper, we analyze the impact of various derating factors and show how they can be profitably employed to reduce the hardware overhead needed to implement a given level of soft error robustness. This analysis is performed on a set of benchmark circuits using the delayed capture methodology. Experimental results show up to a 23% reduction in hardware overhead when individual and combined derating factors are considered.
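An illustrative calculation (assumed numbers, not the paper's measurements) of how the derating factors combine multiplicatively to shrink the effective soft error rate, and hence how much mitigation coverage is actually needed for a given target:

    # Multiplicative combination of derating factors (all values assumed).
    raw_ser_fit = 1000.0          # raw soft error rate, FIT
    logical_derating = 0.4        # fraction of strikes that propagate logically
    electrical_derating = 0.6     # fraction not attenuated electrically
    timing_derating = 0.3         # fraction landing in the latching window

    effective_ser = (raw_ser_fit * logical_derating *
                     electrical_derating * timing_derating)
    print(f"effective SER: {effective_ser:.0f} FIT")   # 72 FIT vs 1000 FIT raw

    target_fit = 50.0
    # Fraction of the effective SER that protection (e.g. selective hardening)
    # must remove to meet the target:
    coverage_needed = max(0.0, 1.0 - target_fit / effective_ser)
    print(f"required mitigation coverage: {coverage_needed:.0%}")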
Abstract:
Thermoacoustic engines are energy conversion devices that convert thermal energy from a high-temperature heat source into useful work in the form of acoustic power while diverting waste heat into a cold sink; they can be used as drives for cryocoolers and refrigerators. Though the devices are simple to fabricate, it is very challenging to design an optimized thermoacoustic prime mover with good performance. The study presented here aims to optimize a thermoacoustic prime mover using response surface methodology. The influence of stack position and length, resonator length, plate thickness, and plate spacing on pressure amplitude and frequency in a thermoacoustic prime mover is investigated in this study. For the desired frequency of 207 Hz, experiments have been conducted at the optimized values of the above parameters suggested by the response surface methodology, and simulations have also been performed using DeltaEC. The experimental and simulation results showed similar output performance.
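A minimal sketch (made-up design points, not the paper's data) of the response-surface step: fit a full quadratic model of pressure amplitude in two of the factors and search it for the predicted optimum.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    # Design points: [stack position (m), plate spacing (mm)] -- illustrative.
    X = np.array([[0.10, 0.3], [0.10, 0.5], [0.15, 0.3],
                  [0.15, 0.5], [0.20, 0.3], [0.20, 0.5],
                  [0.125, 0.4], [0.175, 0.4], [0.15, 0.4]])
    y = np.array([1.8, 2.1, 2.6, 2.9, 2.2, 2.4, 2.5, 2.7, 3.0])  # kPa, made up

    # Full quadratic response surface: intercept, linear, interaction, squared terms.
    quad = PolynomialFeatures(degree=2, include_bias=True)
    model = LinearRegression(fit_intercept=False).fit(quad.fit_transform(X), y)

    # Locate the factor combination maximizing predicted amplitude on a grid.
    grid = np.array([[s, g] for s in np.linspace(0.10, 0.20, 41)
                            for g in np.linspace(0.3, 0.5, 21)])
    pred = model.predict(quad.transform(grid))
    print("predicted optimum:", grid[np.argmax(pred)], "->", pred.max(), "kPa")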
Abstract:
Given the increasing cost of designing and building new highway pavements, reliability analysis has become vital to ensure that a given pavement performs as expected in the field. Recognizing the importance of failure analysis to safety, reliability, performance, and economy, back analysis has been employed in various engineering applications to evaluate the inherent uncertainties of the design and analysis. The probabilistic back analysis method formulated on Bayes' theorem and solved using the Markov chain Monte Carlo simulation method with a Metropolis-Hastings algorithm has proved to be highly efficient to address this issue. It is also quite flexible and is applicable to any type of prior information. In this paper, this method has been used to back-analyze the parameters that influence the pavement life and to consider the uncertainty of the mechanistic-empirical pavement design model. The load-induced pavement structural responses (e.g., stresses, strains, and deflections) used to predict the pavement life are estimated using the response surface methodology model developed based on the results of linear elastic analysis. The failure criteria adopted for the analysis were based on the factor of safety (FOS), and the study was carried out for different sample sizes and jumping distributions to estimate the most robust posterior statistics. From the posterior statistics of the case considered, it was observed that after approximately 150 million standard axle load repetitions, the mean values of the pavement properties decrease as expected, with a significant decrease in the values of the elastic moduli of the expected layers. An analysis of the posterior statistics indicated that the parameters that contribute significantly to the pavement failure were the moduli of the base and surface layer, which is consistent with the findings from other studies. After the back analysis, the base modulus parameters show a significant decrease of 15.8% and the surface layer modulus a decrease of 3.12% in the mean value. The usefulness of the back analysis methodology is further highlighted by estimating the design parameters for specified values of the factor of safety. The analysis revealed that for the pavement section considered, a reliability of 89% and 94% can be achieved by adopting FOS values of 1.5 and 2, respectively. The methodology proposed can therefore be effectively used to identify the parameters that are critical to pavement failure in the design of pavements for specified levels of reliability. DOI: 10.1061/(ASCE)TE.1943-5436.0000455. (C) 2013 American Society of Civil Engineers.
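A toy Metropolis-Hastings back analysis in the spirit described above (illustrative only; the surrogate deflection model, prior, noise level and jumping distribution are all assumptions, not the paper's response-surface model):

    import numpy as np

    rng = np.random.default_rng(1)

    def deflection(E):                 # surrogate response: deflection ~ 1/E (toy)
        return 2000.0 / E

    E_true = 350.0                     # MPa, value that generated the "data"
    obs = deflection(E_true) + rng.normal(0, 0.2, size=20)   # noisy observations

    def log_posterior(E):
        if E <= 0:
            return -np.inf
        log_prior = -0.5 * ((np.log(E) - np.log(400.0)) / 0.3) ** 2   # prior belief
        log_like = -0.5 * np.sum(((obs - deflection(E)) / 0.2) ** 2)  # Gaussian noise
        return log_prior + log_like

    samples, E = [], 400.0             # start at the prior mean
    for _ in range(20000):
        E_prop = E + rng.normal(0, 10.0)               # jumping distribution
        if np.log(rng.uniform()) < log_posterior(E_prop) - log_posterior(E):
            E = E_prop                                 # accept the proposal
        samples.append(E)

    post = np.array(samples[5000:])                    # discard burn-in
    print(f"posterior mean E = {post.mean():.0f} MPa, sd = {post.std():.0f} MPa")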
Abstract:
In large, flexible software systems, bloat occurs in many forms, causing excess resource utilization and resource bottlenecks. This results in lost throughput and wasted joules. However, mitigating bloat is not easy; efforts are best applied where the savings would be substantial. To aid this, we develop an analytical model establishing the relation between resource bottlenecks, bloat, performance and power. Analyses with the model place into perspective the results from the first experimental study of the power-performance implications of bloat. In the experiments we find that while bloat reduction can provide as much as 40% energy savings, the degree of impact depends on hardware and software characteristics. We confirm predictions from our model with selected results from our experimental study. Our findings show that a software-only view is inadequate when assessing the effects of bloat. The impact of bloat on physical resource usage and power should be understood from a full-systems perspective in order to properly deploy bloat reduction solutions and reap their power-performance benefits.
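Since the abstract does not reproduce the analytical model, the following is only a generic illustrative sketch of the relationship it describes (all numbers assumed): bloat inflates per-request demand on each resource, throughput is capped by the bottleneck resource, and energy per request follows from power divided by throughput.

    # Generic illustrative model, not the paper's actual analytical model.
    capacity = {"cpu": 100.0, "memory_bw": 80.0, "disk": 40.0}   # units/s, assumed
    base_demand = {"cpu": 0.8, "memory_bw": 1.0, "disk": 0.2}    # units/request
    bloat = {"cpu": 1.3, "memory_bw": 1.8, "disk": 1.0}          # inflation factors

    def throughput(bloat_factors):
        # Each resource limits throughput to capacity / bloated demand;
        # the system runs at the minimum, i.e. the bottleneck.
        return min(capacity[r] / (base_demand[r] * bloat_factors[r])
                   for r in capacity)

    idle_power, peak_power = 120.0, 250.0                        # watts, assumed
    def energy_per_request(bloat_factors):
        tput = throughput(bloat_factors)
        util = tput / throughput({r: 1.0 for r in capacity})     # utilisation proxy
        power = idle_power + (peak_power - idle_power) * min(util, 1.0)
        return power / tput                                      # joules per request

    print("with bloat   :", energy_per_request(bloat))
    print("bloat removed:", energy_per_request({r: 1.0 for r in capacity}))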
Abstract:
Video decoders used in emerging applications need to be flexible, to handle a large variety of video formats, and to deliver scalable performance to handle wide variations in workloads. In this paper we propose a unified software and hardware architecture for video decoding that achieves scalable performance with flexibility. The lightweight processor tiles and the reconfigurable hardware tiles in our architecture enable software and hardware implementations to co-exist, while a programmable interconnect enables dynamic interconnection of the tiles. Our process-network-oriented compilation flow achieves realization-agnostic application partitioning and enables seamless migration across uniprocessor, multi-processor, semi-hardware and full-hardware implementations of a video decoder. An application quality-of-service-aware scheduler monitors and controls the operation of the entire system. We prove the concept through a prototype of the architecture on an off-the-shelf FPGA. The FPGA prototype shows performance scaling from QCIF to 1080p resolutions in four discrete steps. We also demonstrate that the reconfiguration time is short enough to allow migration from one configuration to another without any frame loss.
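A hypothetical sketch of what a QoS-aware scheduling decision could look like (names, thresholds and the hysteresis rule are assumptions, not the paper's implementation): a decoder stage migrates to a reconfigurable hardware tile when the measured frame time misses its deadline, and returns to a processor tile when there is ample slack.

    FRAME_DEADLINE_MS = 33.3            # ~30 fps target, assumed

    class StageScheduler:
        def __init__(self):
            self.on_hardware = False    # start on a lightweight processor tile

        def update(self, measured_frame_ms):
            if not self.on_hardware and measured_frame_ms > FRAME_DEADLINE_MS:
                self.on_hardware = True      # reconfigure stage onto a hardware tile
            elif self.on_hardware and measured_frame_ms < 0.5 * FRAME_DEADLINE_MS:
                self.on_hardware = False     # enough slack: free the hardware tile
            return "hw-tile" if self.on_hardware else "cpu-tile"

    sched = StageScheduler()
    for frame_ms in [12.0, 20.0, 41.0, 28.0, 35.0, 14.0, 9.0]:   # made-up measurements
        print(f"{frame_ms:5.1f} ms -> run on {sched.update(frame_ms)}")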