168 results for Mathematical techniques
Abstract:
This paper proposes a novel experimental test procedure to estimate the reliability of structural dynamical systems under excitations specified via random process models. The samples of random excitation used in the test are modified by the addition of an artificial control force. An unbiased estimator for the reliability is derived from the measured ensemble of responses under these modified inputs, based on the tenets of the Girsanov transformation. The control force is selected so as to reduce the sampling variance of the estimator. The study observes that an acceptable choice for the control force can be made solely on the basis of experimental techniques, and that the estimator for the reliability can be deduced without recourse to a mathematical model of the structure under study. This permits the proposed procedure to be applied to the experimental study of time-variant reliability of complex structural systems that are difficult to model mathematically. The illustrative example consists of a multi-axis shake-table study on a bending-torsion coupled, geometrically non-linear, five-storey frame under uni-/bi-axial, non-stationary, random base excitation. Copyright (c) 2014 John Wiley & Sons, Ltd.
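The variance-reduction idea behind such a procedure, adding a deterministic control to the random excitation and then correcting the failure indicator with the Girsanov likelihood ratio so the estimator stays unbiased, can be sketched in a toy Monte Carlo setting. Everything below (the response function, the mean-shift control, the parameter values) is an illustrative stand-in, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_response(excitation):
    # Toy stand-in for the measured structural response: the peak of the
    # standardized partial sums of a white-noise excitation record.
    steps = np.arange(1, excitation.size + 1)
    return np.max(np.cumsum(excitation) / np.sqrt(steps))

def failure_probability(threshold, shift, n_samples=4000, n_steps=64):
    # Each nominal excitation sample u is modified by a deterministic
    # "control force" (here a mean shift); the failure indicator is then
    # weighted by the Girsanov likelihood ratio dP/dQ, which keeps the
    # estimator unbiased for the nominal excitation model.
    total = 0.0
    for _ in range(n_samples):
        u = rng.standard_normal(n_steps)               # nominal excitation
        z = u + shift                                  # modified excitation
        lr = np.exp(-shift @ u - 0.5 * shift @ shift)  # likelihood ratio
        total += lr * (peak_response(z) > threshold)
    return total / n_samples
```

A shift that pushes samples toward the failure region lowers the estimator's variance; with `shift = 0` the scheme reduces to crude Monte Carlo. In the paper the control is chosen experimentally, whereas here it is simply a mean shift of a Gaussian record.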
Abstract:
Today's programming languages are supported by powerful third-party APIs. For a given application domain, it is common to have many competing APIs that provide similar functionality. Programmer productivity therefore depends heavily on the programmer's ability to discover suitable APIs, both during an initial coding phase and during software maintenance. The aim of this work is to support the discovery and migration of math APIs. Math APIs are at the heart of many application domains, ranging from machine learning to scientific computation. Our approach, called MATHFINDER, combines executable specifications of mathematical computations with unit tests (operational specifications) of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code composed of API methods to compute the expression by mining unit tests of the API methods. We present a sequential version of our unit-test mining algorithm and also design a more scalable data-parallel version. We perform an extensive evaluation of MATHFINDER (1) for API discovery, where math algorithms are to be implemented from scratch, and (2) for API migration, where client programs utilizing a math API are to be migrated to another API. We evaluated the precision and recall of MATHFINDER on a diverse collection of math expressions, culled from algorithms used in a wide range of application areas such as control systems and structural dynamics. In a user study evaluating the productivity gains obtained by using MATHFINDER for API discovery, the programmers who used MATHFINDER finished their programming tasks twice as fast as their counterparts who used the usual techniques, such as web and code search, IDE code completion, and manual inspection of library documentation. For the problem of API migration, as a case study, we used MATHFINDER to migrate Weka, a popular machine learning library.
Overall, our evaluation shows that MATHFINDER is easy to use, provides highly precise results across several math APIs and application domains even with a small number of unit tests per method, and scales to large collections of unit tests.
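The core matching step, checking whether an executable specification of a math expression agrees with an API method's recorded unit tests on the tests' own inputs, can be sketched as follows. The method names and unit tests are hypothetical illustrations, not MATHFINDER's actual corpus:

```python
# Executable specification of the math expression to implement:
# here, the dot product of two vectors.
spec = lambda xs, ys: sum(x * y for x, y in zip(xs, ys))

# Hypothetical unit tests mined from two candidate API methods
# (method name -> list of (inputs, expected_output) pairs).
unit_tests = {
    "LinAlg.dot": [(([1, 2], [3, 4]), 11), (([0, 1], [5, 5]), 5)],
    "Stats.mean": [(([1, 2, 3],), 2.0), (([4, 8],), 6.0)],
}

def discover(spec, unit_tests):
    """Return API methods whose unit tests are consistent with the spec:
    the spec, run on each test's inputs, reproduces the recorded output."""
    matches = []
    for method, tests in unit_tests.items():
        try:
            if all(spec(*inputs) == expected for inputs, expected in tests):
                matches.append(method)
        except TypeError:  # arity mismatch: method cannot implement the spec
            pass
    return matches

print(discover(spec, unit_tests))  # -> ['LinAlg.dot']
```

In the tool itself the matching is part of a larger synthesis step that composes several methods into pseudo-code; this sketch shows only the single-method consistency check.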
Abstract:
Carbon Fiber Reinforced Plastic composites were fabricated through vacuum resin infusion technology by adopting two different processing conditions, viz., vacuum only in the first and vacuum plus external pressure in the second, in order to generate two levels of void-bearing samples. They were relatively graded as higher and lower void-bearing samples, respectively. Microscopy and C-scan techniques were utilized to characterize the voids arising from the two different processing conditions. Further, to determine the influence of voids on impact behavior, the fabricated +45 degrees/90 degrees/-45 degrees composite samples were subjected to low-velocity impacts. The tests show impact properties such as peak load and energy to peak load registering higher values for the lower void-bearing case, whereas the total energy, energy for propagation and ductility indices were higher for the higher void-bearing samples. Fractographic analysis showed that the higher void-bearing samples display fewer layer separations in the laminate. These and other results are described and discussed in this report.
Abstract:
The problem of modelling the transient response of an elastic-perfectly-plastic cantilever beam carrying an impulsively loaded tip mass is often referred to as the Parkes cantilever problem (Parkes, The permanent deformation of a cantilever struck transversely at its tip, Proc. R. Soc. A, 288, p. 462). This paradigm for the classical modelling of projectile impact on structures is re-visited and updated using the mesh-free method smoothed particle hydrodynamics (SPH). The purpose of this study is to investigate further the behaviour of cantilever beams subjected to projectile impact at the tip, by considering especially physically real effects such as plastic shearing close to the projectile, shear deformation, and the variation of the shear strain along the length and across the thickness of the beam. Finally, going beyond macroscopic structural plasticity, a strategy to incorporate physical discontinuity (due to crack formation) in the SPH discretization is discussed and explored in the context of tip severance of the cantilever beam. The proposed scheme thus demonstrates the potential for a more refined treatment of penetration mechanics, paramount in the exploration of structural response under ballistic loading. The objective is to contribute to formulating a computational modelling framework within which transient dynamic plasticity and even penetration/failure phenomena for a range of materials, structures and impact conditions can be explored ab initio, this being essential for arriving at suitable tools for the design of armour systems. (C) 2014 Elsevier Ltd. All rights reserved.
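At the heart of SPH is kernel-weighted interpolation over particles. A minimal sketch of the standard one-dimensional cubic-spline smoothing kernel and the summation interpolant follows; it is illustrative only, and the paper's structural simulations are of course far more elaborate:

```python
def cubic_spline_kernel(r, h):
    # Standard 1D cubic-spline SPH smoothing kernel with support radius 2h;
    # the factor 2/(3h) normalizes its integral over the real line to one.
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_interpolate(x, positions, values, masses, densities, h):
    # Summation interpolant: f(x) ~ sum_j m_j (f_j / rho_j) W(x - x_j, h)
    return sum(m * f / rho * cubic_spline_kernel(x - xj, h)
               for xj, f, m, rho in zip(positions, values, masses, densities))
```

The same interpolant, applied to gradients of the kernel, yields the discretised momentum and continuity equations used in SPH solvers.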
Abstract:
Wavelength Division Multiplexing (WDM) techniques over fibre links help to exploit the high bandwidth capacity of single-mode fibres. A typical WDM link consisting of a laser source, multiplexer/demultiplexer, amplifier and detector is considered for obtaining the open-loop gain model of the link. The methodology used here is to obtain individual component models using mathematical modelling and various curve-fitting techniques. These individual models are then combined to obtain the WDM link model. The objective is to deduce a single-variable model for the WDM link in terms of the input current to the system, thus providing a black-box solution for the link. The Root Mean Square Error (RMSE) associated with each of the approximated models is given for comparison. This will help the designer select a suitable WDM link model during a complex link design.
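The component-modelling step, fitting a curve to measured data and grading the fit by RMSE, can be sketched with a least-squares polynomial fit. The data points below are made up for illustration (loosely shaped like a laser-diode power-versus-current curve) and do not come from the paper:

```python
import numpy as np

# Hypothetical measured points of a component transfer curve.
current_mA = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
power_mW = np.array([0.1, 0.9, 2.1, 3.2, 4.1, 5.2])

def fit_and_rmse(x, y, degree):
    # Least-squares polynomial fit of the transfer curve, scored by the
    # root-mean-square error of the residuals.
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    return coeffs, rmse

# Compare candidate model orders, as one would when grading models by RMSE.
for deg in (1, 2, 3):
    _, rmse = fit_and_rmse(current_mA, power_mW, deg)
    print(f"degree {deg}: RMSE = {rmse:.4f} mW")
```

Combining such per-component fits into a single input-current-to-output model gives the black-box link description the abstract refers to.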
Abstract:
Models of river flow time series are essential for efficient management of a river basin. They help policy makers develop efficient water-utilization strategies that maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data. The use of machine learning techniques such as support-vector regression and neural network models is gaining popularity. In this paper we compare the performance of these techniques by applying them to long-term time-series data of the inflows into the Krishnaraja Sagar reservoir (KRS) from three tributaries of the river Cauvery. In this study, flow data over a period of 30 years from three different observation points established in the upper Cauvery river sub-basin are analyzed to estimate their contribution to the KRS. Specifically, the ANN model uses a multi-layer feed-forward network trained with a back-propagation algorithm, and support-vector regression with an epsilon-insensitive loss function is used. Auto-regressive moving-average models are also applied to the same data. The performance of the different techniques is compared using metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE) and Nash-Sutcliffe Efficiency (NSE).
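The comparison metrics named above are simple to compute. A sketch, using made-up inflow numbers rather than the KRS data:

```python
import numpy as np

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def nrmse(obs, sim):
    # Normalized by the observed range (one common convention).
    return rmse(obs, sim) / (obs.max() - obs.min())

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 is a perfect model, 0 means the model
    # is no better than predicting the observed mean.
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy daily-inflow series (illustrative numbers only).
obs = np.array([120.0, 135.0, 150.0, 110.0, 95.0, 160.0])
sim = np.array([118.0, 140.0, 145.0, 115.0, 100.0, 150.0])
print(rmse(obs, sim), nrmse(obs, sim), nse(obs, sim))
```

RMSE is scale-dependent, NRMSE makes scores comparable across gauges with different flow magnitudes, and NSE compares the model against the trivial mean predictor.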
Abstract:
The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis plus two major extensions for enhanced precision. The base analysis is a dataflow analysis wherein we propagate formulas in the backward direction from a given dereference and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets and library method calls, which would require excessive analysis time to analyze fully. The base analysis is hence configured to skip such a difficult construct when it is encountered, dropping all information tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on the information being tracked, without requiring their full analysis. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second is based on manually constructed backward-direction summary functions of library methods. We have implemented our approach and applied it to a set of real-life benchmarks. The base analysis is on average able to declare about 84% of dereferences in each benchmark as safe, while the two extensions push this number up to 91%. (C) 2014 Elsevier B.V. All rights reserved.
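The flavour of the backward propagation can be shown on a toy straight-line program: starting from a dereference, propagate a necessary condition backwards until it either reaches program entry or becomes unsatisfiable. This is a drastically simplified sketch with made-up representation and names; the paper's analysis handles full Java with formulas, virtual calls and summaries:

```python
# Each statement is a simple copy assignment (lhs, rhs). Question: starting
# from a dereference of `deref_var`, what must hold at entry for that
# dereference to be potentially unsafe?
def entry_condition(statements, deref_var):
    cond = deref_var              # condition: "<cond> may be null here"
    for lhs, rhs in reversed(statements):
        if lhs == cond:
            if rhs == "new":      # allocation: lhs is definitely non-null
                return None       # condition unsatisfiable -> deref is safe
            cond = rhs            # substitute: lhs's nullness came from rhs
    return cond                   # variable whose nullness at entry matters

prog = [("a", "param"), ("b", "a"), ("p", "b")]
print(entry_condition(prog, "p"))   # -> 'param'

safe = [("a", "new"), ("p", "a")]
print(entry_condition(safe, "p"))   # -> None (dereference proven safe)
```

Skipping a "difficult" construct corresponds, in this toy, to conservatively forgetting everything the condition mentions that the construct might modify; the paper's extensions avoid that loss of precision.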
Abstract:
Understanding the growth behavior of microorganisms using modeling and optimization techniques is an active area of research in the fields of biochemical engineering and systems biology. In this paper, we propose a general modeling framework, based on the Monod model, to model the growth of microorganisms. Utilizing the general framework, we formulate an optimal control problem with the objective of maximizing a long-term cellular goal and solve it analytically under various constraints for the growth of microorganisms in a two-substrate batch environment. We investigate the relation between long-term and short-term cellular goals and show that the objective of maximizing cellular concentration at a fixed final time is equivalent to maximization of the instantaneous growth rate. We then establish the mathematical connection between the generalized framework and the optimal and cybernetic modeling frameworks and derive generalized governing dynamic equations for optimal and cybernetic models. We finally illustrate the influence of various constraints in the cybernetic modeling framework on the optimal growth behavior of microorganisms by solving several dynamic optimization problems using genetic algorithms. (C) 2014 Published by Elsevier Inc.
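For concreteness, the Monod model couples the specific growth rate to the substrate concentration. A minimal single-substrate batch simulation with forward-Euler integration is sketched below; the parameter values are illustrative, not from the paper, and the paper's generalized two-substrate framework reduces to this form for one substrate:

```python
# Monod growth in a batch culture (illustrative parameters):
# mu = mu_max * s / (K_s + s), dx/dt = mu * x, ds/dt = -mu * x / Y
mu_max, K_s, Y = 0.5, 2.0, 0.4   # 1/h, g/L, g biomass per g substrate

def simulate(x0=0.05, s0=10.0, dt=0.01, t_end=24.0):
    x, s = x0, s0
    for _ in range(int(t_end / dt)):
        mu = mu_max * s / (K_s + s)   # Monod specific growth rate
        dx = mu * x * dt              # biomass growth over the step
        ds = -dx / Y                  # substrate consumed via yield Y
        x, s = x + dx, max(s + ds, 0.0)
    return x, s

x_final, s_final = simulate()
```

Because consumption is tied to growth through the yield coefficient, the biomass gain always equals Y times the substrate consumed, and growth stops when the substrate is exhausted.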
Abstract:
This article considers a semi-infinite mathematical programming problem with equilibrium constraints (SIMPEC), defined as a semi-infinite mathematical programming problem with complementarity constraints. We establish necessary and sufficient optimality conditions for SIMPEC. We also formulate Wolfe- and Mond-Weir-type dual models for SIMPEC and establish weak, strong and strict converse duality theorems relating SIMPEC and the corresponding dual problems under invexity assumptions.
Abstract:
Streamflow forecasts at the daily time scale are necessary for effective management of water resources systems. Typical applications include flood control, water quality management, water supply to multiple stakeholders, hydropower and irrigation systems. Conventionally, physically based conceptual models and data-driven models are used for forecasting streamflows. Conceptual models require a detailed understanding of the physical processes governing the system being modeled. Major constraints in developing effective conceptual models are a sparse hydrometric gauge network and short historical records, which limit our understanding of the physical processes. Data-driven models, on the other hand, rely solely on previous hydrological and meteorological data without directly taking into account the underlying physical processes. Among data-driven models, Auto-Regressive Integrated Moving Average (ARIMA) models and Artificial Neural Networks (ANNs) are the most widely used techniques. The present study assesses the performance of ARIMA and ANN methods in arriving at one- to seven-day-ahead forecasts of daily streamflows at the Basantpur stream-gauge site, situated upstream of Hirakud Dam in the Mahanadi river basin, India. The ANNs considered include a Feed-Forward back-propagation Neural Network (FFNN) and a Radial Basis Neural Network (RBNN). Daily streamflow forecasts at the Basantpur site find use in the management of water from the Hirakud reservoir. (C) 2015 The Authors. Published by Elsevier B.V.
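A minimal autoregressive one-step-ahead forecaster, fit by ordinary least squares, illustrates the data-driven flavour of these models; it is a toy stand-in for the ARIMA/FFNN/RBNN forecasters actually evaluated in the study:

```python
import numpy as np

def fit_ar(series, p=3):
    # Stack lagged values as features: predict series[t] from the p
    # preceding values, plus an intercept, by least squares.
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    A = np.column_stack([X, np.ones(len(series) - p)])
    coeffs, *_ = np.linalg.lstsq(A, series[p:], rcond=None)
    return coeffs

def forecast_next(series, coeffs, p=3):
    # One-step-ahead forecast from the most recent p observations.
    return float(np.concatenate([series[-p:], [1.0]]) @ coeffs)
```

Multi-day-ahead forecasts follow by feeding each one-step forecast back in as the newest observation, which is also how errors compound as the lead time grows from one to seven days.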
Abstract:
Mathematics is beautiful and precise, and often necessary for understanding complex biological phenomena. And yet biologists cannot always hope to fully understand the mathematical foundations of the theory they are using or testing. How then should biologists behave when mathematicians themselves are in dispute? Using the ongoing controversy over Hamilton's rule as an example, I argue that biologists should be free to treat mathematical theory with a healthy dose of agnosticism. In doing so, biologists should equip themselves with a disclaimer that publicly admits that they cannot entirely attest to the veracity of the mathematics underlying the theory they are using or testing. The disclaimer will only help if it is accompanied by three responsibilities: stay bipartisan in a dispute among mathematicians, stay vigilant and help expose dissent among mathematicians, and make the biology larger than the mathematics. I must emphasize that my goal here is not to take sides in the ongoing dispute over the mathematical validity of Hamilton's rule; indeed, my goal is to argue that we should refrain from taking sides.
Abstract:
Early afterdepolarizations (EADs), which are abnormal oscillations of the membrane potential at the plateau phase of an action potential, are implicated in the development of cardiac arrhythmias like Torsade de Pointes. We carry out extensive numerical simulations of the TP06 and ORd mathematical models for human ventricular cells with EADs. We investigate the different regimes in both these models, namely, the parameter regimes where they exhibit (1) a normal action potential (AP) with no EADs, (2) an AP with EADs, and (3) an AP with EADs that does not return to the resting potential. We also study the dependence of EADs on the rate at which we pace a cell, with the specific goal of elucidating EADs that are induced by slow- or fast-rate pacing. In our simulations in two- and three-dimensional domains, in the presence of EADs, we find the following wave types: (A) waves driven by the fast sodium current and the L-type calcium current (Na-Ca-mediated waves); (B) waves driven only by the L-type calcium current (Ca-mediated waves); (C) phase waves, which are pseudo-travelling waves. Furthermore, we compare the wave patterns of the various wave types (Na-Ca-mediated, Ca-mediated, and phase waves) in both these models. We find that the two models produce qualitatively similar results in that their Na-Ca-mediated wave patterns are more chaotic than those of the Ca-mediated and phase waves. However, there are quantitative differences in the wave patterns of each wave type. The Na-Ca-mediated waves in the ORd model show short-lived spirals but the TP06 model does not. The TP06 model supports more Ca-mediated spirals than the ORd model, and the TP06 model exhibits more phase-wave patterns than does the ORd model.
Abstract:
We study the dynamical behaviors of two types of spiral- and scroll-wave turbulence states, respectively, in two-dimensional (2D) and three-dimensional (3D) mathematical models of human ventricular myocyte cells that are attached to randomly distributed interstitial fibroblasts; these turbulence states are promoted by (a) the steep slope of the action-potential-duration-restitution (APDR) plot or (b) early afterdepolarizations (EADs). Our single-cell study shows that (1) the myocyte-fibroblast (MF) coupling G(j) and (2) the number N-f of fibroblasts in an MF unit lower the steepness of the APDR slope and eliminate the EAD behaviors of myocytes; we explore the pacing dependence of such EAD suppression. In our 2D simulations, we observe that a spiral-turbulence (ST) state evolves into a state with a single rotating spiral (RS) if either (a) G(j) is large or (b) the maximum possible number of fibroblasts per myocyte, N-f(max), is large. We also observe that the minimum value of G(j) for the transition from the ST to the RS state decreases as N-f(max) increases. We find that, for the steep-APDR-induced ST state, once the MF coupling suppresses ST, the rotation period of a spiral in the RS state increases as (1) G(j) increases, with fixed N-f(max), and (2) N-f(max) increases, with fixed G(j). We obtain the boundary between the ST and RS stability regions in the N-f(max)-G(j) plane. In particular, for low values of N-f(max), the value of G(j) at the ST-RS boundary depends on the realization of the randomly distributed fibroblasts; this dependence decreases as N-f(max) increases. Our 3D studies show a similar transition from scroll-wave turbulence to a single rotating scroll-wave state because of the MF coupling. We examine the experimental implications of our study and propose that the suppression of (a) the steep slope of the APDR or (b) EADs can eliminate spiral- and scroll-wave turbulence in heterogeneous cardiac tissue with randomly distributed fibroblasts.
Abstract:
Image and video analysis requires rich features that can characterize various aspects of visual information. These rich features are typically extracted from the pixel values of images and videos, which requires a huge amount of computation and is seldom useful for real-time analysis. In contrast, compressed-domain analysis offers relevant information about the visual content, in the form of transform coefficients, motion vectors, quantization steps and coded block patterns, with minimal computational burden. The amount of work done in the compressed domain is much smaller than that in the pixel domain. This paper surveys various video analysis efforts published during the last decade across the spectrum of video compression standards. The survey covers only the analysis side, excluding the processing aspects of the compressed domain. The surveyed analyses span various computer vision applications such as moving-object segmentation, human action recognition, indexing, retrieval, face detection, video classification and object tracking in compressed videos.