903 results for "Algebraic path formulation"
Abstract:
This paper describes an algorithm for "direct numerical integration" of initial value Differential-Algebraic Inequalities (DAI) in a time-stepping fashion, using a sequential quadratic programming (SQP) solver to detect and satisfy active path constraints at each time step. Activating a path constraint generally increases the condition number of the Jacobian of the active discretized differential-algebraic equations (DAE); this difficulty is addressed by a regularization property of the alpha method. The algorithm is locally stable when index-1 and index-2 path constraints and bounds are active. Subject to available regularization, it is seen in the numerical examples to remain stable for active index-3 path constraints. For high-index active path constraints, the algorithm uses a user-selectable parameter to perturb the smaller singular values of the Jacobian, reducing the condition number so that the simulation can proceed. The algorithm can be used as a relatively cheap estimation tool for trajectory and control planning and in the context of model predictive control, and it can also generate initial guesses for the optimization variables of inequality-path-constrained dynamic optimization problems. The method is illustrated with examples from space vehicle trajectory and robot path planning.
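To make the stepping-plus-active-set idea concrete, here is a toy sketch (not the paper's alpha-method/SQP algorithm): an explicit Euler integrator that detects when the path constraint would be violated and projects the state back onto it. The scalar problem, step size and bound are all assumptions for illustration; for this simple bound the per-step SQP subproblem degenerates to a clip.

```python
# Toy sketch: time stepping with active path-constraint detection.
# Assumed problem: x'(t) = 1 with path constraint x(t) <= 0.5.
# A real implementation solves an SQP subproblem per step; for a simple
# bound the "SQP solve" reduces to a projection (clip).

def step(x, dt, f, g_max):
    """Advance one explicit-Euler step, then enforce the path constraint."""
    x_trial = x + dt * f(x)          # unconstrained predictor
    if x_trial > g_max:              # constraint becomes active
        return g_max, True           # project onto the constraint
    return x_trial, False

def integrate(x0, t_end, dt, f, g_max):
    x, t, activations = x0, 0.0, 0
    while t < t_end - 1e-12:
        x, active = step(x, dt, f, g_max)
        activations += active
        t += dt
    return x, activations

x_final, n_active = integrate(0.0, 1.0, 0.01, lambda x: 1.0, g_max=0.5)
print(x_final)   # the state rides the constraint once it activates
```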
Abstract:
This study explores the role and nature of knowledge management (KM) in small and medium-sized enterprises (SMEs). Even though the role of knowledge as a competitive advantage is commonly recognized in the SME sector, almost no attention has been paid to managing and developing knowledge in SMEs. This thesis consists of three sub-studies, reported in four individual essays. The results of the questionnaire study indicate that nearly all of the companies that responded (N = 108) found intangible assets, i.e. knowledge resources, to be their main source of competitive advantage. However, fewer than a third of the companies actively deal with knowledge management. The results also indicate a significant correlation between activity in knowledge management and sustainable organic growth of the company. The interview study (N = 10) explored the context and motives of the SMEs for managing their intangible assets, as well as their concrete knowledge management practices. It turned out that KM facilitated change management, clarification of the vision and new strategy formulation. All the interviewed companies were aiming at improved innovation processes, new ways of doing business and an increased “knowledge focus” in their business; nearly all also aspired to grow significantly. Thus, KM provides a strategy for these SMEs to guarantee their survival and sustainability in turbulent markets. The action research was a process to assess and develop intangible resources in three companies. The experienced benefits were the clarification of future focus and strategy, the creation of a common language for discussing strategic issues within the company, and an improved balance among different categories of intangible assets. After the process, all the case companies had developed in the chosen key areas. Thus, systematic knowledge management facilitated the implementation of the new strategic orientation (knowledge focusing).
The findings can be summarized in two main points. First, knowledge management seems to serve the purposes of change, renewal and new strategic orientation in SMEs. It also seems to be closely related to organic growth and innovation. All of these factors can be considered dimensions of entrepreneurship. Second, the conscious development of intangible assets can improve the balance among different categories of intangible assets and the overall knowledge focusing of the business. In the case companies, this in turn facilitated the path to improved overall performance.
Abstract:
Fujikawa's method of evaluating the anomalies is extended to the on-shell supersymmetric (SUSY) theories. The supercurrent and the superconformal current anomalies are evaluated for the Wess-Zumino model using the background-field formulation and heat-kernel regularization. We find that the regularized Jacobians for SUSY and superconformal transformations are finite. The results can be expressed in a form such that there is no supercurrent anomaly but a finite nonzero superconformal anomaly, in agreement with similar results obtained using other methods.
Abstract:
Conditions for quantum topological invariance of classically topological field theories in the path integral formulation are discussed. Both the three-dimensional Chern-Simons system and a Witten-type topological field theory are shown to satisfy these conditions.
Abstract:
The use of algebraic techniques to solve combinatorial problems is studied in this paper. We formulate the rainbow connectivity problem as a system of polynomial equations. We first consider the case of two colors for which the problem is known to be hard and we then extend the approach to the general case. We also present a formulation of the rainbow connectivity problem as an ideal membership problem.
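The polynomial encoding itself is not reproduced here, but the underlying decision problem can be sketched directly: the check below verifies rainbow connectivity of a given edge coloring by searching over (vertex, used-color-set) states. The graph, coloring and function names are assumptions for illustration.

```python
from itertools import combinations
from collections import deque

def rainbow_connected(n, colored_edges):
    """colored_edges: dict {(u, v): color}. Returns True iff every pair of
    vertices is joined by a path whose edge colors are pairwise distinct."""
    adj = {v: [] for v in range(n)}
    for (u, v), c in colored_edges.items():
        adj[u].append((v, c))
        adj[v].append((u, c))

    def reachable(s, t):
        # BFS over states (vertex, frozenset of colors already used)
        seen = {(s, frozenset())}
        queue = deque(seen)
        while queue:
            v, used = queue.popleft()
            if v == t:
                return True
            for w, c in adj[v]:
                if c not in used:
                    state = (w, used | {c})
                    if state not in seen:
                        seen.add(state)
                        queue.append(state)
        return False

    return all(reachable(s, t) for s, t in combinations(range(n), 2))

# A 4-cycle with two alternating colors is rainbow connected: opposite
# vertices are joined by a 2-edge path with distinct colors.
c4 = {(0, 1): 'r', (1, 2): 'b', (2, 3): 'r', (3, 0): 'b'}
print(rainbow_connected(4, c4))
```

The state space here is exponential in the number of colors, which is consistent with the hardness of the two-color case mentioned above.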
Abstract:
Cache analysis plays a very important role in obtaining precise Worst-Case Execution Time (WCET) estimates of programs for real-time systems. While Abstract Interpretation based approaches are almost universally used for cache analysis, they fail to exploit a unique feature of the problem: it is not necessary to find the guaranteed cache behavior that holds across all executions of a program; we only need the cache behavior along one particular program path, namely the path with the maximum execution time. In this work, we introduce the concept of cache miss paths, which allows us to use worst-case path information to improve the precision of AI-based cache analysis. We use Abstract Interpretation to determine the cache miss paths, and then integrate them in the Implicit Path Enumeration Technique (IPET) formulation. An added advantage is that this further allows us to use infeasible-path information for cache analysis. Experimentally, our approach gives more precise WCETs compared to AI-based cache analysis, and we also provide techniques to trade off analysis time against precision for scalability.
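As a rough illustration of the IPET side of such a formulation (not the authors' analysis): on a loop-free control-flow graph, the ILP's maximization reduces to a longest-path computation over block costs, with a miss penalty added to blocks classified as cache misses. All costs and the miss classification below are invented for illustration.

```python
# Toy IPET-flavoured WCET bound on a loop-free CFG: with no loops the
# ILP maximum reduces to a longest-path computation over block costs.

def wcet_dag(succ, cost, miss_penalty, miss_blocks, entry, exit_):
    """Longest path via memoized DFS (the CFG here is acyclic)."""
    memo = {}
    def longest(v):
        if v in memo:
            return memo[v]
        c = cost[v] + (miss_penalty if v in miss_blocks else 0)
        memo[v] = c + (max((longest(w) for w in succ.get(v, [])), default=0)
                       if v != exit_ else 0)
        return memo[v]
    return longest(entry)

succ = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D']}
cost = {'A': 2, 'B': 5, 'C': 3, 'D': 1}
# Suppose the cache analysis classifies block C as a guaranteed miss:
bound = wcet_dag(succ, cost, miss_penalty=10, miss_blocks={'C'}, entry='A', exit_='D')
print(bound)   # the miss penalty steers the worst-case path through C
```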
Abstract:
A ray-tracing-based path-length calculation is investigated for polarized light transport in pixel space. Tomographic imaging using polarized light transport is promising for applications in optical projection tomography of small animals and of turbid media with low scattering. Polarized light transport through a medium can show complex effects due to interactions such as optical rotation of linearly polarized light, birefringence, diattenuation and interior refraction. Here we investigate the effects of refraction of polarized light in a non-scattering medium. This step is used to obtain the initial absorption estimate, which can serve as a prior in a Monte Carlo (MC) program that simulates the transport of polarized light through a scattering medium, assisting in faster convergence to the final estimate. The reflectances for p-polarized (parallel) and s-polarized (perpendicular) light are different, and hence there is a difference in the intensities that reach the detector. The algorithm computes the length of the ray in each pixel along the refracted path, and this is used to build the weight matrix. This weight matrix, with corrected ray path lengths, and the resultant intensity reaching the detector for each ray are used in the algebraic reconstruction technique (ART). The proposed method is tested with numerical phantoms at various noise levels. The refraction errors due to regions of different refractive index are discussed, and the difference in intensities with polarization is considered. The improvements in reconstruction using the applied correction are presented. This is achieved by tracking both the path and the intensity of each ray as it traverses the medium.
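The weight-matrix construction and the ART update can be sketched as follows, with two simplifying assumptions: the ray is horizontal (a real implementation traverses the refracted polyline, Siddon-style) and the measured value is invented for illustration.

```python
# Sketch: one weight-matrix row from per-pixel ray lengths, then one
# ART (Kaczmarz) update. The ray is horizontal and the medium
# non-refracting here; the paper additionally bends rays at interfaces.

def row_for_horizontal_ray(nx, ny, row_index, pixel_size=1.0):
    """Length of the ray inside each pixel of an nx-by-ny grid (flattened)."""
    w = [0.0] * (nx * ny)
    for ix in range(nx):
        w[row_index * nx + ix] = pixel_size   # full pixel width traversed
    return w

def art_update(x, w, measured, relaxation=1.0):
    """One Kaczmarz step: project x onto the hyperplane w . x = measured."""
    norm2 = sum(wi * wi for wi in w)
    if norm2 == 0.0:
        return x
    r = (measured - sum(wi * xi for wi, xi in zip(w, x))) / norm2
    return [xi + relaxation * r * wi for xi, wi in zip(x, w)]

nx = ny = 3
x = [0.0] * (nx * ny)               # absorption image, initial estimate
w = row_for_horizontal_ray(nx, ny, row_index=1)
x = art_update(x, w, measured=0.6)  # line integral observed for this ray
print([round(v, 3) for v in x])
```

The update spreads the observed line integral evenly over the pixels the ray crosses; with refraction-corrected lengths the spread becomes non-uniform.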
Abstract:
The algebraic expressions for the anharmonic contributions to the Debye-Waller factor up to O(λ²) and O(λ⁴) [where Q is the scattering wave-vector] have been derived in a form suitable for cubic metals with small ion cores where the interatomic potential extends to many neighbours. This has been achieved in terms of various wave-vector-dependent tensors, following the work of Shukla and Taylor (1974) on the cubic anharmonic Helmholtz free energy. The contribution to the various wave-vector-dependent tensors from the Coulomb and electron-ion terms in the interatomic metallic potential has been obtained by the Ewald procedure. All the restricted multiple whole-Brillouin-zone (B.Z.) sums are reduced to single whole-B.Z. sums by using the plane-wave representation of the delta function. These single whole-B.Z. sums are further reduced to the 1/48th portion of the B.Z. following Shukla and Wilk (1974) and Shukla and Taylor (1974). Numerical calculations have been performed for sodium, where the Born-Mayer term in the interatomic potential has been neglected because it is small [Vosko (1964)]. In order to compare our calculated results with the experimental results of Dawton (1937), we have also calculated the ratio of the intensities at different temperatures for the lowest five reflections (110), (200), (220), (310) and (400). Our calculated quasi-harmonic results agree reasonably well with the experimental results at temperatures (T) of the order of the Debye temperature (Θ_D). For T ≫ Θ_D, our calculated anharmonic results are found to be in good agreement with the experimental results. The anomalous terms in the Debye-Waller factor are found not to be negligible for certain reflections even for T ≈ Θ_D. At temperatures T ≫ Θ_D, where the temperature is of the order of the melting temperature (T_m), the anomalous terms are found to be important for almost all of the five reflections.
Abstract:
It is a well-known result that Feynman's path integral (FPI) approach to quantum mechanics is equivalent to the Schrödinger equation when the Wiener-Lebesgue measure is used as the integration measure. This has little practical applicability due to the great algebraic complexity involved, and in fact almost all applications of the FPI ("practical calculations") are done using a Riemann measure. In this paper we present an expansion of the FPI to all orders in time, in a quest for a representation of the latter solely in terms of differentiable trajectories and the Riemann measure. We show that this expansion agrees with a similar expansion obtained from the Schrödinger equation only up to first order in a Riemann-integral context, although by chance both expansions agree for the free-particle and harmonic-oscillator cases. Our results permit, from the mathematical point of view, an estimate of the many errors made in "practical" calculations of the FPI appearing in the literature and, from the physical point of view, support the stochastic approach to the problem.
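For reference, the standard short-time form of the propagator from which such order-by-order comparisons start is (this is textbook material, not the authors' all-orders expansion):

```latex
K(x', x; \epsilon) \;\approx\;
\sqrt{\frac{m}{2\pi i\hbar\epsilon}}\,
\exp\!\left\{\frac{i}{\hbar}\left[\frac{m(x'-x)^2}{2\epsilon}
 - \epsilon\, V\!\left(\frac{x+x'}{2}\right)\right]\right\}
```

For V = 0 and the harmonic oscillator this low-order form already reproduces the exact evolution, which is consistent with the coincidental agreement noted in the abstract.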
Abstract:
A rapid and simple method for procaine determination was developed by flow injection analysis (FIA) using a screen-printed carbon electrode (SPCE) as the amperometric detector. The method is based on the oxidation of the amine/hydroxylamine group of procaine, monitored at 0.80 V on the SPCE in sodium acetate solution at pH 6.0. Using the best experimental conditions (pH 6.0, flow rate of 3.8 mL min⁻¹, sample volume of 100 μL and analytical path of 30 cm), a linear calibration curve can be constructed from 9.0 × 10⁻⁶ to 1.0 × 10⁻⁴ mol L⁻¹. The relative standard deviation for 5.0 × 10⁻⁵ mol L⁻¹ procaine (15 repetitions using the same electrode) is 3.2%, and the calculated detection limit is 6.0 × 10⁻⁶ mol L⁻¹. Recoveries obtained for procaine gave mean values from 94.8 to 102.3%, and an analytical frequency of 36 injections per hour was achieved. The method was successfully applied to the determination of procaine in a pharmaceutical formulation without any pre-treatment; the results are in good accordance with the values declared by the manufacturer and with an official method based on spectrophotometric analysis. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
A boundary element method (BEM) formulation to predict the behavior of solids exhibiting displacement (strong) discontinuities is presented. In this formulation, the effects of the displacement jump of a discontinuity interface embedded in an internal cell are reproduced by an equivalent strain field over the cell. To compute the stresses, this equivalent strain field is taken as the inelastic part of the total strain. As a consequence, the non-linear BEM integral equations that result from the proposed approach are similar to those of the implicit BEM based on initial strains. Since discontinuity interfaces can be introduced inside a cell independently of the cell boundaries, the proposed BEM formulation, combined with a tracking scheme to trace the discontinuity path during the analysis, allows for arbitrary discontinuity propagation on a fixed mesh. A simple technique to track the crack path is outlined: a polygonal line is constructed from segments inside the cells in which the assumed failure criterion is reached. Two experimental concrete fracture tests were analyzed to assess the performance of the proposed formulation.
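The polygonal-line tracking idea can be sketched as follows: grow the crack tip segment by segment along a local failure orientation. In the BEM formulation the orientation comes from the stress state in each cell; the analytic direction field, step length and starting point below are assumptions for illustration.

```python
import math

# Minimal crack-tracking sketch: build a polygonal line by stepping along
# a local failure direction evaluated at the current tip. Here the
# direction is an assumed analytic field, not a computed stress state.

def track_crack(start, direction_at, step, n_segments):
    path = [start]
    x, y = start
    for _ in range(n_segments):
        theta = direction_at(x, y)              # local failure orientation
        x, y = x + step * math.cos(theta), y + step * math.sin(theta)
        path.append((x, y))
    return path

# Assumed field: the crack gently curves upward as it advances.
path = track_crack((0.0, 0.0),
                   direction_at=lambda x, y: 0.1 * x,
                   step=0.5, n_segments=4)
print(path[-1])
```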
Abstract:
An affine sl(n + 1) algebraic construction of the basic constrained KP hierarchy is presented. This hierarchy is analyzed using two approaches, namely a linear matrix eigenvalue problem on a Hermitian symmetric space and the constrained KP Lax formulation, and these approaches are shown to be equivalent. The model is recognized to be the generalized non-linear Schrödinger (GNLS) hierarchy, and it is used as a building block for a new class of constrained KP hierarchies. These constrained KP hierarchies are connected via similarity-Bäcklund transformations and interpolate between the GNLS and multi-boson KP-Toda hierarchies. Our construction uncovers the origin of the Toda lattice structure behind the latter hierarchy. © 1995 American Institute of Physics.
Abstract:
This work presents an application of a Boundary Element Method (BEM) formulation for the analysis of anisotropic bodies using an isotropic fundamental solution. The anisotropy is considered by expressing a residual elastic tensor as the difference between the anisotropic and isotropic elastic tensors. Internal variables and cell discretization of the domain are considered. Masonry is a composite material consisting of bricks (masonry units), mortar and the bond between them, and it is necessary to take account of anisotropy in this type of structure. The paper presents the formulation, the elastic tensor of the anisotropic medium properties and the algebraic procedure. Two examples are shown to validate the formulation, and good agreement was obtained between analytical and numerical results. Two further examples, in which masonry walls were simulated, demonstrate that the presented formulation shows close agreement between the BE numerical results and different Finite Element (FE) models. © 2012 Elsevier Ltd.
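The residual-tensor idea can be sketched in plane stress with Voigt 3x3 matrices: C_res = C_aniso - C_iso. The material constants below are illustrative assumptions, not values from the paper.

```python
# Sketch of the residual elastic tensor in plane stress (Voigt 3x3):
# C_res = C_aniso - C_iso, with an assumed orthotropic material and an
# assumed isotropic reference. Constants are illustrative only.

def c_isotropic(E, nu):
    f = E / (1.0 - nu * nu)
    return [[f, f * nu, 0.0],
            [f * nu, f, 0.0],
            [0.0, 0.0, f * (1.0 - nu) / 2.0]]

def c_orthotropic(E1, E2, nu12, G12):
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    return [[E1 / d, nu12 * E2 / d, 0.0],
            [nu12 * E2 / d, E2 / d, 0.0],
            [0.0, 0.0, G12]]

def residual(ca, ci):
    return [[a - b for a, b in zip(ra, ri)] for ra, ri in zip(ca, ci)]

C_res = residual(c_orthotropic(E1=10e3, E2=5e3, nu12=0.2, G12=2e3),
                 c_isotropic(E=8e3, nu=0.2))
print(C_res[0][0])   # nonzero: the medium deviates from the isotropic reference
```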
Abstract:
In general, pattern recognition techniques impose a high computational burden for learning the discriminating functions responsible for separating samples from distinct classes. As such, several studies have sought to employ machine learning algorithms in the context of big-data classification problems. Research in this area ranges from Graphics Processing Unit-based implementations to mathematical optimizations, the main drawback of the former approaches being their dependence on the graphics card. Here, we propose an architecture-independent optimization approach for the optimum-path forest (OPF) classifier, designed using a theoretical formulation that relates the minimum spanning tree to the minimum spanning forest generated by the OPF over the training dataset. The experiments have shown that the proposed approach can be faster than the traditional one on five public datasets, while being as accurate as the original OPF. (C) 2014 Elsevier B. V. All rights reserved.
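The MST/OPF relationship can be sketched on toy data: prototypes are taken at the endpoints of MST edges that join different classes, training costs follow the fmax path cost, and a test sample receives the label of the training sample minimizing max(cost, distance). The 1-D data and Euclidean distance are assumptions for illustration.

```python
import heapq

# Sketch of OPF training via the MST connection (toy 1-D data).

def train_opf(points, labels):
    n = len(points)
    dist = lambda i, j: abs(points[i] - points[j])
    # Prim's MST over the training samples
    in_tree, parent, key = [False] * n, [None] * n, [float('inf')] * n
    key[0] = 0.0
    for _ in range(n):
        u = min((k, i) for i, k in enumerate(key) if not in_tree[i])[1]
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist(u, v) < key[v]:
                key[v], parent[v] = dist(u, v), u
    # prototypes: endpoints of MST edges joining different classes
    prototypes = set()
    for v in range(n):
        if parent[v] is not None and labels[v] != labels[parent[v]]:
            prototypes.update((v, parent[v]))
    # fmax costs from the prototypes (Dijkstra variant with max edge cost)
    cost = [0.0 if i in prototypes else float('inf') for i in range(n)]
    heap = [(cost[i], i) for i in prototypes]
    heapq.heapify(heap)
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost[u]:
            continue
        for v in range(n):
            c2 = max(c, dist(u, v))
            if c2 < cost[v]:
                cost[v] = c2
                heapq.heappush(heap, (c2, v))
    return cost

def classify(x, points, labels, cost):
    best = min(range(len(points)),
               key=lambda s: max(cost[s], abs(points[s] - x)))
    return labels[best]

points, labels = [0.0, 1.0, 5.0, 6.0], ['a', 'a', 'b', 'b']
cost = train_opf(points, labels)
print(classify(2.0, points, labels, cost))
```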
Abstract:
This paper aims to contribute to the three-dimensional generalization of the numerical prediction of crack propagation through the formulation of finite elements with embedded discontinuities. The analysis of crack propagation in two-dimensional problems yields lines of discontinuity that can be tracked in a relatively simple way through the sequential construction of straight line segments oriented according to the direction of failure within each finite element in the solid. In three-dimensional analysis, the construction of the discontinuity path is more complex because it requires the creation of plane surfaces within each element that must be continuous between elements. In the method proposed by Chaves (2003), the crack is determined by solving a problem analogous to the heat conduction problem, established from local failure orientations based on the stress state of the mechanical problem. To minimize the computational effort, this paper proposes a new strategy whereby the tracking of the discontinuity path is restricted to the domain formed by the elements near the crack surface that develops along the loading process. The proposed methodology is validated by performing three-dimensional analyses of benchmark experimental fracture problems and comparing the results with those reported in the literature.
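The restriction of the tracking analysis to a neighborhood of the crack can be sketched as a simple element filter: only elements whose centroids lie within a radius of the current crack-front points enter the tracking problem. The centroids, front points and radius below are assumptions for illustration.

```python
# Sketch of the restricted tracking domain: keep only elements whose
# centroids lie within a given radius of the current crack front.

def restricted_domain(centroids, front_points, radius):
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    r2 = radius * radius
    return [i for i, c in enumerate(centroids)
            if any(d2(c, f) <= r2 for f in front_points)]

centroids = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 5.0), (0.5, 0.5, 0.0)]
front = [(0.0, 0.0, 0.0)]
active = restricted_domain(centroids, front, radius=1.5)
print(active)   # the far-away element is excluded from the tracking analysis
```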