937 results for parametric decomposition
Abstract:
Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are: multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial lines and various parallel-processing configurations with multiple product classes and job circulation due to random part failures. In addition, correction terms obtained via regression analysis were added to the approximations in order to minimize the error between the analytical approximations and the simulation models. Markovian and general manufacturing systems were studied, with multiple product classes, job circulation due to failures, and fork-join stations to model parallel processing. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to handle more than two products. Numerical comparisons showed that the approximations performed remarkably well when the correction factors were used. On average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case and from 26.39% to 7.23% in the general case.
All the equations stated in the analytical formulations were implemented as a set of Matlab scripts. Using these scripts, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate different system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
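The thesis's full approximations are not reproduced in this abstract. As a minimal sketch of the underlying queueing idea, the flow time at a single Markovian station with job recirculation due to failures can be estimated by inflating the arrival rate; the function name and the geometric-recirculation assumption here are illustrative, not the author's formulation:

```python
def mm1_flow_time(lam, mu, p_fail=0.0):
    """Expected flow time at an M/M/1 station where a fraction p_fail
    (0 <= p_fail < 1) of jobs fails and recirculates, so the effective
    arrival rate is lam / (1 - p_fail).  Returns None if unstable."""
    lam_eff = lam / (1.0 - p_fail)   # failed jobs re-enter the queue
    rho = lam_eff / mu               # net traffic intensity
    if rho >= 1.0:
        return None                  # queue grows without bound
    return 1.0 / (mu - lam_eff)      # W = 1 / (mu - lambda_eff)
```

For example, with arrival rate 2, service rate 5, and a 20% failure rate, the effective arrival rate is 2.5 and the predicted flow time is 0.4.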
Abstract:
This thesis aims to investigate the vibrational characteristics of magnetic resonance elastography (MRE) of the brain. MRE is a promising, non-invasive methodology for mapping the shear stiffness of the brain. A mechanical actuator shakes the brain and generates shear waves, which are then imaged with a special MRI sequence sensitive to sub-millimeter displacements. This research focuses on exploring the profile of the vibrations utilized in brain elastography, with the ultimate aim of investigating nonlinear behavior of the tissue. The first objective seeks to demonstrate the effects of encoding off-frequency vibrations using standard MRE methodologies. Vibrations of this nature can arise from nonlinearities in the system and contaminate the results of the measurement. The second objective is to probe nonlinearity in the dynamic brain system using MRE. A non-parametric decomposition technique, novel to the MRE field, is introduced and investigated.
Abstract:
Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, risk management, and the study of monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have been adopted mainly to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of these two classes of models to test whether no-arbitrage restrictions affect forecasting. We construct cross-sectional (arbitrage-allowing) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts: arbitrage-free versions achieve smaller overall biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding return premia indicates that the superior performance of the no-arbitrage versions is due to better identification of the bond risk premium.
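The bias and RMSE comparison used to rank the model versions can be stated in a few lines. A generic sketch of the two metrics (not the authors' code):

```python
import math

def bias_and_rmse(forecasts, realized):
    """Mean forecast error (bias) and root mean square error
    of a forecast series against realized values."""
    errors = [f - r for f, r in zip(forecasts, realized)]
    n = len(errors)
    bias = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return bias, rmse
```

A systematically high forecast shows up in the bias, while offsetting errors cancel in the bias but not in the RMSE.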
Abstract:
Statically balanced compliant mechanisms require no holding force throughout their range of motion while maintaining the advantages of compliant mechanisms. In this paper, a postbuckled fixed-guided beam is proposed to provide the negative stiffness to balance the positive stiffness of a compliant mechanism. To that end, a curve decomposition modeling method is presented to simplify the large deflection analysis. The modeling method facilitates parametric design insight and elucidates key points on the force-deflection curve. Experimental results validate the analysis. Furthermore, static balancing with fixed-guided beams is demonstrated for a rectilinear proof-of-concept prototype.
Abstract:
In this paper we address a recent reduction method, the Proper Generalized Decomposition (PGD), a discretization technique based on separated representations of the unknown fields that is especially well suited to solving multidimensional parametric equations. Here it is applied to the solution of dynamics problems. We focus on the dynamic analysis of a one-dimensional rod with a unit harmonic load of frequency ω applied at a point of interest. We present the application of the PGD methodology to this problem in order to approximate the displacement field as a sum of separated functions. As new variables of the problem we consider, in addition to the frequency, model parameters associated with the material characteristics. Finally, the quality of the results is assessed with an example.
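PGD itself builds the separated representation directly from the weak form of the parametric problem, which is beyond an abstract. As a rough stdlib-only illustration of the underlying idea, approximating a field u(x, ω) by a sum of products of one-dimensional functions, the following greedy rank-one enrichment works on a field sampled as a matrix; the function names and the alternating fixed-point scheme are illustrative assumptions:

```python
def rank_one_mode(U, iters=50):
    """One enrichment step: find vectors F (space) and G (parameter)
    such that F[i]*G[j] approximates U[i][j], via alternating fixed point."""
    m, n = len(U), len(U[0])
    G = [1.0] * n
    F = [0.0] * m
    for _ in range(iters):
        gg = sum(g * g for g in G)
        F = [sum(U[i][j] * G[j] for j in range(n)) / gg for i in range(m)]
        ff = sum(f * f for f in F)
        if ff == 0.0:                 # residual is (numerically) zero
            return F, [0.0] * n
        G = [sum(U[i][j] * F[i] for i in range(m)) / ff for j in range(n)]
    return F, G

def separated_approximation(U, modes=3, iters=50):
    """Greedy sum of separated modes: U ~ sum_k outer(F_k, G_k)."""
    R = [row[:] for row in U]                       # residual
    approx = [[0.0] * len(U[0]) for _ in U]
    for _ in range(modes):
        F, G = rank_one_mode(R, iters)
        for i in range(len(U)):
            for j in range(len(U[0])):
                term = F[i] * G[j]
                approx[i][j] += term
                R[i][j] -= term
    return approx
```

For a field that is exactly a product of a spatial and a parametric function, a single mode reproduces it.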
Abstract:
The efficiency literature, using both parametric and non-parametric methods, has focused mainly on cost efficiency analysis rather than on profit efficiency. In for-profit organisations, however, the measurement of profit efficiency and its decomposition into technical and allocative efficiency is particularly relevant. In this paper a newly developed method is used to measure profit efficiency and to identify the sources of any shortfall in profitability (technical and/or allocative inefficiency). The method is applied to a set of Portuguese bank branches, first assuming a long-run and then a short-run profit maximisation objective. In the long run, most of the scope for profit improvement of bank branches lies in becoming more allocatively efficient; in the short run, most of the profit gain can be realised through higher technical efficiency. © 2003 Elsevier B.V. All rights reserved.
Abstract:
Productivity at the macro level is a complex concept but also arguably the most appropriate measure of economic welfare. There is currently limited research on the various approaches that can be used to measure it, and especially on the relative accuracy of those approaches. This thesis has two main objectives: first, to detail some of the most common productivity measurement approaches and assess their accuracy under a number of conditions; and second, to present an up-to-date application of productivity measurement and provide some guidance on selecting between sometimes conflicting productivity estimates. With regard to the first objective, the thesis provides a discussion of the issues specific to macro-level productivity measurement and of the strengths and weaknesses of the three main types of approaches available, namely index-number approaches (represented by Growth Accounting), non-parametric distance functions (DEA-based Malmquist indices) and parametric production functions (COLS- and SFA-based Malmquist indices). The accuracy of these approaches is assessed through simulation analysis, which yielded some interesting findings. The most important were that deterministic approaches are quite accurate even when the data is moderately noisy; that no approach remained accurate when noise was more extensive; that functional form misspecification has a severe negative effect on the accuracy of the parametric approaches; and that increased volatility in inputs and prices from one period to the next adversely affects all the approaches examined. The application was based on the EU KLEMS (2008) dataset and revealed that the different approaches do in fact result in different productivity change estimates, at least for some of the countries assessed.
To assist researchers in selecting between conflicting estimates, a new three-step selection framework is proposed, based on the findings of the simulation analyses and on established diagnostics/indicators. An application of this framework is also provided, based on the EU KLEMS dataset.
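Of the three families of approaches discussed, the index-number one is the simplest to state. A minimal sketch of growth accounting via the Solow residual, with log growth rates and a placeholder capital share (the value 1/3 is a common convention, not taken from the thesis):

```python
import math

def solow_residual_tfp_growth(y0, y1, k0, k1, l0, l1, alpha=1/3):
    """Growth-accounting TFP growth: log output growth minus the
    share-weighted log growth of capital (share alpha) and labor."""
    dy = math.log(y1 / y0)
    dk = math.log(k1 / k0)
    dl = math.log(l1 / l0)
    return dy - alpha * dk - (1 - alpha) * dl
```

If output growth is fully explained by share-weighted input growth, the residual is zero; any excess output growth is attributed to TFP.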
Abstract:
2000 Mathematics Subject Classification: 94A12, 94A20, 30D20, 41A05.
Abstract:
Finance is one of the fastest growing areas of modern applied mathematics, with real-world applications. The interest of this branch of applied mathematics is best described by an example involving shares. Shareholders of a company receive dividends, which come from the profit made by the company. The proceeds of the company, once it is taken over or wound up, will also be distributed to shareholders. Shares therefore have a value that reflects the views of investors about the likely dividend payments and capital growth of the company; such value is quantified by the share price on stock exchanges. Financial modelling thus serves to understand the correlations between asset prices and buy/sell movements in order to reduce risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluations of buy/sell contracts can be made. There are other financial activities, and it is not the intention of this paper to discuss all of them. The main concern of this paper is to propose a parallel algorithm for the numerical solution of a European option. The paper is organised as follows. First, a brief introduction is given to a simple mathematical model for European options and to possible numerical schemes for solving it. Second, the Laplace transform is applied to the mathematical model, leading to a set of parametric equations whose solutions may be found concurrently. The numerical inverse Laplace transform is carried out by means of an inversion algorithm developed by Stehfest. The scalability of the algorithm in a distributed environment is demonstrated. Third, the performance of the present algorithm is compared with that of a spatial domain decomposition developed particularly for the time-dependent heat equation. Finally, a number of issues are discussed and future work is suggested.
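The inversion step uses the Gaver-Stehfest algorithm. A minimal sketch in its standard textbook form (not the paper's parallel implementation); note that each transform evaluation F(k ln2 / t) is independent, which is what makes the concurrent solution of the parametric equations possible:

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j) /
                  (math.factorial(half - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def invert_laplace(F, t, N=12):
    """Numerically invert the Laplace transform F(s) at time t > 0:
    f(t) ~ (ln2/t) * sum_k V_k * F(k*ln2/t)."""
    V = stehfest_coefficients(N)
    ln2_t = math.log(2.0) / t
    # The N evaluations of F are independent and could run concurrently.
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))
```

For example, F(s) = 1/s inverts to the constant 1, and F(s) = 1/(s+1) inverts to exp(-t) to several digits for smooth transforms at moderate N.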
Abstract:
A temperature pause introduced in a simple single-step thermal decomposition of iron, in the presence of silver seeds formed in the same reaction mixture, gives rise to novel compact heterostructures: brick-like Ag@Fe3O4 core-shell nanoparticles. This novel method is relatively easy to implement and could help to overcome the challenge of obtaining a multifunctional heteroparticle in which a noble metal is surrounded by magnetite. Structural analyses of the samples show 4 nm silver nanoparticles wrapped within compact cubic external structures of Fe oxide with a curious rectangular shape. The magnetic properties indicate near-superparamagnetic behavior with a weak hysteresis at room temperature. The value of the anisotropy involved makes these particles candidates for potential applications in nanomedicine.
Abstract:
Cellulose acetates with different degrees of substitution (DS, from 0.6 to 1.9) were prepared from previously mercerized linter cellulose, in a homogeneous medium, using N,N-dimethylacetamide/lithium chloride as the solvent system. The influence of the degree of substitution on the properties of the cellulose acetates was investigated using thermogravimetric analysis (TGA). Quantitative methods were applied to the thermogravimetric curves in order to determine the apparent activation energy (Ea) of the thermal decomposition of untreated and mercerized celluloses and of the cellulose acetates. Ea values were calculated using Broido's method under dynamic conditions. Ea values of 158 and 187 kJ mol⁻¹ were obtained for untreated and mercerized cellulose, respectively. A previous study showed that C6OH is the most reactive site for acetylation, probably due to steric hindrance at C2 and C3. C6OH takes part in the first step of cellulose decomposition, leading to the formation of levoglucosan; when it is changed to C6OCOCH3, the results indicate that the mechanism of thermal decomposition shifts to one with a lower Ea. A linear correlation between Ea and the DS of the acetates prepared in the present work was identified.
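Broido's method linearises the TGA curve: ln(ln(1/y)) plotted against 1/T has slope -Ea/R, where y is the fraction of undecomposed material. A minimal sketch using a generic least-squares fit (not the authors' code):

```python
import math

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def broido_activation_energy(T_kelvin, y_fraction):
    """Estimate Ea (J/mol) by Broido's method: fit a straight line to
    ln(ln(1/y)) versus 1/T; the slope is -Ea/R."""
    xs = [1.0 / T for T in T_kelvin]
    ys = [math.log(math.log(1.0 / y)) for y in y_fraction]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return -slope * R_GAS
```

On synthetic data generated from a known Ea, the fit recovers that value.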
Abstract:
The thermal behavior of two polymorphic forms of rifampicin was studied by DSC and TG/DTG. The thermoanalytical results clearly showed the differences between the two crystalline forms. Polymorph I was the more thermally stable form: its DSC curve showed no fusion, and the thermal decomposition process occurred around 245 ºC. The DSC curve of polymorph II showed two consecutive events, an endothermic event (Tpeak = 193.9 ºC) and an exothermic event (Tpeak = 209.4 ºC), due to a melting process followed by recrystallization, attributed to the conversion of form II into form I. Isothermal and non-isothermal thermogravimetric methods were used to determine the kinetic parameters of the thermal decomposition process. In the non-isothermal experiments, the activation energy (Ea) was derived from the plot of log β vs 1/T, yielding values of 154 and 123 kJ mol⁻¹ for polymorphs I and II, respectively. In the isothermal experiments, Ea was obtained from the plot of ln t vs 1/T at a constant conversion level; the mean values found for forms I and II were 137 and 144 kJ mol⁻¹, respectively.
Abstract:
The alkali-aggregate reaction (AAR) is a chemical reaction that causes heterogeneous expansion of concrete and degrades important properties such as Young's modulus, reducing the structure's useful life. In this study, a parametric model is employed to determine the spatial distribution of the concrete expansion, combining normalized factors that influence the reaction through an AAR expansion law. Optimization techniques were employed to fit the numerical results to observations from a real structure. A three-dimensional version of the model was implemented in a commercial finite element package (ANSYS) and verified against the analysis of an accelerated mortar test. Using the same methodology, two AAR mathematical descriptions of the mechanical phenomenon were compared with each other and with an expansion curve obtained from experiment. Some parametric studies are also presented. The numerical results compared very well with the experimental data, validating the proposed method.
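The paper's expansion law and optimization scheme are not given in the abstract. As an illustrative sketch only, a Larive-type expansion law (a common AAR choice, assumed here rather than taken from the paper) can be adjusted to measured expansions with a brute-force parameter search:

```python
import math

def larive_expansion(t, eps_inf, tau_c, tau_l):
    """Larive-type AAR expansion law: asymptotic expansion eps_inf,
    characteristic time tau_c, latency time tau_l (an assumed form)."""
    return (eps_inf * (1.0 - math.exp(-t / tau_c)) /
            (1.0 + math.exp(-(t - tau_l) / tau_c)))

def fit_expansion(times, measured, eps_grid, tau_c_grid, tau_l_grid):
    """Brute-force parameter adjustment: pick the (eps_inf, tau_c, tau_l)
    triple minimizing the squared misfit to the measured expansion."""
    best, best_err = None, float("inf")
    for e in eps_grid:
        for tc in tau_c_grid:
            for tl in tau_l_grid:
                err = sum((larive_expansion(t, e, tc, tl) - m) ** 2
                          for t, m in zip(times, measured))
                if err < best_err:
                    best, best_err = (e, tc, tl), err
    return best
```

In practice a gradient-based or least-squares optimizer would replace the grid search, but the misfit-minimization structure is the same.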
Abstract:
This work presents the analysis of nonlinear aeroelastic time series from wing vibrations due to airflow separation during wind tunnel experiments. The surrogate data method is used to justify the application of nonlinear time series analysis to the aeroelastic system, after ruling out nonstationarity. The singular value decomposition (SVD) approach is used to reconstruct the state space, reducing noise in the aeroelastic time series. Direct analysis of the reconstructed trajectories in the state space and the determination of Poincaré sections are employed to investigate complex dynamics and chaotic patterns. With the reconstructed state spaces, qualitative analyses may be performed, and the evolution of the attractors under parametric variation is presented. Overall, the results reveal complex system dynamics associated with highly separated flow effects together with nonlinear coupling between aeroelastic modes. Bifurcations of the nonlinear aeroelastic system are observed in two investigations: aeroelastic evolutions induced by oscillations with varying freestream speed, and aeroelastic evolutions at constant freestream speed with varying oscillations. Finally, Lyapunov exponents are calculated in order to assess chaotic behavior. The Poincaré mappings also suggest bifurcations and chaos, reinforced by the attainment of positive maximum Lyapunov exponents. Copyright (C) 2009 F. D. Marques and R. M. G. Vasconcellos.
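SVD-based state-space reconstruction (the Broomhead-King approach) starts from a delay-embedded trajectory matrix; the leading singular directions carry the signal and the trailing ones mostly noise. A stdlib-only sketch of the embedding and of extracting the dominant direction by power iteration (the full SVD and the paper's processing chain are not reproduced):

```python
import math

def trajectory_matrix(series, dim):
    """Delay-embedding trajectory matrix: each row is a window
    [x_i, x_{i+1}, ..., x_{i+dim-1}] of the scalar time series."""
    return [series[i:i + dim] for i in range(len(series) - dim + 1)]

def dominant_singular_direction(X, iters=200):
    """Power iteration on X^T X: returns the leading right singular
    vector, i.e. the embedding-space direction of maximum variance."""
    dim = len(X[0])
    v = [1.0] * dim
    for _ in range(iters):
        Xv = [sum(row[j] * v[j] for j in range(dim)) for row in X]   # X v
        w = [sum(X[i][j] * Xv[i] for i in range(len(X)))             # X^T X v
             for j in range(dim)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v
```

Projecting each delayed window onto the first few such directions yields the noise-reduced reconstructed trajectory used for the Poincaré-section analysis.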
Abstract:
Uncertainties in damping estimates can significantly affect the dynamic response of a given flexible structure. A common practice in linear structural dynamics is to consider a linear viscous damping model as the major energy dissipation mechanism. However, it is well known that other forms of energy dissipation can also affect the structure's dynamic response. The major goal of this paper is to address the effects of the turbulent frictional damping force, also known as drag force, on the dynamic behavior of a typical flexible structure composed of a slender cantilever beam carrying a lumped mass at the tip. First, the system's analytical equation of motion is obtained and solved by employing a perturbation technique. The solution process considers variations of the drag force coefficient and their effects on the system's response. Then, experimental results are presented to demonstrate the effects of the nonlinear quadratic damping due to the turbulent frictional force on the system's dynamic response. In particular, the effects of the quadratic damping on the frequency-response and amplitude-response curves are investigated. Both numerically simulated and experimental results indicate that variations in the drag force coefficient significantly alter the dynamics of the structure under investigation. Copyright (c) 2008 D. G. Silva and P. S. Varoto.
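As a minimal numerical illustration of the drag term's effect (all parameter values are arbitrary placeholders, not the paper's), a single-mode model x'' + 2*zeta*omega*x' + c_drag*x'*|x'| + omega^2*x = 0 can be integrated with a fourth-order Runge-Kutta scheme; larger drag coefficients visibly accelerate the free-decay amplitude reduction:

```python
def simulate_free_decay(c_drag, omega=10.0, zeta=0.005, x0=0.05,
                        dt=1e-3, steps=5000):
    """Free decay of an oscillator with linear viscous plus quadratic
    (drag) damping, integrated with classical RK4; returns x samples."""
    def accel(x, v):
        return -(2.0 * zeta * omega * v + c_drag * v * abs(v)
                 + omega ** 2 * x)
    x, v = x0, 0.0
    xs = []
    for _ in range(steps):
        k1x, k1v = v, accel(x, v)
        k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x,
                                             v + 0.5 * dt * k1v)
        k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x,
                                             v + 0.5 * dt * k2v)
        k4x, k4v = v + dt * k3v, accel(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2.0 * k2x + 2.0 * k3x + k4x) / 6.0
        v += dt * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        xs.append(x)
    return xs
```

Comparing the late-time amplitude for c_drag = 0 against a nonzero coefficient shows the extra dissipation contributed by the quadratic term.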