962 results for Discrete boundary value problems
Abstract:
Purpose - To provide an example of the use of system dynamics within the context of a discrete-event simulation study. Design/methodology/approach - A discrete-event simulation study of a production-planning facility in a gas cylinder-manufacturing plant is presented. The case study evidence incorporates questionnaire responses from sales managers involved in the order-scheduling process. Findings - As the project progressed it became clear that, although the discrete-event simulation would meet the objectives of the study in a technical sense, the organizational problem of "delivery performance" would not be solved by the discrete-event simulation study alone. The case shows how the qualitative outcomes of the discrete-event simulation study led to an analysis using the system dynamics technique. The system dynamics technique was able to model the decision-makers in the sales and production process and provide a deeper understanding of the performance of the system. Research limitations/implications - The case study describes a traditional discrete-event simulation study which incorporated an unplanned investigation using system dynamics. Further case studies, using a planned approach to addressing organizational issues in discrete-event simulation studies, are required. The role of qualitative data in discrete-event simulation studies, together with supplementary tools that incorporate organizational aspects, may then help generate a methodology for discrete-event simulation that incorporates human aspects and so improves its relevance for decision making. Practical implications - It is argued that system dynamics can provide a useful addition to the toolkit of the discrete-event simulation practitioner in helping them incorporate a human aspect in their analysis.
Originality/value - Helps decision makers gain a broader perspective on the tools available to them by showing the use of system dynamics to supplement the use of discrete-event simulation. © Emerald Group Publishing Limited.
Abstract:
There has been a revival of interest in economic techniques to measure the value of a firm through the use of economic value added as a technique for measuring such value to shareholders. This technique, based upon the concept of economic value equating to total value, is founded upon the assumptions of classical liberal economic theory. Such techniques have been subject to criticism both from the point of view of the level of adjustment to published accounts needed to make the technique work and from the point of view of the validity of such techniques in actually measuring value in a meaningful context. This paper critiques economic value added techniques as a means of calculating changes in shareholder value, contrasting such techniques with more traditional techniques of measuring value added. It uses the company Severn Trent plc as an actual example in order to evaluate and contrast the techniques in action. The paper demonstrates discrepancies between the calculated results from using economic value added analysis and those reported using conventional accounting measures. It considers the merits of the respective techniques in explaining shareholder and managerial behaviour and the problems with using such techniques in considering the wider stakeholder concept of value. It concludes that this economic value added technique has merits when compared with traditional accounting measures of performance but that it does not provide the universal panacea claimed by its proponents.
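As a purely illustrative sketch of the contrast the paper draws, the code below computes economic value added (EVA = NOPAT less a charge of WACC times invested capital) alongside a traditional value-added figure. The standard textbook definitions are assumed, and all figures are hypothetical, not Severn Trent's reported accounts.

```python
def economic_value_added(nopat, wacc, invested_capital):
    """EVA: residual income after charging for the cost of capital."""
    return nopat - wacc * invested_capital

def traditional_value_added(sales, bought_in_goods_and_services):
    """Classic value added: sales less bought-in goods and services."""
    return sales - bought_in_goods_and_services

# Hypothetical figures (in millions) -- not Severn Trent's actual accounts.
nopat, wacc, capital = 120.0, 0.08, 1_000.0
sales, bought_in = 900.0, 600.0

eva = economic_value_added(nopat, wacc, capital)   # 120 - 0.08 * 1000 = 40.0
va = traditional_value_added(sales, bought_in)     # 900 - 600 = 300.0
print(f"EVA: {eva}, traditional value added: {va}")
```

The gap between the two figures mirrors the discrepancies the paper reports: EVA charges for capital employed, while the traditional measure does not.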
Abstract:
The spatial pattern of discrete beta-amyloid (A beta) deposits was studied in the superficial laminae of cortical fields of different types and in the hippocampus in 6 cases of Alzheimer's disease (AD). In 41/42 tissues examined, discrete A beta deposits were aggregated into clusters and in 34/41 tissues (25/34 of the cortical tissues), there was evidence for a regular periodicity of the A beta deposit clusters parallel to the tissue boundary. The dimensions of the clusters varied from 400 to > 12,800 microns in different tissues. Although the A beta deposit clusters were larger than predicted, the regular periodicity suggests that they develop in relation to groups of cells associated with specific projections. This would be consistent with the hypothesis that the distribution of discrete A beta deposits in AD could reflect progressive synaptic disconnection along interconnected neuronal pathways. This implies that amyloid deposition could be a response to, rather than a cause of, synaptic disconnection in AD.
Abstract:
In this work the solution of a class of capital investment problems is considered within the framework of mathematical programming. On the basis of the net present value criterion, the problems in question are mainly characterized by the fact that the cost of capital is defined as a non-decreasing function of the investment requirements. Capital rationing and some cases of technological dependence are also included, this approach leading to zero-one non-linear programming problems, for which specifically designed solution procedures, supported by a general branch and bound development, are presented. In the context of both this development and the relevant mathematical properties of the previously mentioned zero-one programs, a generalized zero-one model is also discussed. Finally, a variant of the scheme, connected with the search sequencing of optimal solutions, is presented as an alternative in which reduced storage limitations are encountered.
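A branch and bound development of this kind can be sketched, under simplifying assumptions, for the plainest member of the class: a 0-1 capital-rationing problem maximising total net present value subject to a single budget constraint, with a constant cost of capital rather than the non-decreasing cost function the work actually treats. The function name and figures below are illustrative only.

```python
def best_selection(npvs, costs, budget):
    """Branch and bound for a 0-1 capital-rationing problem:
    maximise total NPV subject to a single budget constraint."""
    n = len(npvs)
    best = [0.0, []]  # best NPV found so far, chosen project indices

    def bound(i, used, value):
        # Optimistic bound: fractional (LP) relaxation of the rest.
        remaining = budget - used
        b = value
        for j in sorted(range(i, n), key=lambda j: -npvs[j] / costs[j]):
            take = min(1.0, remaining / costs[j])
            b += take * npvs[j]
            remaining -= take * costs[j]
            if remaining <= 0.0:
                break
        return b

    def branch(i, used, value, chosen):
        if value > best[0]:
            best[0], best[1] = value, chosen[:]
        if i == n or bound(i, used, value) <= best[0]:
            return                                   # prune this subtree
        if used + costs[i] <= budget:                # branch: take project i
            chosen.append(i)
            branch(i + 1, used + costs[i], value + npvs[i], chosen)
            chosen.pop()
        branch(i + 1, used, value, chosen)           # branch: skip project i

    branch(0, 0.0, 0.0, [])
    return best[0], best[1]

# Illustrative data: three projects, NPVs and outlays, budget of 70.
value, chosen = best_selection([12.0, 10.0, 7.0], [50.0, 40.0, 30.0], 70.0)
# Projects 1 and 2 fit the budget and give the best total NPV, 17.0.
```

The fractional-relaxation bound is one standard choice; the work's own procedures are tailored to its non-linear zero-one structure.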
Abstract:
The first part of the thesis compares Roth's method with other methods, in particular the method of separation of variables and the finite cosine transform method, for solving certain elliptic partial differential equations arising in practice. In particular we consider the solution of steady state problems associated with insulated conductors in rectangular slots. Roth's method has two main disadvantages, namely the slow rate of convergence of the double Fourier series and the restrictive form of the allowable boundary conditions. A combined Roth-separation of variables method is derived to remove the restrictions on the form of the boundary conditions, and various Chebyshev approximations are used to try to improve the rate of convergence of the series. All the techniques are then applied to the Neumann problem arising from balanced rectangular windings in a transformer window. Roth's method is then extended to deal with problems other than those resulting from static fields. First we consider a rectangular insulated conductor in a rectangular slot when the current is varying sinusoidally with time. An approximate method is also developed and compared with the exact method. The approximation is then used to consider the problem of an insulated conductor in a slot facing an air gap. We also consider the exact method applied to the determination of the eddy-current loss produced in an isolated rectangular conductor by a transverse magnetic field varying sinusoidally with time. The results obtained using Roth's method are critically compared with those obtained by other authors using different methods. The final part of the thesis investigates further the application of Chebyshev methods to the solution of elliptic partial differential equations, an area where Chebyshev approximations have rarely been used. A Poisson equation with a polynomial term is treated first, followed by a slot problem in cylindrical geometry.
Abstract:
This study has concentrated on the development of an impact simulation model for use at the sub-national level. The necessity for the development of this model was demonstrated by the growth of local economic initiatives during the 1970s, and the lack of monitoring and evaluation exercises to assess their success and cost-effectiveness. The first stage of research involved confirming that the potential for micro-economic and spatial initiatives existed. This was done by identifying the existence of involuntary structural unemployment. The second stage examined the range of employment policy options from the macro-economic, micro-economic and spatial perspectives, and focused on the need for evaluation of those policies. The need for spatial impact evaluation exercises in respect of other exogenous shocks and structural changes was also recognised. The final stage involved the investigation of current techniques of evaluation and their adaptation for the purpose in hand. This led to the recognition of a gap in the armoury of techniques. The employment-dependency model has been developed to fill that gap, providing a low-budget model, capable of implementation at the small-area level, which generates a vast array of industrially disaggregated data, in terms of employment, employment-income, profits, value-added and gross income, related to levels of United Kingdom final demand, thus providing scope for a variety of impact simulation exercises.
Abstract:
Particulate solids are complex redundant systems which consist of discrete particles. The interactions between the particles are complex and have been the subject of many theoretical and experimental investigations. Investigations of particulate materials have been restricted by the lack of quantitative information on the mechanisms occurring within an assembly. Laboratory experimentation is limited, as information on the internal behaviour can only be inferred from measurements on the assembly boundary or through the use of intrusive measuring devices. In addition, comparisons between test data are uncertain due to the difficulty in reproducing exact replicas of physical systems. Nevertheless, theoretical and technological advances require more detailed material information. Numerical simulation, however, affords access to information on every particle, and hence to the micro-mechanical behaviour within an assembly, and can replicate desired systems. To use a computer program to numerically simulate material behaviour accurately, it is necessary to incorporate realistic interaction laws. This research programme used the finite difference simulation program `BALL', developed by Cundall (1971), which employed linear spring force-displacement laws; it was thus necessary to incorporate more realistic interaction laws. Therefore, this research programme was primarily concerned with the implementation of the normal force-displacement law of Hertz (1882) and the tangential force-displacement laws of Mindlin and Deresiewicz (1953). Within this thesis the contact mechanics theories employed in the program are developed, and the adaptations which were necessary to incorporate these laws are detailed. Verification of the new contact force-displacement laws was achieved by simulating a quasi-static oblique contact and a single-particle oblique impact.
Applications of the program to the simulation of large assemblies of particles are given, and the problems in undertaking quasi-static shear tests, along with the results from two successful shear tests, are described.
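As a hedged illustration of the normal interaction law the thesis incorporates, the sketch below evaluates the classical Hertzian force-displacement relation F = (4/3) * E* * sqrt(R*) * delta^(3/2). The function name and parameter values are assumptions for illustration; the tangential Mindlin-Deresiewicz law is omitted.

```python
import math

def hertz_normal_force(delta, r_eff, e_eff):
    """Hertz (1882) normal contact force between two elastic spheres.

    delta : normal overlap of the spheres (m)
    r_eff : effective radius, 1/R* = 1/R1 + 1/R2 (m)
    e_eff : effective modulus, 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2 (Pa)
    """
    if delta <= 0.0:
        return 0.0  # particles not in contact: no force
    return (4.0 / 3.0) * e_eff * math.sqrt(r_eff) * delta ** 1.5

# Illustrative parameters only (order of magnitude of stiff mm-scale grains).
f = hertz_normal_force(1e-6, r_eff=5e-4, e_eff=4e10)
```

Unlike the linear spring law originally used in `BALL', the Hertzian contact stiffens with overlap: quadrupling the overlap multiplies the force by eight, since F scales as delta^(3/2).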
Abstract:
The traditional method of classifying neurodegenerative diseases is based on the original clinico-pathological concept supported by 'consensus' criteria and data from molecular pathological studies. This review discusses first, current problems in classification resulting from the coexistence of different classificatory schemes, the presence of disease heterogeneity and multiple pathologies, the use of 'signature' brain lesions in diagnosis, and the existence of pathological processes common to different diseases. Second, three models of neurodegenerative disease are proposed: (1) that distinct diseases exist ('discrete' model), (2) that relatively distinct diseases exist but exhibit overlapping features ('overlap' model), and (3) that distinct diseases do not exist and neurodegenerative disease is a 'continuum' in which there is continuous variation in clinical/pathological features from one case to another ('continuum' model). Third, to distinguish between models, the distribution of the most important molecular 'signature' lesions across the different diseases is reviewed. Such lesions often have poor 'fidelity', i.e., they are not unique to individual disorders but are distributed across many diseases consistent with the overlap or continuum models. Fourth, the question of whether the current classificatory system should be rejected is considered and three alternatives are proposed, viz., objective classification, classification for convenience (a 'dissection'), or analysis as a continuum.
Abstract:
Purpose: Our study explores the mediating role of discrete emotions in the relationships between employee perceptions of distributive and procedural injustice, regarding an annual salary raise, and counterproductive work behaviors (CWBs). Design/Methodology/Approach: Survey data were provided by 508 individuals from telecom and IT companies in Pakistan. Confirmatory factor analysis, structural equation modeling, and bootstrapping were used to test our hypothesized model. Findings: We found a good fit between the data and our tested model. As predicted, anger (and not sadness) was positively related to aggressive CWBs (abuse against others and production deviance) and fully mediated the relationship between perceived distributive injustice and these CWBs. Against predictions, however, neither sadness nor anger was significantly related to employee withdrawal. Implications: Our findings provide organizations with an insight into the emotional consequences of unfair HR policies, and the potential implications for CWBs. Such knowledge may help employers to develop training and counseling interventions that support the effective management of emotions at work. Our findings are particularly salient for national and multinational organizations in Pakistan. Originality/Value: This is one of the first studies to provide empirical support for the relationships between in/justice, discrete emotions and CWBs in a non-Western (Pakistani) context. Our study also provides new evidence for the differential effects of outward/inward emotions on aggressive/passive CWBs. © 2012 Springer Science+Business Media, LLC.
Abstract:
We investigate two numerical procedures for the Cauchy problem in linear elasticity, involving the relaxation of either the given boundary displacements (Dirichlet data) or the prescribed boundary tractions (Neumann data) on the over-specified boundary, in the alternating iterative algorithm of Kozlov et al. (1991). The two mixed direct (well-posed) problems associated with each iteration are solved using the method of fundamental solutions (MFS), in conjunction with the Tikhonov regularization method, while the optimal value of the regularization parameter is chosen via the generalized cross-validation (GCV) criterion. An efficient regularizing stopping criterion, which terminates the iterative procedure at the point where the accumulation of noise becomes dominant and the errors in predicting the exact solutions increase, is also presented. The MFS-based iterative algorithms with relaxation are tested on Cauchy problems for isotropic linear elastic materials in various geometries to confirm the numerical convergence, stability, accuracy and computational efficiency of the proposed method.
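The Tikhonov/GCV step of such an algorithm can be sketched, in a hedged way, for a generic discrete ill-posed system Ax = b rather than the MFS matrices of the paper. The function names are illustrative, and the GCV function is taken in its standard form G(lam) = ||A x_lam - b||^2 / trace(I - A(lam))^2, with A(lam) the Tikhonov influence matrix.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

def gcv_lambda(A, b, candidates):
    """Pick the lambda from `candidates` minimising the GCV function."""
    m = A.shape[0]
    s = np.linalg.svd(A, compute_uv=False)  # singular values, computed once
    best_lam, best_g = candidates[0], np.inf
    for lam in candidates:
        x = tikhonov_solve(A, b, lam)
        residual = np.linalg.norm(A @ x - b) ** 2
        # trace(I - A(lam)) in terms of the singular values of A
        trace = m - np.sum(s ** 2 / (s ** 2 + lam ** 2))
        g = residual / trace ** 2
        if g < best_g:
            best_lam, best_g = lam, g
    return best_lam

# Illustrative overdetermined, mildly ill-conditioned test system.
A = np.vander(np.linspace(0.0, 1.0, 12), 5)
b = A @ np.ones(5)
lam = gcv_lambda(A, b, [1e-8, 1e-4, 1e-1])
x = tikhonov_solve(A, b, lam)
```

In practice the normal-equations form is replaced by an SVD-based solve for badly conditioned MFS matrices; the sketch keeps the simplest formulation.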
Abstract:
We propose and investigate a method for the stable determination of a harmonic function from knowledge of its value and its normal derivative on a part of the boundary of the (bounded) solution domain (Cauchy problem). We reformulate the Cauchy problem as an operator equation on the boundary using the Dirichlet-to-Neumann map. To discretize the obtained operator, we modify and employ a method denoted as Classic II given in [J. Helsing, Faster convergence and higher accuracy for the Dirichlet–Neumann map, J. Comput. Phys. 228 (2009), pp. 2578–2576, Section 3], which is based on Fredholm integral equations and Nyström discretization schemes. Then, for stability reasons, to solve the discretized integral equation we use the method of smoothing projection introduced in [J. Helsing and B.T. Johansson, Fast reconstruction of harmonic functions from Cauchy data using integral equation techniques, Inverse Probl. Sci. Eng. 18 (2010), pp. 381–399, Section 7], which makes it possible to solve the discretized operator equation in a stable way with minor computational cost and high accuracy. With this approach, for sufficiently smooth Cauchy data, the normal derivative can also be accurately computed on the part of the boundary where no data is initially given.
Abstract:
In this paper, free surface problems of Stefan-type for the parabolic heat equation are investigated using the method of fundamental solutions. The additional measurement necessary to determine the free surface could be a boundary temperature, a heat flux or an energy measurement. Both one- and two-phase flows are investigated. Numerical results are presented and discussed.
Abstract:
We consider a Cauchy problem for the heat equation, where the temperature field is to be reconstructed from the temperature and heat flux given on a part of the boundary of the solution domain. We employ a Landweber-type method proposed in [2], where a sequence of mixed well-posed problems is solved, one at each iteration step, to obtain a stable approximation to the original Cauchy problem. We develop an efficient boundary integral equation method for the numerical solution of these mixed problems, based on the method of Rothe. Numerical examples are presented both with exact and noisy data, showing the efficiency and stability of the proposed procedure and approximations.
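The Landweber-type iteration underlying such schemes can be sketched for a generic linear system Ax = b: repeatedly update x by omega * A^T (b - A x), with omega below 2/||A||^2, and for noisy data stop by the discrepancy principle. This is the generic iteration only, not the boundary-integral realisation via Rothe's method used in the paper; names and parameters are illustrative.

```python
import numpy as np

def landweber(A, b, omega=None, noise_level=0.0, tau=1.1, max_iter=500):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k),
    stopped early by the discrepancy principle when the data are noisy."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step, < 2/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = b - A @ x
        if noise_level > 0.0 and np.linalg.norm(r) <= tau * noise_level:
            break  # regularising stop: do not fit the noise
        x = x + omega * A.T @ r
    return x

# Illustrative well-posed system to show convergence of the iteration.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0])
x = landweber(A, b, max_iter=2000)  # converges towards [1, 1]
```

For ill-posed problems such as this Cauchy problem, the iteration count itself acts as the regularisation parameter, which is why the early stop matters.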
Abstract:
The shape of a plane acoustical sound-soft obstacle is detected from knowledge of the far field pattern for one time-harmonic incident field. Two methods based on solving a system of integral equations for the incoming wave and the far field pattern are investigated. Properties of the integral operators required in order to apply regularization, i.e. injectivity and denseness of the range, are proved.
Abstract:
Potential applications of high-damping and high-stiffness composites have motivated extensive research on the effects of negative-stiffness inclusions on the overall properties of composites. Recent theoretical advances have been based on the Hashin-Shtrikman composite models, one-dimensional discrete viscoelastic systems and a two-dimensional nested triangular viscoelastic network. In this paper, we further analyze the two-dimensional triangular structure containing pre-selected negative-stiffness components to study its underlying deformation mechanisms and stability. Major new findings are structure-deformation evolution with respect to the magnitude of negative stiffness under shear loading and the phenomena related to dissipation-induced destabilization and inertia-induced stabilization, according to Lyapunov stability analysis. The evolution shows strong correlations between stiffness anomalies and deformation modes. Our stability results reveal that stable damping peaks, i.e. stably extreme effective damping properties, are achievable under hydrostatic loading when the inertia is greater than a critical value. Moreover, destabilization induced by elemental damping is observed with the critical inertia. Regardless of elemental damping, when the inertia is less than the critical value, a weaker system instability is identified.