901 results for closed-form solution
Abstract:
The capital structure and regulation of financial intermediaries is an important topic for practitioners, regulators and academic researchers. In general, theory predicts that firms choose their capital structures by balancing the benefits of debt (e.g., tax and agency benefits) against its costs (e.g., bankruptcy costs). However, when traditional corporate finance models have been applied to insured financial institutions, the results have generally predicted corner solutions (all equity or all debt) to the capital structure problem. This paper studies the impact and interaction of deposit insurance, capital requirements and tax benefits on a bank's choice of optimal capital structure. Using a contingent claims model to value the firm and its associated claims, we find that there exists an interior optimal capital ratio in the presence of deposit insurance, taxes and a minimum fixed capital standard. Banks voluntarily choose to maintain capital in excess of the minimum required in order to balance the risks of insolvency (especially the loss of future tax benefits) against the benefits of additional debt. Because we derive a closed-form solution, our model provides useful insights on several current policy debates, including revisions to the regulatory framework for GSEs, tax policy in general and the tax exemption for credit unions.
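The abstract does not reproduce the paper's valuation formulas, but the contingent-claims machinery it builds on can be illustrated with the textbook Merton model, in which equity is a European call on the bank's assets. This is a minimal sketch under that simplifying assumption; the function name and parameters are illustrative, not the paper's richer model with deposit insurance, taxes and a capital requirement.

```python
import math
from statistics import NormalDist

def merton_equity_value(assets, debt_face, sigma, r, t):
    """Equity as a European call on bank assets (textbook Merton model).

    assets    : current market value of bank assets
    debt_face : face value of debt due at time t
    sigma     : asset volatility, r : risk-free rate, t : horizon (years)
    """
    phi = NormalDist().cdf
    sig_sqrt_t = sigma * math.sqrt(t)
    d1 = (math.log(assets / debt_face) + (r + 0.5 * sigma**2) * t) / sig_sqrt_t
    d2 = d1 - sig_sqrt_t
    # Black-Scholes call: asset value times P(in the money), less the
    # discounted debt repayment weighted by its exercise probability.
    return assets * phi(d1) - debt_face * math.exp(-r * t) * phi(d2)
```

For a well-capitalized bank the equity value approaches the asset value net of discounted debt, which is the closed-form intuition the paper exploits when trading off leverage against insolvency risk.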
Abstract:
This paper studies the problem of determining the positions of beacon nodes in Local Positioning Systems (LPSs), for which there are no inter-beacon distance measurements available and neither the mobile node nor any of the stationary nodes has positioning or odometry information. The common solution is implemented using a mobile node capable of measuring its distance to the stationary beacon nodes within a sensing radius. Many authors have implemented heuristic methods based on optimization algorithms to solve the problem. However, such methods require a good initial estimate of the node positions in order to find the correct solution. In this paper we present a new method to calculate the inter-beacon distances, and hence the beacon positions, based on the linearization of the trilateration equations into a closed-form solution which does not require any approximate initial estimate. The simulations and field evaluations show good estimation of the beacon node positions.
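The linearization trick underlying this kind of closed-form trilateration can be sketched as follows: subtracting one range equation ||x - a_i||^2 = d_i^2 from another eliminates the quadratic term in x, leaving a linear least-squares system that needs no initial estimate. This is a generic sketch of the standard technique (names are illustrative), not the paper's inter-beacon method.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Closed-form least-squares position from anchor positions and ranges.

    Subtracting the first equation ||x - a_0||^2 = d_0^2 from each
    ||x - a_i||^2 = d_i^2 yields the linear system
        2 (a_i - a_0) . x = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2,
    solved directly without any initial guess.
    """
    anchors = np.asarray(anchors, dtype=float)
    d2 = np.asarray(dists, dtype=float) ** 2
    a0, d0 = anchors[0], d2[0]
    A = 2.0 * (anchors[1:] - a0)
    b = d0 - d2[1:] + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # least squares absorbs noise
    return x
```

With noisy ranges and more than the minimum number of anchors, the same least-squares solve averages out measurement error.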
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Many variables that are of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed-form solution and hence numerical integration must be used to obtain maximum likelihood estimates of the model parameters. Techniques for implementing the numerical integration are available but are computationally intensive, requiring a large amount of processing time that increases with the number of clusters (or individuals) in the data, and they are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is clearly a need to evaluate the existing procedures for estimating the multinomial logit random effects model in terms of accuracy, efficiency and computing time. Computing time has significant implications for which approach researchers will prefer. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modelling the employment status of women using a GLMM, over three waves of the HILDA survey.
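The quadrature step that makes these models expensive can be sketched for the simplest case: the marginal likelihood of one cluster in a binary random-intercept logit GLMM, integrated with plain (non-adaptive) Gauss-Hermite quadrature. The binary case is shown for brevity; the multinomial model replaces the logistic with a softmax, and adaptive quadrature additionally recentres the nodes per cluster. All names here are illustrative.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_loglik(y, eta, sigma, n_points=15):
    """Marginal log-likelihood of one cluster in a random-intercept
    logit GLMM, via Gauss-Hermite quadrature:

        int prod_t p(y_t | eta_t + u) phi(u; 0, sigma^2) du
        ~= (1/sqrt(pi)) * sum_k w_k * prod_t p(y_t | eta_t + sqrt(2)*sigma*z_k)

    y     : 0/1 responses for the cluster's repeated observations
    eta   : fixed-effect linear predictor per observation
    sigma : standard deviation of the random intercept
    """
    z, w = hermgauss(n_points)        # nodes/weights for weight e^{-z^2}
    u = np.sqrt(2.0) * sigma * z      # change of variable to N(0, sigma^2)
    lin = eta[:, None] + u[None, :]   # linear predictor at each node
    p = 1.0 / (1.0 + np.exp(-lin))
    cond = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return np.log(np.sum(w * cond) / np.sqrt(np.pi))
```

The full log-likelihood sums this over clusters, which is why computing time grows with the number of individuals in the survey.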
Abstract:
The energy balancing capability of cooperative communication is utilized to solve the energy hole problem in wireless sensor networks. We first propose a cooperative transmission strategy, where intermediate nodes participate in two cooperative multi-input single-output (MISO) transmissions with the node at the previous hop and a selected node at the next hop, respectively. Then, we study the power allocation optimization problems of the cooperative transmission strategy by examining two different approaches: network lifetime maximization (NLM) and energy consumption minimization (ECM). For NLM, the numerical optimal solution is derived, and a search algorithm for a suboptimal solution is provided for when the optimal solution does not exist. For ECM, a closed-form solution is obtained. Numerical and simulation results show that both approaches yield much longer network lifetimes than SISO transmission strategies and other cooperative communication schemes. Moreover, NLM, which features energy balancing, outperforms ECM, which focuses on energy efficiency, in terms of network lifetime.
Abstract:
Purpose - This research note aims to present a summary of research concerning the economic lot scheduling problem (ELSP). Design/methodology/approach - The paper's approach is to review over 100 selected studies published in the last 15 years (1997-2012), which are then grouped under different research themes. Findings - Five research themes are identified and insights for future studies are reported at the end of this paper. Research limitations/implications - The motivation for preparing this research note is to summarize key research studies in this field since 1997, when the ELSP was verified to be NP-hard. Originality/value - The ELSP is an important scheduling problem that has been studied since the 1950s. Because of the complexity of delivering a feasible analytical closed-form solution, many studies in the last two decades employed heuristic algorithms in order to come up with good and acceptable solutions. As a consequence, the solution approaches are quite diversified. The major contribution of this paper is to provide researchers who are interested in this area with a quick reference guide to the reviewed studies.
Abstract:
In this paper, we use the quantum Jensen-Shannon divergence as a means of measuring the information-theoretic dissimilarity of graphs and thus develop a novel graph kernel. In quantum mechanics, the quantum Jensen-Shannon divergence can be used to measure the dissimilarity of quantum systems specified in terms of their density matrices. We commence by computing the density matrix associated with a continuous-time quantum walk over each graph being compared. In particular, we adopt the closed-form solution of the density matrix introduced in Rossi et al. (2013) [27,28] to reduce the computational complexity and to avoid the cumbersome task of simulating the quantum walk evolution explicitly. With the density matrices for a pair of graphs to hand, the quantum graph kernel between the two graphs is defined using the quantum Jensen-Shannon divergence between the mixed states they represent. We evaluate the performance of our kernel on several standard graph datasets from both bioinformatics and computer vision. The experimental results demonstrate the effectiveness of the proposed quantum graph kernel.
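The divergence itself has a compact form: QJSD(rho, sigma) = S((rho + sigma)/2) - (S(rho) + S(sigma))/2, with S the von Neumann entropy. The sketch below computes it for two graphs; as a deliberate simplification it uses the Laplacian normalized to unit trace as the density matrix (an assumption of this sketch, not the closed-form continuous-time quantum-walk construction of Rossi et al.), and it assumes graphs of equal size.

```python
import numpy as np

def von_neumann_entropy(rho, eps=1e-12):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > eps]              # 0 log 0 = 0 by convention
    return -np.sum(lam * np.log2(lam))

def qjsd(rho, sigma):
    """Quantum Jensen-Shannon divergence between two density matrices;
    bounded in [0, 1] with log base 2."""
    mixed = (rho + sigma) / 2.0
    return von_neumann_entropy(mixed) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))

def graph_density_matrix(adj):
    """Graph Laplacian scaled to unit trace -- a simple density-matrix
    surrogate used for this illustration only."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    return lap / np.trace(lap)

def qjsd_kernel(adj_a, adj_b):
    """Similarity between two graphs: 1 - QJSD of their density matrices."""
    return 1.0 - qjsd(graph_density_matrix(adj_a), graph_density_matrix(adj_b))
```

A graph compared with itself gives divergence zero, hence kernel value one; increasingly different spectra push the kernel toward zero.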
Abstract:
2010 Mathematics Subject Classification: 94A17, 62B10, 62F03.
Abstract:
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of gaining performance by increasing CPU frequency is now a thing of the past. Researchers from both industry and academia are turning their focus to multi-core architectures for continued improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those in which response time is as critical as the logical correctness of computational results. In addition, a variety of stringent constraints such as power/energy consumption, peak temperature and reliability are also imposed on these systems. Therefore, real-time scheduling plays a critical role in the system-level design of such computing systems. We started our research by addressing timing constraints for real-time applications on multi-core platforms, and developed both partitioned and semi-partitioned scheduling algorithms to schedule fixed-priority, periodic, hard real-time tasks on multi-core platforms. We then extended our research by taking temperature constraints into consideration. We developed a closed-form solution to capture the temperature dynamics of a given periodic voltage schedule on multi-core platforms, and also developed three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research by incorporating the power/energy constraint with thermal awareness into our research problem. We investigated the energy estimation problem on multi-core platforms, and developed a computationally efficient method to calculate the energy consumption of a given voltage schedule on a multi-core platform.
In this dissertation, we present our research in detail and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
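The flavor of a closed-form thermal solution for a periodic schedule can be sketched with the simplest lumped RC model: under constant power, C dT/dt = P - (T - T_amb)/R integrates to T(t) = T_ss + (T0 - T_ss) e^{-t/RC} with T_ss = T_amb + P R, and a periodic schedule is handled by chaining this formula phase by phase. This is a single-node illustration with made-up parameter names and values, not the dissertation's multi-core model.

```python
import math

def temp_after(t, t0, power, r_th, c_th, t_amb=25.0):
    """Closed-form temperature of a lumped RC thermal node under
    constant power:  T(t) = T_ss + (T0 - T_ss) * exp(-t / (R*C)),
    with steady state T_ss = T_amb + P*R."""
    t_ss = t_amb + power * r_th
    return t_ss + (t0 - t_ss) * math.exp(-t / (r_th * c_th))

def peak_temp_periodic(phases, t0, r_th, c_th, t_amb=25.0, cycles=50):
    """Propagate the closed form across a periodic (duration, power)
    schedule, tracking the peak temperature instead of simulating the
    ODE numerically.  Per-phase peaks occur at phase boundaries because
    T(t) is monotone within each constant-power phase."""
    temp, peak = t0, t0
    for _ in range(cycles):
        for duration, power in phases:
            temp = temp_after(duration, temp, power, r_th, c_th, t_amb)
            peak = max(peak, temp)
    return peak
```

A feasibility check against a peak temperature constraint then reduces to comparing this closed-form peak with the limit, with no per-time-step integration.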
Abstract:
The first objective of this research was to develop closed-form and numerical probabilistic methods of analysis that can be applied to otherwise conventional analyses of unreinforced and geosynthetic reinforced slopes and walls. These probabilistic methods explicitly include the effects of random variability of soil and reinforcement properties, spatial variability of the soil, and cross-correlation between soil input parameters on the probability of failure. The quantitative impact of simultaneously considering the influence of random and/or spatial variability in soil properties in combination with cross-correlation in soil properties is investigated for the first time in the research literature. Depending on the magnitude of these statistical descriptors, margins of safety based on conventional notions of safety may be very different from margins of safety expressed in terms of probability of failure (or reliability index). The thesis work also shows that intuitive notions of margin of safety using the conventional factor of safety and the probability of failure can be brought into alignment when cross-correlation between soil properties is considered in a rigorous manner. The second objective of this thesis work was to develop a general closed-form solution to compute the true probability of failure (or reliability index) of a simple linear limit state function with one load term and one resistance term, expressed first in general probabilistic terms and then migrated to an LRFD format for the purpose of LRFD calibration. The formulation considers contributions to the probability of failure due to model type, uncertainty in bias values, bias dependencies, uncertainty in estimates of nominal values for correlated and uncorrelated load and resistance terms, and the average margin of safety expressed as the operational factor of safety (OFS). Bias is defined as the ratio of measured to predicted value.
Parametric analyses were carried out to show that ignoring possible correlations between random variables can lead to conservative (safe) values of resistance factor in some cases and in other cases to non-conservative (unsafe) values. Example LRFD calibrations were carried out using different load and resistance models for the pullout internal stability limit state of steel strip and geosynthetic reinforced soil walls together with matching bias data reported in the literature.
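The baseline version of such a closed form, for the uncorrelated case, is the standard textbook result for a linear limit state g = R - Q with lognormal resistance R and load Q; the thesis extends this with correlations, bias dependencies and the operational factor of safety. The sketch below implements only that baseline (names are illustrative).

```python
import math
from statistics import NormalDist

def reliability_index_lognormal(mu_r, cov_r, mu_q, cov_q):
    """Closed-form reliability index for g = R - Q with uncorrelated
    lognormal load Q and resistance R:

        beta = ln[(mu_R/mu_Q) * sqrt((1 + V_Q^2)/(1 + V_R^2))]
               / sqrt(ln[(1 + V_R^2)(1 + V_Q^2)])

    mu_* are means and cov_* are coefficients of variation (V).
    """
    num = math.log((mu_r / mu_q) * math.sqrt((1 + cov_q**2) / (1 + cov_r**2)))
    den = math.sqrt(math.log((1 + cov_r**2) * (1 + cov_q**2)))
    return num / den

def probability_of_failure(beta):
    """P_f = Phi(-beta), the standard-normal tail below -beta."""
    return NormalDist().cdf(-beta)
```

With biases included, mu_R and mu_Q would be the bias-corrected means (bias times nominal value), which is how the formulation migrates into an LRFD calibration format.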
Abstract:
It is shown that the application of the Poincaré-Bertrand formula, when made in a suitable manner, produces the solution of certain singular integral equations very quickly; the method of arriving at these solutions is otherwise too complicated. Two singular integral equations are considered. One of these equations has a Cauchy-type kernel, and the other is an equation which appears in waveguide theory and the theory of dislocations. A different approach is also made here to solve the singular integral equation of waveguide theory; this involves the use of the inversion formula for the Cauchy-type singular integral equation and reduction to a system of Hilbert problems for two unknowns, which can be decoupled very easily to obtain the closed-form solution of the integral equation at hand. The methods of the present paper avoid all the complicated approaches to solving the singular integral equation of waveguide theory known to date.
Abstract:
Under certain specific assumptions it has been observed that the basic equations of magneto-elasticity in the case of plane deformation lead to a biharmonic equation, as in the classical plane theory of elasticity. The method of solving boundary value problems has been suitably modified, and a unified approach to solving such problems has been suggested, with special reference to problems involving thin infinite plates with a hole. Closed-form expressions have been obtained for the stresses due to a uniform magnetic field present in the plane of deformation of a thin infinite conducting plate with a circular hole, the plate being deformed by a tension acting parallel to the direction of the magnetic field.
Abstract:
An analytical surface-ray tracing has been carried out for the prolate ellipsoid of revolution using a novel geodesic constant method. This method yields closed form expressions for all the ray-geometric parameters required for the UTD mutual coupling calculations for the antennas located arbitrarily in three dimensions, on the ellipsoid of revolution.
Abstract:
Two mixed boundary value problems associated with the two-dimensional Laplace equation, arising in the study of scattering of surface waves in deep water (or interface waves in two superposed fluids) in the linearised set-up by discontinuities in the surface (or interface) boundary conditions, are handled for solution with the aid of the Wiener-Hopf technique. The technique is applied to a slightly more general differential equation to be solved under general boundary conditions, with a passage to the limit carried out in a manner that finally gives rise to the solutions of the original problems. The first problem involves one discontinuity, while the second problem involves two discontinuities. The reflection coefficient is obtained in closed form for the first problem and approximately for the second. The behaviour of the reflection coefficient for both deep-water problems against the incident wave number is depicted in a number of figures. It is observed that while the reflection coefficient for the first problem steadily increases with the wave number, that for the second problem exhibits oscillatory behaviour and vanishes at some discrete values of the wave number. Thus, there exist incident wave numbers for which total transmission takes place for the second problem.
Abstract:
Closed-form analytical expressions are derived for the reflection and transmission coefficients for the problem of scattering of surface water waves by a sharp discontinuity in the surface boundary conditions, for the case of deep water. The method involves the use of the Havelock-type expansion of the velocity potential along with an analysis to solve a Carleman-type singular integral equation over a semi-infinite range. This method of solution is an alternative to the Wiener-Hopf technique used previously.