934 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation


Relevance: 50.00%

Abstract:

A numerical method based on integral equations is proposed and investigated for the Cauchy problem for the Laplace equation in 3-dimensional smooth bounded doubly connected domains. To numerically reconstruct a harmonic function from knowledge of the function and its normal derivative on the outer of two closed boundary surfaces, the harmonic function is represented as a single-layer potential. Matching this representation against the given data, a system of boundary integral equations is obtained to be solved for two unknown densities. This system is rewritten over the unit sphere under the assumption that each of the two boundary surfaces can be mapped smoothly and one-to-one to the unit sphere. For the discretization of this system, Weinert’s method (PhD, Göttingen, 1990) is employed, which generates a Galerkin type procedure for the numerical solution, and the densities in the system of integral equations are expressed in terms of spherical harmonics. Tikhonov regularization is incorporated, and numerical results are included showing the efficiency of the proposed procedure.
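
The final regularized solve can be illustrated in isolation. Below is a sketch of Tikhonov-regularized least squares on a tiny made-up 2x2 system (the paper's actual systems come from the Galerkin discretization over spherical harmonics; the matrix here is only illustrative):

```python
# Sketch: Tikhonov-regularized normal equations for an ill-conditioned
# system A x = b, solved as (A^T A + lam * I) x = A^T b.
# The 2x2 matrix below is invented, not from the paper.

def solve_2x2(M, v):
    """Solve a 2x2 linear system M x = v by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - v[1] * M[0][1]) / det,
            (v[1] * M[0][0] - v[0] * M[1][0]) / det]

def tikhonov(A, b, lam):
    """Return x minimising ||A x - b||^2 + lam ||x||^2."""
    # Normal-equation matrix A^T A + lam I and right-hand side A^T b.
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    return solve_2x2(AtA, Atb)

# Nearly singular matrix: unregularized inversion would amplify noise.
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0001]
x_reg = tikhonov(A, b, lam=1e-6)
```

The regularization parameter damps the component of the solution along the near-null direction of A while leaving the well-conditioned component essentially untouched.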

Relevance: 50.00%

Abstract:

This dissertation analyzes hospital efficiency using various econometric techniques. The first essay provides additional and recent evidence of contract management behavior in the U.S. hospital industry. Unlike previous studies, which focus on either an input-demand equation or the cost function of the firm, this paper estimates the two jointly using a system of nonlinear equations. Moreover, it addresses the longitudinal problem of institutions adopting contract management in different years by creating a matched control group of non-adopters with the same longitudinal distribution as the group under study. The estimation procedure finds that labor, and not capital, is the preferred input in U.S. hospitals regardless of managerial contract status, with institutions that adopt contract management benefiting from lower labor inefficiencies than the simulated non-adopters. These results suggest that while there is a propensity for expense-preference behavior towards the labor input, contract-managed firms are able to introduce efficiencies over conventional, owner-controlled firms. Using data for the years 1998 through 2007, the second essay investigates the production technology and cost efficiency of Florida hospitals. A stochastic frontier multiproduct cost function is estimated in order to test for economies of scale, economies of scope, and relative cost efficiency. The results suggest that small hospitals experience economies of scale, while large and medium-sized institutions do not. The empirical findings show that Florida hospitals enjoy significant scope economies regardless of size. Lastly, the evidence suggests a link between hospital size and relative cost efficiency. The results of the study imply that state policy makers should focus on increasing hospital scale for smaller institutions while facilitating the expansion of multiproduct production for larger hospitals.
The third and final essay employs a two-stage approach to analyzing the efficiency of hospitals in the state of Florida. In the first stage, the Banker, Charnes, and Cooper model of Data Envelopment Analysis is employed to derive overall technical efficiency scores for each non-specialty hospital in the state. Additionally, input slacks are calculated and reported in order to identify the factors of production that each hospital may be over-utilizing. In the second stage, we employ a Tobit regression model to analyze the effects that a number of structural, managerial, and environmental factors may have on a hospital's efficiency. The results indicate that most non-specialty hospitals in the state are operating away from the efficient production frontier. They also indicate that the structural makeup, managerial choices, and level of competition Florida hospitals face have an impact on their overall technical efficiency.
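
As a sketch of frontier-based efficiency scoring: the essay's Banker-Charnes-Cooper DEA model needs a linear-programming solver, so the simpler free-disposal-hull (FDH) input-oriented score is shown instead, on invented hospital data:

```python
# Sketch: input-oriented free-disposal-hull (FDH) efficiency scores,
# a simpler relative of the BCC DEA model used in the essay, shown
# only to illustrate frontier-based scoring. The hospital data below
# are invented.

def fdh_input_efficiency(inputs, outputs, k):
    """Score of unit k: the smallest uniform scaling of k's inputs
    under which some peer producing at least k's outputs still fits."""
    best = 1.0
    for j in range(len(inputs)):
        # Peer j must weakly dominate unit k's whole output vector.
        if all(yj >= yk for yj, yk in zip(outputs[j], outputs[k])):
            # Smallest scaling of k's inputs that still covers peer j.
            theta = max(xj / xk for xj, xk in zip(inputs[j], inputs[k]))
            best = min(best, theta)
    return best

# Three hypothetical hospitals: (labour, beds) inputs, (cases,) output.
X = [[100.0, 50.0], [80.0, 40.0], [120.0, 45.0]]
Y = [[500.0], [500.0], [450.0]]
scores = [fdh_input_efficiency(X, Y, k) for k in range(3)]
```

A score of 1.0 marks a unit on the frontier; a score below 1.0 says the unit could, in principle, produce its outputs with proportionally fewer inputs.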

Relevance: 40.00%

Abstract:

Smoothing the potential energy surface is a general and commonly applied strategy for structure optimization. We propose a combination of soft-core potential energy functions and a variation of the diffusion equation method to smooth potential energy surfaces, which is applicable to complex systems such as protein structures. The performance of the method was demonstrated by comparison with simulated annealing, using the refinement of the undecapeptide Cyclosporin A as a test case. Simulations were repeated many times with different initial conditions and structures, since the methods are heuristic and the results are only meaningful in a statistical sense.
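
The idea of diffusion-equation smoothing can be sketched in one dimension: evolving a sampled potential under the heat equation progressively washes out shallow minima while keeping the deep wells. The double-well-plus-ripple potential below is a toy stand-in, not the soft-core molecular surfaces of the paper:

```python
# Sketch: diffusion-equation smoothing of a sampled 1-D potential.
# Explicit Euler steps of the heat equation u_t = u_xx blur the
# surface, removing shallow local minima; the potential here is a
# toy double well with a small ripple, not a molecular surface.
import math

def diffuse(u, steps, alpha=0.2):
    """Explicit finite-difference heat-equation steps (alpha <= 0.5
    for stability); endpoint values held fixed."""
    u = list(u)
    for _ in range(steps):
        u = [u[0]] + [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

def count_minima(u):
    """Number of strict interior local minima of the sampled curve."""
    return sum(1 for i in range(1, len(u) - 1) if u[i - 1] > u[i] < u[i + 1])

xs = [-2.0 + 4.0 * i / 200 for i in range(201)]
# Double well x^4 - 2 x^2 with a small high-frequency ripple on top.
pot = [x**4 - 2 * x**2 + 0.05 * math.cos(20 * x) for x in xs]
smoothed = diffuse(pot, steps=200)
```

After smoothing, only the two deep wells survive; the ripple-induced spurious minima are gone, which is exactly what makes the smoothed surface easier to optimize.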

Relevance: 40.00%

Abstract:

The financial literature and the financial industry often use zero-coupon yield curves as input for testing hypotheses, pricing assets, or managing risk, and assume the provided data to be accurate. We analyse how the methodology and the sample selection criteria used to estimate the zero-coupon yield term structure affect the resulting volatility of spot rates with different maturities. We obtain the volatility term structure using both historical and EGARCH volatilities. As input for these volatilities we consider our own spot-rate estimation from GovPX bond data and three popular interest rate data sets: from the Federal Reserve Board, from the US Department of the Treasury (H15), and from Bloomberg. We find strong evidence that the resulting zero-coupon yield volatility estimates, as well as the correlation coefficients among spot and forward rates, depend significantly on the data set. We observe differences that are relevant in economic terms when the volatilities are used to price derivatives.
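
The "historical volatilities" input can be sketched directly: annualise the standard deviation of daily spot-rate changes. (The EGARCH estimates require a fitted model and are not shown; the short rate series below is invented.)

```python
# Sketch: annualised historical volatility of a spot-rate series from
# daily rate changes -- one of the two volatility measures the paper
# compares. The rate series is invented.
import math
import statistics

def historical_vol(rates, periods_per_year=252):
    """Standard deviation of daily changes, scaled to an annual figure."""
    changes = [b - a for a, b in zip(rates, rates[1:])]
    return statistics.stdev(changes) * math.sqrt(periods_per_year)

spot = [0.0500, 0.0502, 0.0499, 0.0503, 0.0501, 0.0504]
vol = historical_vol(spot)
```

Running the same computation on spot rates bootstrapped from different data sets is precisely where the paper finds the estimates diverge.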

Relevance: 40.00%

Abstract:

The aim of the present study was to test a hypothetical model examining whether dispositional optimism exerts a moderating or a mediating effect between personality traits and quality of life in Portuguese patients with chronic diseases. A sample of 540 patients was recruited from central hospitals in various districts of Portugal. All patients completed self-report questionnaires assessing socio-demographic and clinical variables, personality, dispositional optimism, and quality of life. Structural equation modeling (SEM) was used to analyze the moderating and mediating effects. The results suggest that dispositional optimism plays a mediating rather than a moderating role between personality traits and quality of life, suggesting that “the expectation that good things will happen” contributes to better general well-being and better mental functioning.
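
The mediation logic can be sketched with ordinary least squares on synthetic data. This is a deliberately simplified, single-predictor illustration of the indirect-effect idea; real SEM, as used in the study, fits all paths jointly and also controls the direct path. All variable values below are invented:

```python
# Sketch: simplified mediation check. If optimism M fully mediates the
# effect of trait X on quality of life Y, the indirect effect a * b
# (X -> M times M -> Y) should approximate the total effect c (X -> Y).
# Data are synthetic; full SEM would also fit the direct X -> Y path.

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]        # personality trait
M = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1]        # optimism, roughly 2 X
Y = [6.5, 11.6, 18.7, 24.1, 29.2, 36.4]    # quality of life, roughly 3 M

a = slope(X, M)      # path: trait -> optimism
b = slope(M, Y)      # path: optimism -> quality of life
c = slope(X, Y)      # total effect: trait -> quality of life
indirect = a * b     # close to c when mediation is (near) complete
```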

Relevance: 40.00%

Abstract:

We study the existence and multiplicity of positive radial solutions of the Dirichlet problem for the Minkowski-curvature equation −div(∇v/√(1 − |∇v|²)) = f(|x|, v) in B_R, v = 0 on ∂B_R, where B_R is a ball in R^N (N ≥ 2). According to the behaviour of f = f(r, s) near s = 0, we prove the existence of either one, two or three positive solutions. All results are obtained by reduction to an equivalent non-singular one-dimensional problem, to which variational methods can be applied in a standard way.

Relevance: 40.00%

Abstract:

An improved class of Boussinesq systems of arbitrary order, using a wave-surface-elevation and velocity-potential formulation, is derived. Dissipative effects and wave generation due to a time-dependent varying seabed are included, and thus high-order source functions are considered. To reduce the order of the system while maintaining some dispersive characteristics of the higher-order models, an extra O(μ^(2n+2)) term (n ∈ ℕ) is included in the velocity potential expansion. We introduce a nonlocal continuous/discontinuous Galerkin FEM with interior penalty terms to compute numerical solutions of the improved fourth-order models. The spatial variables are discretized using continuous P2 Lagrange elements. A predictor-corrector scheme with an initialization given by an explicit Runge-Kutta method is used for the time integration. Moreover, a CFL-type condition is deduced for the linear problem with constant bathymetry. To demonstrate the applicability of the model, several test cases are considered. Improved stability is achieved.
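
The time-integration pattern described above, a predictor-corrector pair started by an explicit Runge-Kutta step, can be sketched on a scalar test equation. The Boussinesq system itself would supply the right-hand side through the FEM spatial discretisation; u' = −u below is only a stand-in:

```python
# Sketch: Adams-Bashforth-2 predictor + trapezoidal corrector, with the
# first step supplied by classical RK4 -- the time-stepping pattern
# described in the abstract, applied to the scalar test problem
# u' = -u, u(0) = 1, whose exact solution is exp(-t).
import math

def rk4_step(f, t, u, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, u)
    k2 = f(t + h / 2, u + h / 2 * k1)
    k3 = f(t + h / 2, u + h / 2 * k2)
    k4 = f(t + h, u + h * k3)
    return u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def abm2(f, u0, t0, t_end, h):
    """AB2 predictor, trapezoidal (AM2) corrector, RK4 initialisation."""
    ts, us = [t0], [u0]
    us.append(rk4_step(f, t0, u0, h))
    ts.append(t0 + h)
    while ts[-1] < t_end - 1e-12:
        t, u = ts[-1], us[-1]
        f_n, f_nm1 = f(t, u), f(ts[-2], us[-2])
        u_pred = u + h * (1.5 * f_n - 0.5 * f_nm1)       # predictor
        u_corr = u + h / 2 * (f_n + f(t + h, u_pred))    # corrector
        us.append(u_corr)
        ts.append(t + h)
    return ts, us

ts, us = abm2(lambda t, u: -u, 1.0, 0.0, 1.0, 0.01)
```

For the full model, the explicit nature of this scheme is what makes the deduced CFL-type condition on the time step necessary.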

Relevance: 40.00%

Abstract:

We compared the indirect immunofluorescence assay (IFA) with the Western blot (Wb) as a confirmatory method to detect anti-retrovirus antibodies (HIV-1 and HTLV-I/II). Positive and negative HIV-1 and HTLV-I/II serum samples from different risk populations were studied. Sensitivity, specificity, positive and negative predictive values, and the kappa index were calculated to assess the efficiency of the IFA versus the Wb. The following cell lines were used as sources of viral antigens: H9 (HTLV-IIIb), MT-2 and MT-4 (persistently infected with HTLV-I), and MO-T (persistently infected with HTLV-II). Sensitivity and specificity for HIV-1 were 96.80% and 98.60%, respectively, while the positive and negative predictive values were 99.50% and 92.00%, respectively. No differences were found in HIV IFA performance between the various populations studied. For the IFA HTLV system, sensitivity and specificity were 97.91% and 100%, respectively, with positive and negative predictive values of 100% and 97.92%. Moreover, the sensitivity of the IFA for HTLV-I/II proved to be higher when the samples were tested simultaneously against both antigens (HTLV-I-MT-2 and HTLV-II-MO-T). The overall IFA efficiency for HIV-1 and HTLV-I/II-MT-2 antibody detection proved to be very satisfactory, with an excellent correlation with the Wb (kappa indices of 0.93 and 0.98, respectively). These results confirm that the IFA is a sensitive and specific alternative method for the confirmatory diagnosis of HIV-1 and HTLV-I/II infection in populations at different levels of risk of acquiring the infection, and suggest that the IFA could be included in the serologic diagnostic algorithm.
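
The statistics quoted above all follow directly from a 2x2 confusion table comparing the candidate test with the reference method. A sketch with illustrative counts (not the study's data):

```python
# Sketch: sensitivity, specificity, predictive values, and Cohen's
# kappa from a 2x2 table of test results vs. a reference standard.
# The counts below are illustrative, not the study's data.

def screening_stats(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV, NPV, kappa)."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)          # true positives among diseased
    spec = tn / (tn + fp)          # true negatives among healthy
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_obs = (tp + tn) / n
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_obs - p_chance) / (1 - p_chance)
    return sens, spec, ppv, npv, kappa

sens, spec, ppv, npv, kappa = screening_stats(tp=90, fp=2, fn=3, tn=105)
```

Kappa values above roughly 0.8 are conventionally read as excellent agreement, which is the sense in which the 0.93 and 0.98 figures above support the IFA as an alternative confirmatory test.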

Relevance: 40.00%

Abstract:

At present, the building stock accounts for 40% of the total energy consumed across the European Union. Forecasts point to growth in the construction sector, particularly building construction, which suggests that energy consumption in this area will rise. Important measures, such as Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings, pave the way for reducing energy needs and greenhouse gas emissions. The directive sets out objectives for increasing the energy efficiency of the building stock, with the goal that, from 2020 onwards, all new buildings be energy-efficient and have a nearly zero energy balance, with particular emphasis on offsetting consumption with on-site energy production from renewable sources. This new requirement, termed the nearly zero energy building, stands as a new incentive on the path towards energy sustainability. The techniques and technologies used in building design will have a positive impact on the life-cycle analysis, namely in minimising environmental impact and rationalising energy consumption. Accordingly, this work analyses the applicability of the nearly zero energy building concept to a large service building and its impact over a 50-year life cycle. Starting from an analysis of studies on energy consumption and on nearly zero-energy buildings already built in Portugal, a life-cycle analysis was developed for a service building, resulting in a set of proposals for optimising its energy efficiency and harvesting renewable energy.
The proposed measures were evaluated with the aid of different applications, such as DIALux, IES VE and PVsyst, in order to verify their impact by comparison with the building's initial energy consumption. Under the initial conditions, the 50-year life-cycle analysis of the building yielded an operational energy consumption of 6 MWh/m² and corresponding CO2 emissions of 1.62 t/m². With the proposed optimisation measures applied, consumption and the corresponding CO2 emissions were reduced to 5.2 MWh/m² and 1.37 t/m², respectively. Although consumption was reduced by the proposed energy optimisation measures, it was concluded that the photovoltaic system sized to supply the building cannot meet the building's energy needs at the end of the 50 years.

Relevance: 40.00%

Abstract:

In this paper we present the operational matrices of the left Caputo fractional derivative, the right Caputo fractional derivative, and the Riemann–Liouville fractional integral for shifted Legendre polynomials. We develop an accurate numerical algorithm to solve the two-sided space–time fractional advection–dispersion equation (FADE) based on a spectral shifted Legendre tau (SLT) method combined with the derived operational matrices. The fractional derivatives are described in the Caputo sense. We propose a spectral SLT method for both the temporal and spatial discretizations of the two-sided space–time FADE. This technique reduces the two-sided space–time FADE to a system of algebraic equations, which simplifies the problem. Numerical experiments are carried out to confirm the spectral accuracy and efficiency of the proposed algorithm. By selecting relatively few Legendre polynomial degrees, we are able to obtain very accurate approximations, demonstrating the utility of the new approach over other numerical methods.
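
The basis underlying the operational matrices can be evaluated with the classical three-term recurrence. Below is a sketch for shifted Legendre polynomials on [0, 1]; the operational matrices themselves, which encode how Caputo differentiation and Riemann–Liouville integration act on coefficients in this basis, are beyond a short example:

```python
# Sketch: shifted Legendre polynomials on [0, 1], defined as
# P_n(2x - 1), evaluated via Bonnet's three-term recurrence
# (n+1) P_{n+1}(t) = (2n+1) t P_n(t) - n P_{n-1}(t).

def shifted_legendre(n, x):
    """Value of the n-th shifted Legendre polynomial at x in [0, 1]."""
    t = 2.0 * x - 1.0
    p_prev, p = 1.0, t          # P_0(t) and P_1(t)
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * t * p - k * p_prev) / (k + 1)
    return p

# Endpoint values P_n(1) = 1 and P_n(-1) = (-1)^n carry over to x = 1, 0.
vals = [shifted_legendre(n, 1.0) for n in range(5)]
```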

Relevance: 40.00%

Abstract:

Master's dissertation in MPA – Public Administration

Relevance: 40.00%

Abstract:

The division problem consists of allocating an amount M of a perfectly divisible good among a group of n agents. Sprumont (1991) showed that if agents have single-peaked preferences over their shares, the uniform rule is the unique strategy-proof, efficient, and anonymous rule. Ching and Serizawa (1998) extended this result by showing that the set of single-plateaued preferences is the largest domain, for all possible values of M, admitting a rule (the extended uniform rule) satisfying strategy-proofness, efficiency and symmetry. We identify, for each M and n, a maximal domain of preferences under which the extended uniform rule also satisfies the properties of strategy-proofness, efficiency, continuity, and "tops-onlyness". These domains (called weakly single-plateaued) are strictly larger than the set of single-plateaued preferences. However, their intersection, when M varies from zero to infinity, coincides with the set of single-plateaued preferences.
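
Sprumont's uniform rule itself is easy to state computationally: under excess demand every reported peak is capped at a common level, under excess supply it is floored at one, with the level chosen so the shares exhaust M. A sketch with a hypothetical profile of peaks, using bisection for the common level:

```python
# Sketch: the uniform rule for the division problem with single-peaked
# preferences (the extended rule above generalises this to plateaus).
# The peak profile and M below are hypothetical.

def uniform_rule(peaks, M, tol=1e-10):
    """Shares of M under the uniform rule, via bisection on the
    common level lam at which peaks are capped or floored."""
    demand = sum(peaks)
    lo, hi = 0.0, max(max(peaks), M)
    if demand >= M:   # excess demand: share_i = min(peak_i, lam)
        share = lambda lam: [min(p, lam) for p in peaks]
    else:             # excess supply: share_i = max(peak_i, lam)
        share = lambda lam: [max(p, lam) for p in peaks]
    while hi - lo > tol:          # total share is monotone in lam
        mid = (lo + hi) / 2
        if sum(share(mid)) < M:
            lo = mid
        else:
            hi = mid
    return share((lo + hi) / 2)

# Excess demand: peaks sum to 10 but only M = 6 is available, so the
# two large demands are rationed down to a common level of 2.5.
shares = uniform_rule([1.0, 4.0, 5.0], 6.0)
```

No agent can gain by misreporting a peak: overstating it cannot raise a capped share, and understating it only moves the agent away from the peak, which is the intuition behind strategy-proofness here.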

Relevance: 40.00%

Abstract:

ABSTRACT: BACKGROUND: There is no recommendation to screen ferritin levels in blood donors, even though several studies have noted the high prevalence of iron deficiency after blood donation, particularly among menstruating females. Furthermore, some clinical trials have shown that non-anaemic women with unexplained fatigue may benefit from iron supplementation. Our objective is to determine the clinical effect of iron supplementation on fatigue in female blood donors without anaemia, but with a mean serum ferritin ≤ 30 ng/ml. METHODS/DESIGN: In a double-blind randomised controlled trial, we will measure the blood count and ferritin level of women under age 50 who donate blood to the University Hospital of Lausanne Blood Transfusion Department, at the time of the donation and after 1 week. One hundred and forty donors with a ferritin level ≤ 30 ng/ml and a haemoglobin level ≥ 120 g/l (non-anaemic) a week after the donation will be included in the study and randomised. A one-month course of oral ferrous sulphate (80 mg/day of elemental iron) will be introduced vs. placebo. Self-reported fatigue will be measured using a visual analogue scale. Secondary outcomes are: fatigue score (Fatigue Severity Scale), maximal aerobic power (Chester Step Test), quality of life (SF-12), and mood disorders (Prime-MD). Haemoglobin and ferritin concentrations will be monitored before and after the intervention. DISCUSSION: Iron deficiency is a potential problem for all blood donors, especially menstruating women. To our knowledge, no other intervention study has yet evaluated the impact of iron supplementation on subjective symptoms after a blood donation. TRIAL REGISTRATION: NCT00689793.

Relevance: 40.00%

Abstract:

The aim of this work was to evaluate a dot enzyme-linked immunosorbent assay (dot-ELISA) using excretory-secretory antigens from the larval stages of Toxocara canis for the diagnosis of toxocariasis. A secondary aim was to establish the optimal conditions for its use in an area with a high prevalence of human T. canis infection. The dot-ELISA test was standardised using different concentrations of the antigen fixed on nitrocellulose paper strips and increasing dilutions of the serum and conjugate. Both the dot-ELISA and the standard ELISA were tested in parallel with the same batch of sera from controls and from individuals living in the affected area. The best results were obtained with 1.33 µg/mL of antigen, dilutions of 1/80 for the samples and controls, and a dilution of 1/5,000 for the anti-human IgG-peroxidase conjugate. All steps of the procedure were performed at room temperature. The agreement between ELISA and dot-ELISA was 85% and the kappa index was 0.72. The dot-ELISA test described here is rapid, easy to perform, and does not require expensive equipment. Thus, this test is suitable for the serological diagnosis of human T. canis infection in field surveys and in the primary health care centres of endemic regions.

Relevance: 40.00%

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is unobservable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: in contrast to the other methods, it requires neither discretization nor simulation of the process. This is the Continuous Empirical Characteristic Function (ECF) estimator, based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps, and thus it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts arise naturally from the issues investigated and the results obtained in the first. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, and indeed of the whole thesis, is a closed-form expression for the joint unconditional characteristic function for stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process, in the framework of affine jump-diffusion models, boils down to defining the intensity of the compound Poisson process (a constant or some function of the state variables) and choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any ground for comparison, there is no way to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the estimator works, the next question immediately appears: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional one. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of parameters of stochastic volatility jump-diffusion models.
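
The object at the core of the estimator is straightforward to compute. A sketch of the (one-dimensional) empirical characteristic function, evaluated on a small invented sample; the thesis matches its joint, bi-dimensional analogue against the model's closed-form characteristic function:

```python
# Sketch: the empirical characteristic function of a sample,
# phi_hat(u) = (1/n) * sum_j exp(i u x_j), the object the ECF
# estimator matches against the model's closed-form characteristic
# function. The sample below is invented.
import cmath

def ecf(u, xs):
    """Empirical characteristic function of the sample xs at u."""
    return sum(cmath.exp(1j * u * x) for x in xs) / len(xs)

xs = [0.3, -0.1, 0.7, 0.2, -0.4, 0.5]
phi0 = ecf(0.0, xs)   # every characteristic function equals 1 at u = 0
phi1 = ecf(1.0, xs)   # and has modulus at most 1 everywhere
```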