965 results for Linear boundary value control problems
Abstract:
In this thesis the impact of R&D expenditures on firm market value and stock returns is examined, using a sample of European listed firms for the period 2000-2009. I apply different linear and GMM econometric estimations to test the impact of R&D on market prices, and construct country portfolios based on firms' ratio of R&D expenditure to market capitalization to study the effect of R&D on stock returns. The results confirm that more innovative firms have a better market valuation; investors consider R&D an asset that produces long-term benefits for corporations. The impact of R&D on firm value differs across countries: it is significantly modulated by the financial and legal environment in which firms operate. Other firm and industry characteristics also seem to play a determinant role when investors value R&D. First, only larger firms with lower financial leverage that operate in highly innovative sectors decide to disclose their R&D investment. Second, the markets assign a premium to small firms operating in hi-tech sectors compared to larger enterprises in low-tech industries. On the other hand, I provide empirical evidence indicating that highly R&D-intensive firms may generally exacerbate mispricing problems related to firm valuation. As R&D contributes to the estimation of future stock returns, portfolios comprising high R&D-intensive stocks may earn significant excess returns compared to less innovative ones after controlling for size and book-to-market risk. Further, the most innovative firms are generally riskier in terms of stock volatility, but not systematically riskier than low-tech firms. Firms that operate in Continental Europe suffer more mispricing than their Anglo-Saxon peers but are less volatile, other things being equal. The sectors in which firms operate are determinant even for the impact of R&D on stock returns; this effect is much stronger in hi-tech industries.
Abstract:
The collapse of linear polyelectrolyte chains in a poor solvent: when does a collapsing polyelectrolyte collect its counterions? The collapse of polyions in a poor solvent is a complex process and an active research subject in the theoretical polyelectrolyte community. The complexity is due to the subtle interplay between hydrophobic effects, electrostatic interactions, entropy elasticity, intrinsic excluded volume, and specific counterion and co-ion properties. Long-range Coulomb forces can obscure single-molecule properties. The approach presented here is to use just a small amount of screening salt in combination with very high sample dilution in order to screen intermolecular interactions while preserving intramolecular interactions as far as possible (polyelectrolyte concentration cp ≤ 12 mg/L, salt concentration Cs = 10^-5 mol/L). This approach has not previously been described in the literature. During collapse, the polyion undergoes a drastic change in size along with a strong reduction of free counterions in solution. Therefore, light scattering was utilized to obtain the size of the polyion, while a conductivity setup was developed to monitor the progress of counterion collection by the polyion. Partially quaternized PVPs below and above the Manning limit were investigated and compared to the collapse of their uncharged precursor. The collapses were induced by an isorefractive solvent/non-solvent mixture consisting of 1-propanol and 2-pentanone, with a nearly constant dielectric constant. The solvent quality for the uncharged polyion could be quantified, which, for the first time, allowed the experimental investigation of the effect of electrostatic interaction prior to and during polyion collapse. Given that the Manning parameter M for QPVP4.3 is as low as lB/c = 0.6 (lB being the Bjerrum length and c the mean contour distance between two charges), no counterion binding should occur.
However, the Walden product decreases upon the first addition of non-solvent, and this decrease accelerates when the structural collapse sets in. Since the dielectric constant of the solvent remains virtually constant during the chain collapse, the counterion binding is entirely caused by the reduction in the polyion chain dimensions. The collapse is shifted to lower wn with higher degrees of quaternization, as the samples QPVP20 and QPVP35 show (M = 2.8 and 4.9, respectively). The combination of light scattering and conductivity measurements revealed for the first time that polyion chains already collect their counterions well above the theta dimension, as soon as the dimensions start to shrink. Because only small amounts of screening salt are present, strong electrostatic interactions bias dynamic as well as static light scattering measurements. An extended Zimm formula was derived to account for this interaction and to obtain the real chain dimensions. The effective degree of dissociation g could be obtained semi-quantitatively by combining these extrapolated static light scattering data with the conductivity measurements. One can conclude that the expansion factor a and the effective degree of ionization of the polyion are mutually dependent. In the good solvent regime, g of QPVP4.3, QPVP20 and QPVP35 decreased in the order 1 > g4.3 > g20 > g35. The low values of g for QPVP20 and QPVP35 are assumed to be responsible for the earlier collapse of the more highly quaternized samples. Collapse theory predicts dipole-dipole attraction to increase accordingly and even predicts a collapse in the good solvent regime. Exactly this was observed for the QPVP35 sample. The experimental results were compared to a theory of uniform spherical collapse induced by concomitant counterion binding, newly developed by M. Muthukumar and A. Kundagrami. The theory agrees qualitatively with the location of the phase boundary as well as with the trend of increasing expansion with increasing degree of quaternization.
However, the experimentally determined g for the samples QPVP4.3, QPVP20 and QPVP35 decreases linearly with the degree of quaternization, whereas this theory predicts an almost constant value.
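The Manning parameter quoted in the abstract, M = lB/c, is straightforward to compute. The sketch below uses illustrative values (a Bjerrum length lB ≈ 3.5 nm, plausible for the low-permittivity propanol/pentanone mixture, and a contour length per monomer b ≈ 0.25 nm); both numbers are our assumptions, not taken from the thesis, but they reproduce the quoted M values for the three samples.

```python
def manning_parameter(l_bjerrum_nm, monomer_length_nm, charge_fraction):
    """M = lB / c, with c = b / f the mean contour distance between charges
    for a chain with monomer spacing b and fraction f of charged monomers."""
    c = monomer_length_nm / charge_fraction
    return l_bjerrum_nm / c

L_B = 3.5   # nm, illustrative Bjerrum length in the low-permittivity solvent mixture
B = 0.25    # nm, illustrative contour length per monomer

for name, f in [("QPVP4.3", 0.043), ("QPVP20", 0.20), ("QPVP35", 0.35)]:
    M = manning_parameter(L_B, B, f)
    status = "condensation expected" if M > 1 else "no condensation (M < 1)"
    print(f"{name}: M = {M:.1f} -> {status}")
```

With these assumed constants the sketch yields M ≈ 0.6, 2.8 and 4.9, matching the values in the abstract: only QPVP4.3 sits below the Manning limit M = 1.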
Abstract:
This thesis is motivated by biological questions concerning the behavior of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs; afterwards the potential is reset to a specific value x_0. In applications it is sometimes possible to observe a diffusion process X between the spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S must be determined to fully specify the model. One way to approach this problem is to regard x_0 and S as parameters of a statistical model and to estimate them. In this thesis four different cases are discussed, in which we assume that the membrane potential X between spikes is, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process. Moreover, we observe the times between successive spikes, which we regard as iid hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly. Furthermore, using LAN theory, the optimality of these estimators is shown. In the Ornstein-Uhlenbeck and Cox-Ingersoll-Ross cases we choose a minimum distance method based on comparing the empirical and true Laplace transforms with respect to a Hilbert space norm. We prove that all estimators are strongly consistent and asymptotically normal.
In the final chapter we examine the efficiency of the minimum distance estimators on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
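The inter-spike intervals in this model are iid first-passage times of a diffusion from the reset value x_0 to the threshold S. A minimal Euler-Maruyama sketch for the Ornstein-Uhlenbeck case follows; the parametrization theta*(mu - x) and every numeric value are purely illustrative, and this is a generic simulation sketch, not the thesis's estimation procedure.

```python
import math
import random

def ou_first_passage(x0, S, theta, mu, sigma, dt=1e-3, t_max=100.0, rng=random):
    """Simulate dX_t = theta*(mu - X_t) dt + sigma dB_t by Euler-Maruyama and
    return the first time X reaches the threshold S (None if not reached)."""
    x, t = x0, 0.0
    sqdt = math.sqrt(dt)
    while t < t_max:
        x += theta * (mu - x) * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
        t += dt
        if x >= S:
            return t
    return None

random.seed(1)
# Illustrative parameters: reset x_0 = 0, threshold S = 1, mean level mu = 1.2
# (mu > S ensures the threshold is reached quickly in most sample paths).
times = [ou_first_passage(0.0, 1.0, theta=1.0, mu=1.2, sigma=0.5) for _ in range(200)]
times = [t for t in times if t is not None]
print(f"mean inter-spike interval ~ {sum(times) / len(times):.2f}")
```

Such simulated samples of hitting times are exactly the kind of data on which the minimum distance estimators can be benchmarked.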
Abstract:
To determine the local control and complication rates for children with papillary and/or macular retinoblastoma progressing after chemotherapy and undergoing stereotactic radiotherapy (SRT) with a micromultileaf collimator.
Abstract:
A small proportion of individuals with non-specific low back pain (NSLBP) develop persistent problems. Up to 80% of the total costs for NSLBP are owing to chronic NSLBP. Psychosocial factors have been described as important in the transition from acute to chronic NSLBP. Guidelines recommend the use of the Acute Low Back Pain Screening Questionnaire (ALBPSQ) and the Örebro Musculoskeletal Pain Screening Questionnaire (ÖMPSQ) to identify individuals at risk of developing persistent problems, such as long-term absence from work, persistent restriction in function or persistent pain. These instruments can be used with a cutoff value, where patients with values above the threshold are further assessed with a more comprehensive examination.
Abstract:
This paper considers a wide class of semiparametric problems with a parametric part for some covariate effects and repeated evaluations of a nonparametric function. Special cases in our approach include marginal models for longitudinal/clustered data, conditional logistic regression for matched case-control studies, multivariate measurement error models, generalized linear mixed models with a semiparametric component, and many others. We propose profile-kernel and backfitting estimation methods for these problems, derive their asymptotic distributions, and show that in likelihood problems the methods are semiparametric efficient. Although profiling and backfitting are not equivalent in general, we show that with our methods they are asymptotically equivalent. We also consider pseudolikelihood methods in which some nuisance parameters are estimated by a different algorithm. The proposed methods are evaluated using simulation studies and applied to the Kenya hemoglobin data.
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e., the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms using linear and cubic interpolation functions.
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of X-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
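The integral-conserving idea behind such gridding can be illustrated with a minimal sketch: accumulate the histogrammed data into its cumulative integral, interpolate that cumulative at the new bin edges, and difference it. For simplicity the sketch uses plain piecewise-linear interpolation of the cumulative sum, whereas the paper's algorithm uses a parametrized Hermitian curve to control overshoot; all function names are ours.

```python
def rebin_conserve(edges_in, values_in, edges_out):
    """Re-bin histogrammed densities onto new bin edges so that the total
    integral is conserved, by interpolating the cumulative integral."""
    # cumulative integral evaluated at the input bin edges
    cum = [0.0]
    for v, a, b in zip(values_in, edges_in[:-1], edges_in[1:]):
        cum.append(cum[-1] + v * (b - a))

    def cum_at(x):
        # piecewise-linear interpolation of the cumulative integral
        if x <= edges_in[0]:
            return cum[0]
        if x >= edges_in[-1]:
            return cum[-1]
        for i in range(len(edges_in) - 1):
            if edges_in[i] <= x <= edges_in[i + 1]:
                w = (x - edges_in[i]) / (edges_in[i + 1] - edges_in[i])
                return cum[i] + w * (cum[i + 1] - cum[i])

    # differencing the interpolated cumulative gives bin-averaged densities
    return [(cum_at(b) - cum_at(a)) / (b - a)
            for a, b in zip(edges_out[:-1], edges_out[1:])]

# coarse 2-bin histogram re-sampled onto a finer 4-bin grid
vals = rebin_conserve([0.0, 1.0, 2.0], [3.0, 1.0], [0.0, 0.5, 1.0, 1.5, 2.0])
print(vals)
```

Because the new values are differences of one monotone cumulative curve, the total integral is conserved by construction, and a shape-preserving (Hermitian) interpolant for `cum_at` would additionally rule out negative densities for positive data.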
Abstract:
An optimizing compiler's internal representation fundamentally affects the clarity, efficiency and feasibility of the optimization algorithms employed by the compiler. Static Single Assignment (SSA), as a state-of-the-art program representation, has great advantages but can still be improved. This dissertation explores the domain of single assignment beyond SSA and presents two novel program representations: Future Gated Single Assignment (FGSA) and Recursive Future Predicated Form (RFPF). Both FGSA and RFPF embed control flow and data flow information, enabling efficient traversal of program information and thus leading to better and simpler optimizations. We introduce the future value concept, the design basis of both FGSA and RFPF, which permits a consumer instruction to be encountered before the producer of its source operand(s) in a control flow setting. We show that FGSA is efficiently computable using a series of T1/T2/TR transformations, yielding an expected linear time algorithm that combines the construction of the pruned single assignment form with liveness analysis for both reducible and irreducible graphs. As a result, the approach yields an average reduction of 7.7%, with a maximum of 67%, in the number of gating functions compared to the pruned SSA form on the SPEC2000 benchmark suite. We present a solid and near-optimal framework for performing the inverse transformation from single assignment programs. We demonstrate the importance of unrestricted code motion and present RFPF. We develop algorithms which enable instruction movement in acyclic as well as cyclic regions, and show the ease of performing optimizations such as Partial Redundancy Elimination on RFPF.
Abstract:
For 126 days, 850 lb steers were fed diets of corn, corn silage, and ground hay containing either 0%, 4%, or 8% wet distillers solubles obtained from an Iowa dry mill ethanol plant. Addition of distillers solubles resulted in a linear decrease in feed consumption. Gains were increased 3.2% by feeding 4% distillers solubles and decreased 6.4% by feeding 8%. Compared to the control diet, feed required per pound of gain was reduced 5% by the low level of distillers solubles and 1.5% by the high level. Feeding distillers solubles had no effect on carcass measurements. It was concluded that wet distillers solubles have value as a feed for cattle and can replace a portion of corn grain and supplemental nitrogen in a corn-based finishing diet for beef cattle. The decreased performance of steers fed the 8% level suggests that there may be a maximum amount of wet distillers solubles that can be fed to finishing cattle.
Abstract:
The popularity of Online Social Networks has recently been overshadowed by the privacy problems they pose. Users are becoming increasingly vigilant about the information they disclose and strongly oppose the use of their information for commercial purposes. Nevertheless, as long as the network is offered to users for free, providers have little choice but to generate revenue through personalized advertising in order to remain financially viable. Our study empirically investigates the ways out of this deadlock. Using conjoint analysis, we find that privacy is indeed important for users. We identify three groups of users with different utility patterns: Unconcerned Socializers, Control-conscious Socializers and the Privacy-concerned. Our results provide relevant insights into how network providers can capitalize on different user preferences by specifically addressing the needs of distinct groups in the form of various premium accounts. Overall, our study is the first attempt to assess the value of privacy in monetary terms in this context.
Abstract:
AIM To assess the prevalence of vascular dementia, mixed dementia and Alzheimer's disease in patients with atrial fibrillation, and to evaluate the accuracy of the Hachinski ischemic score for these subtypes of dementia. METHODS A nested case-control study was carried out. A total of 103 of 784 consecutive patients evaluated for cognitive status at the Ambulatory Geriatric Clinic had a diagnosis of atrial fibrillation. Controls without atrial fibrillation were randomly selected from the remaining 681 patients using 1:2 matching for sex, age and education. RESULTS The prevalence of vascular dementia was twice as high in patients with atrial fibrillation as in controls (21.4% vs 10.7%, P = 0.024). Alzheimer's disease was also more frequent in the group with atrial fibrillation (12.6% vs 7.3%, P = 0.046), whereas mixed dementia had a similar distribution. The Hachinski ischemic score discriminated poorly between dementia subtypes, with misclassification rates between 46% (95% CI 28-66) and 70% (95% CI 55-83). In patients with atrial fibrillation, these rates ranged from 55% (95% CI 32-77) to 69% (95% CI 39-91). In patients in whom the diagnosis of dementia was excluded, the Hachinski ischemic score suggested the presence of vascular dementia in 11% and mixed dementia in 30%. CONCLUSIONS Vascular dementia and Alzheimer's disease, but not mixed dementia, are more prevalent in patients with atrial fibrillation. The discriminative accuracy of the Hachinski ischemic score for dementia subtypes in atrial fibrillation is poor, with a significant proportion of misclassifications.
Abstract:
We introduce a new boundary layer formalism on the basis of which a class of exact solutions to the Navier–Stokes equations is derived. These solutions describe laminar boundary layer flows past a flat plate under the assumption of one homogeneous direction, such as the classical swept Hiemenz boundary layer (SHBL), the asymptotic suction boundary layer (ASBL) and the oblique impingement boundary layer. The linear stability of these new solutions is investigated, uncovering new results for the SHBL and the ASBL. Previously, each of these flows had been described with its own formalism and coordinate system, such that the solutions could not be transformed into each other. Using a new compound formalism, we are able to show that the ASBL is the physical limit of the SHBL with wall suction when the chordwise velocity component vanishes while the homogeneous sweep velocity is maintained. A corresponding non-dimensionalization is proposed, which allows conversion of the new Reynolds number definition to the classical ones. Linear stability analysis for the new class of solutions reveals a compound neutral surface which contains the classical neutral curves of the SHBL and the ASBL. It is shown that the linearly most unstable Görtler–Hämmerlin modes of the SHBL smoothly transform into Tollmien–Schlichting modes as the chordwise velocity vanishes. These results are useful for transition prediction of the attachment-line instability, especially concerning the use of suction to stabilize boundary layers of swept-wing aircraft.
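Of the flows named above, the ASBL admits a well-known closed-form exact solution of the Navier-Stokes equations, u(y) = U∞(1 − exp(−v0 y/ν)) for constant wall suction velocity v0, whose displacement thickness is exactly ν/v0. The sketch below (illustrative parameter values, not taken from the paper) verifies this by numerical integration.

```python
import math

def asbl_profile(y, u_inf, v0, nu):
    """Exact ASBL streamwise velocity u(y) = U_inf * (1 - exp(-v0*y/nu)),
    an exact Navier-Stokes solution for constant wall suction velocity v0."""
    return u_inf * (1.0 - math.exp(-v0 * y / nu))

# Illustrative values: air-like kinematic viscosity, 1 cm/s suction.
u_inf, v0, nu = 1.0, 0.01, 1.5e-5
delta_star_exact = nu / v0  # ASBL displacement thickness, exactly nu/v0

# numerical check: delta* = integral of (1 - u/U_inf) dy (trapezoidal rule)
h, n = 1e-5, 2000  # integrate up to y = 0.02, far beyond delta*
f = [1.0 - asbl_profile(i * h, u_inf, v0, nu) / u_inf for i in range(n + 1)]
delta_star_num = h * (sum(f) - 0.5 * (f[0] + f[-1]))
print(delta_star_num, delta_star_exact)
```

Because the profile depends on y only through v0 y/ν, ν/v0 is the natural length scale behind the Reynolds number definitions discussed in the abstract.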
Abstract:
This article centers on the computational performance of the continuous and discontinuous Galerkin time stepping schemes for general first-order initial value problems in R^n with continuous nonlinearities. We briefly review a recent existence result for discrete solutions from [6], and provide a numerical comparison of the two time discretization methods.
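For the scalar linear test problem y' = λy, the lowest-order members of the two families reduce to familiar one-step methods: dG(0) coincides with implicit (backward) Euler, and cG(1) with the trapezoidal (Crank-Nicolson) rule. A minimal sketch of this comparison (ours, not the paper's implementation):

```python
import math

def dg0_step(y, lam, dt):
    """One dG(0) step for y' = lam*y: the piecewise-constant trial space
    makes this identical to implicit (backward) Euler."""
    return y / (1.0 - lam * dt)

def cg1_step(y, lam, dt):
    """One cG(1) step for y' = lam*y: continuous piecewise-linear trial
    functions give the trapezoidal (Crank-Nicolson) update."""
    return y * (1.0 + 0.5 * lam * dt) / (1.0 - 0.5 * lam * dt)

# decay test problem y' = -y, y(0) = 1; exact value at t = 1 is e^{-1}
lam, n = -1.0, 1000
dt = 1.0 / n
y_dg, y_cg = 1.0, 1.0
for _ in range(n):
    y_dg = dg0_step(y_dg, lam, dt)
    y_cg = cg1_step(y_cg, lam, dt)
print(abs(y_dg - math.exp(-1.0)), abs(y_cg - math.exp(-1.0)))
```

Running the sketch shows the expected convergence orders: the dG(0) error is O(dt) while the cG(1) error is O(dt^2).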
Abstract:
Koopman et al. (2014) developed a method to consistently decompose gross exports in value-added terms that accommodates the infinite repercussions of international and inter-sector transactions. This provides a better understanding of trade in value added in global value chains than does the conventional gross exports method, which is affected by double-counting problems. However, the new framework is based on monetary input-output (IO) tables and cannot distinguish prices from quantities; thus, it is unable to consider financial adjustments through the exchange market. In this paper, we propose a framework based on a physical IO system, characterized by its linear programming equivalent, that can clarify the various complexities relevant to the existing indicators and is proved to be consistent with Koopman's results when the physical decompositions are evaluated in monetary terms. While international monetary tables are typically described in current U.S. dollars, the physical framework can elucidate the impact of price adjustments through the exchange market. An iterative procedure to calculate the exchange rates is proposed, and we show that the physical framework is also convenient for considering indicators associated with greenhouse gas (GHG) emissions.
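The accounting that underlies such value-added export decompositions is the Leontief inverse: the value added embodied in exports is v̂ (I − A)^{-1} e, where A is the input coefficient matrix, v the value-added shares and e gross exports. A minimal two-sector sketch with made-up numbers (this illustrates only the standard monetary-IO accounting, not the paper's physical/LP framework):

```python
def leontief_inverse_2x2(A):
    """(I - A)^{-1} for a 2x2 input coefficient matrix A."""
    a, b = 1.0 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1.0 - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Illustrative 2-sector economy (all numbers made up for the sketch):
A = [[0.2, 0.3],
     [0.1, 0.4]]
v = [0.7, 0.3]    # value-added shares (= 1 - column sums of A)
e = [100.0, 50.0] # gross exports by sector

L = leontief_inverse_2x2(A)
# value added originating in sector i embodied in exports: v_i * sum_j L[i][j] * e[j]
va_in_exports = [v[i] * sum(L[i][j] * e[j] for j in range(2)) for i in range(2)]
print(va_in_exports, sum(va_in_exports))
```

Because v here equals one minus the column sums of A, the sector contributions sum exactly to gross exports, which is the consistency property ("the decomposition exhausts gross exports") that Koopman-style methods generalize to multi-country tables.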