928 results for PRACTICAL APPLICATIONS


Relevance:

60.00%

Publisher:

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. Several estimation methodologies deal with the estimation of latent variables. One appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and results obtained in the first. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: which jump process should be used to model returns of the S&P500? In the framework of affine jump-diffusion models, the decision about the jump process boils down to defining the intensity of the compound Poisson process, either a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, at least three distributions of the jump size are currently used for the asset log-prices: normal, exponential, and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if the S&P500 index is to be modelled by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already arisen when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, there is no reason to be confident that our parameter estimates coincide with the true parameters of the models.
The conclusion of the second chapter provides one more reason to perform such a test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models from asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter demonstrates that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises immediately: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, one. As a result, the preference for one or the other depends on the model to be estimated; thus, in some cases the computational effort can be reduced without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
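As a concrete illustration of the estimator discussed above, the following minimal Python sketch builds the continuous ECF objective: the empirical characteristic function of the data is matched against a model characteristic function under an exponential weight. A Gaussian toy model stands in for the closed-form joint unconditional characteristic function of the affine SVJD model, purely to show the structure of the procedure; all numbers are illustrative.

```python
# Minimal sketch of a continuous empirical characteristic function (ECF)
# estimator. The thesis uses the closed-form joint unconditional CF of an
# affine SV jump-diffusion; a Gaussian toy model stands in for it here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(0.05, 0.2, size=2000)            # simulated "observed" returns

u = np.linspace(-10, 10, 201)                         # grid in transform space
du = u[1] - u[0]
ecf = np.exp(1j * np.outer(u, returns)).mean(axis=1)  # empirical CF

def model_cf(u, mu, sigma):
    # CF of a Gaussian; replace with the joint unconditional CF of the SVJD model
    return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

def objective(theta):
    mu, log_sigma = theta
    diff = ecf - model_cf(u, mu, np.exp(log_sigma))
    w = np.exp(-u ** 2)                               # weight keeps the integral finite
    return (w * np.abs(diff) ** 2).sum() * du         # discretized ECF distance

res = minimize(objective, x0=[0.0, np.log(0.1)], method="Nelder-Mead")
print("mu, sigma =", res.x[0], np.exp(res.x[1]))
```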

Relevance:

60.00%

Publisher:

Abstract:

Primary prevention of cardiovascular disease by physicians is achieved through the management of individual risk factors. Eligibility for statin treatment is based on both an estimate of the risk of developing cardiovascular disease and the LDL-cholesterol level. Three risk scores are commonly used: the PROCAM score, the Framingham score, and the European SCORE. In Switzerland, the Swiss lipid and atherosclerosis group (Groupe Suisse Lipides et Athérosclérose, GSLA) recommends using the PROCAM score in the first instance, with an adjustment of the risk level for Switzerland. A survey also showed that PROCAM is the score most widely used in Switzerland. This article discusses the specific features of these scores and their practical applications with regard to the prescription of statins in primary prevention. The consequences and potential benefits of applying these scores in Switzerland are also addressed.
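The following sketch illustrates, in Python, the general shape of a score-based eligibility rule of the kind discussed above. The risk bands and LDL cut-offs are placeholders, not the GSLA recommendations or any published guideline values.

```python
# Illustrative sketch of score-based statin eligibility in primary prevention.
# All thresholds below are hypothetical placeholders; in practice the adjusted
# PROCAM risk estimate and guideline-specific LDL targets would be used.
def statin_indicated(ten_year_risk_pct: float, ldl_mmol_l: float) -> bool:
    if ten_year_risk_pct >= 20:          # hypothetical "high risk" band
        return ldl_mmol_l >= 2.6
    if ten_year_risk_pct >= 10:          # hypothetical "intermediate risk" band
        return ldl_mmol_l >= 3.4
    return ldl_mmol_l >= 4.9             # hypothetical "low risk" band

print(statin_indicated(ten_year_risk_pct=15, ldl_mmol_l=3.8))  # True
```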

Relevance:

60.00%

Publisher:

Abstract:

Triheptanoin-enriched diets have been successfully used in the experimental treatment of various metabolic disorders. The maximal therapeutic effect is achieved in the context of a ketogenic diet in which triheptanoin oil provides 30-40% of the daily caloric intake. However, pre-clinical studies using triheptanoin-rich diets are hindered by the difficulty of administering the oil to laboratory animals as a solid foodstuff. In the present study, we successfully synthesized triheptanoin to the highest standards of purity from glycerol and heptanoic acid, using sulfonated charcoal as a catalyst. The triheptanoin oil was then formulated as a solid, stable and palatable preparation using a ketogenic base and a combination of four commercially available formulation agents: hydrophilic fumed silica, hydrophobic fumed silica, microcrystalline cellulose, and talc. Diet compliance and safety were tested in C57BL/6 mice over a 15-week period by comparing overall status and body weight change. Practical applications: This work provides a complete description of (i) an efficient and cost-effective synthesis of triheptanoin and (ii) its formulation as a solid, stable, and palatable ketogenic diet (triheptanoin-rich; 39% of the caloric intake) for rodents. Triheptanoin-rich diets will be helpful in pre-clinical experiments testing the therapeutic efficacy of triheptanoin in different rodent models of human diseases. In addition, using the same solidification procedure, other oils could be incorporated into rodent ketogenic diets to study their dosage and long-term effects on mammalian health and development. This approach could be extremely valuable, as the ketogenic diet is widely used clinically for the treatment of epilepsy.
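As a rough illustration of the formulation target mentioned above (triheptanoin supplying 39% of the caloric intake), the following sketch estimates the required oil mass fraction. Both energy densities are assumed round numbers, not values from the study.

```python
# Back-of-the-envelope formulation check: what mass fraction of the finished
# diet must be triheptanoin so that it supplies 39% of the calories?
triheptanoin_kcal_per_g = 8.3   # assumed, typical of medium-chain triglycerides
diet_kcal_per_g = 6.7           # assumed overall density of a ketogenic base

def oil_mass_fraction(target_kcal_share: float) -> float:
    # grams of oil per gram of finished diet giving the target calorie share
    return target_kcal_share * diet_kcal_per_g / triheptanoin_kcal_per_g

print(f"oil mass fraction for 39% kcal: {oil_mass_fraction(0.39):.2%}")
```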

Relevance:

60.00%

Publisher:

Abstract:

One of the major practical applications of the carcinoembryonic antigen (CEA) assay is the monitoring of colorectal carcinoma patients after complete tumor resection. During the last 5 years, we have followed, with repeated CEA assays, 66 patients with histologically confirmed colon or rectal adenocarcinoma. Among the 19 patients who developed a tumor recurrence, 17 had increased CEA levels preceding the clinical diagnosis by 2 to 26 months. Among the 47 patients who showed no clinical evidence of tumor recurrence, 35 had CEA values remaining below the limit of 5 ng/ml, whereas 12 had moderate elevations of CEA fluctuating around this limit. The majority of patients in this last group were heavy smokers or had liver enlargement, but for a few of them we did not find a satisfactory explanation for their moderately increased CEA levels. While our results confirm that repeated CEA assays can predict tumor recurrence with a lead time of several months over clinical diagnosis, they also offer a word of warning concerning the interpretation of moderate elevations of CEA. A moderate increase in CEA can result from early distant metastases, local recurrence, or the exacerbation of an inflammatory disease. We feel that the decision to perform second-look operations on the basis of CEA results should be made only if increasing CEA values have been observed in three different blood samples taken within a period of 3 months and if no nonmalignant diseases known to increase CEA are present. Ultimately, only randomized clinical studies will determine whether second-look operations motivated by elevated CEA levels can improve the quality and length of survival of patients with colorectal carcinoma.
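The decision rule proposed in the last paragraph can be stated compactly in code. The sketch below is a direct transcription of that rule, with a hypothetical 92-day window standing in for "a period of 3 months".

```python
# Sketch of the decision rule from the abstract: consider a second-look
# operation only when CEA rises across three separate samples drawn within
# 3 months and no known nonmalignant cause of CEA elevation is present.
from datetime import date

def second_look_warranted(samples: list[tuple[date, float]],
                          nonmalignant_cause: bool) -> bool:
    if nonmalignant_cause or len(samples) < 3:
        return False
    last3 = sorted(samples)[-3:]                       # three most recent samples
    within_3_months = (last3[-1][0] - last3[0][0]).days <= 92
    increasing = last3[0][1] < last3[1][1] < last3[2][1]
    return within_3_months and increasing

samples = [(date(2024, 1, 5), 4.1), (date(2024, 2, 2), 6.3), (date(2024, 3, 1), 9.0)]
print(second_look_warranted(samples, nonmalignant_cause=False))  # True
```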

Relevance:

60.00%

Publisher:

Abstract:

The choice network revenue management (RM) model incorporates customer purchase behavior as customers purchasing products with certain probabilities that are a function of the offered assortment of products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The underlying stochastic dynamic program is intractable, and even its certainty-equivalence approximation, in the form of a linear program called the Choice Deterministic Linear Program (CDLP), is difficult to solve in most cases. The separation problem for CDLP is NP-complete for MNL with just two segments when their consideration sets overlap; the affine approximation of the dynamic program is NP-complete for even a single-segment MNL. This is in contrast to the independent-class (perfect-segmentation) case, where even the piecewise-linear approximation has been shown to be tractable. In this paper we investigate the piecewise-linear approximation for network RM under a general discrete-choice model of demand. We show that the gap between the CDLP and the piecewise-linear bounds is within a factor of at most 2. We then show that the piecewise-linear approximation is polynomial-time solvable for a fixed consideration set size, bringing it into the realm of tractability for small consideration sets; small consideration sets are a reasonable modeling tradeoff in many practical applications. Our solution relies on showing that for any discrete-choice model the separation problem for the linear program of the piecewise-linear approximation can be solved exactly by a Lagrangian relaxation. We give modeling extensions and show by numerical experiments the improvements from using piecewise-linear approximation functions.
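For readers unfamiliar with the demand primitive underlying CDLP, the sketch below computes multinomial logit (MNL) purchase probabilities for an offered assortment; the preference weights are arbitrary illustrative values, not data from the paper.

```python
# Minimal sketch of the demand model in choice network RM: MNL purchase
# probabilities as a function of the offered assortment. CDLP and its
# piecewise-linear refinement optimize over offer sets using probabilities
# of exactly this form.
v = {"Y": 1.0, "M": 1.8, "K": 2.5}   # MNL preference weights, hypothetical
v0 = 1.0                             # no-purchase weight

def mnl_probs(offer_set):
    denom = v0 + sum(v[p] for p in offer_set)
    probs = {p: v[p] / denom for p in offer_set}
    probs["no_purchase"] = v0 / denom
    return probs

print(mnl_probs({"Y", "K"}))  # probabilities sum to 1 over offered products + no-purchase
```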

Relevance:

60.00%

Publisher:

Abstract:

The study of transcriptional regulation often needs the integration of diverse yet independent data. In the present work, sequence conservation, prediction of transcription factor binding sites (TFBS) and gene expression analysis have been applied to the detection of putative transcription factor (TF) modules in the regulatory region of the FGFR3 oncogene. Several TFs with conserved binding sites in the FGFR3 regulatory region have shown high positive or negative correlation with FGFR3 expression both in urothelial carcinoma and in benign nevi. By means of conserved TF cluster analysis, two different TF modules have been identified in the promoter and first intron of the FGFR3 gene. These modules contain activating AP2, E2F, E47 and SP1 binding sites plus motifs for EGR with possible repressor function.
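The expression-correlation step described above can be illustrated with a short sketch; the expression vectors below are random stand-ins for the urothelial carcinoma and benign nevus profiles used in the study.

```python
# Sketch of the correlation step: rank candidate TFs by the correlation of
# their expression with FGFR3 across samples. Data are random stand-ins.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
fgfr3 = rng.normal(size=40)                            # FGFR3 expression, 40 samples
tf_expr = {"AP2": rng.normal(size=40),
           "E2F": 0.8 * fgfr3 + rng.normal(scale=0.5, size=40),
           "EGR": -0.7 * fgfr3 + rng.normal(scale=0.5, size=40)}

for tf, x in tf_expr.items():
    r, p = pearsonr(x, fgfr3)
    print(f"{tf}: r = {r:+.2f}, p = {p:.3g}")          # sign hints activator vs repressor
```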

Relevance:

60.00%

Publisher:

Abstract:

In this work we present a simulation of a recognition process in which the perimeter characterization of simple plant leaves serves as the sole discriminating parameter. Data coding that provides independence from leaf size and orientation may penalize recognition performance for some varieties. Border description sequences are therefore used, and Principal Component Analysis (PCA) is applied in order to determine the best number of components for the classification task, which is implemented by means of a Support Vector Machine (SVM) system. The results obtained are satisfactory: compared with [4], our system improves recognition success while reducing the variance at the same time.
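A minimal version of the described pipeline, PCA followed by an SVM with the number of components chosen by cross-validation, can be sketched with scikit-learn as follows; the feature matrix and labels are random placeholders for the border description sequences.

```python
# Sketch of the recognition pipeline: PCA on border-description sequences,
# then an SVM classifier; the number of retained components is selected by
# cross-validation, mirroring the component-count study in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 64))          # stand-in border descriptors
y = rng.integers(0, 4, size=120)        # stand-in variety labels

pipe = Pipeline([("pca", PCA()), ("svm", SVC(kernel="rbf"))])
grid = GridSearchCV(pipe, {"pca__n_components": [5, 10, 20, 40]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```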

Relevance:

60.00%

Publisher:

Abstract:

False identity documents represent a serious threat through their production and use in organized crime and by terrorist organizations. The present-day fight against this criminal problem and the associated threats to national security does not appropriately address the organized nature of the activity: each fraudulent document is treated on its own during the investigation and judicial process, which causes linkage blindness and restrains analysis capacity. Given the drawbacks of this case-by-case approach, this article proposes an original model in which false identity documents are used to inform a systematic forensic intelligence process. The process aims to detect links, patterns, and tendencies among false identity documents in order to support strategic and tactical decision making, thus sustaining a proactive, intelligence-led approach to fighting identity document fraud and the associated organized criminality. This article formalizes both the model and the process, using practical applications to illustrate its powerful capabilities. The model has general applicability and can be transposed to other fields of forensic science facing similar difficulties.
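A toy version of the linkage-detection step could look as follows; the document features and the similarity threshold are hypothetical and only illustrate the idea of profiling and comparing documents, not the operational method.

```python
# Toy sketch of linkage detection: profile each false document by a set of
# features and flag pairs whose similarity exceeds a threshold as candidate
# links. Features and threshold are illustrative placeholders.
from itertools import combinations

docs = {
    "doc_A": {"printing": "inkjet", "font": "F1", "substrate": "S3", "security": "none"},
    "doc_B": {"printing": "inkjet", "font": "F1", "substrate": "S3", "security": "copied"},
    "doc_C": {"printing": "offset", "font": "F9", "substrate": "S1", "security": "none"},
}

def similarity(a, b):
    keys = a.keys() & b.keys()
    return sum(a[k] == b[k] for k in keys) / len(keys)

for (n1, d1), (n2, d2) in combinations(docs.items(), 2):
    s = similarity(d1, d2)
    if s >= 0.75:                     # hypothetical linkage threshold
        print(f"candidate link: {n1} <-> {n2} (similarity {s:.2f})")
```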

Relevance:

60.00%

Publisher:

Abstract:

Continuous condition monitoring of an electric motor requires data transfer from the motor to a higher information-system level, such as the frequency converter. Installing new cables is laborious and expensive. However, there is always a cable between the motor and the frequency converter, the one used to feed power to the motor, and this cable can also be used as a data transmission channel. This master's thesis examines and studies a power-line communication method with which an Ethernet connection can be established from the motor to the frequency converter, or vice versa. A simulation model is developed for simulating the frequency-dependent attenuation of the transmission channel. In addition, a coupling interface is developed that enables data transmission over the three-phase motor cable fed by the frequency converter. A pilot system for the data transmission is designed and built, and laboratory tests are carried out with it to verify the suitability of the method. The laboratory tests are also used to find the boundary conditions for the operation of the data transmission and the factors affecting the data rate in a frequency-converter drive. Furthermore, the delay introduced by the method, which is important for control applications, is examined. Finally, the suitability of the method for practical applications is assessed.
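A sketch of the kind of frequency-dependent channel model described above is given below, following the standard multipath power-line-communication formulation; all path gains, lengths and attenuation coefficients are assumed values, not measurements from the thesis.

```python
# Sketch of a frequency-dependent attenuation model for a power-line
# communication channel over a motor cable, in the style of standard
# multipath PLC channel models. All parameters are assumed values.
import numpy as np

f = np.linspace(1e6, 30e6, 500)                  # carrier frequencies, 1-30 MHz
a0, a1, k = 1e-3, 2e-10, 0.7                     # attenuation coefficients (assumed)
paths = [(0.6, 30.0), (0.3, 45.0), (0.1, 60.0)]  # (gain, path length in metres)
v = 1.6e8                                         # propagation speed in cable, m/s

# Each path contributes a frequency-dependent attenuation and a delay phase.
H = sum(g * np.exp(-(a0 + a1 * f ** k) * d) * np.exp(-2j * np.pi * f * d / v)
        for g, d in paths)

idx = np.argmin(abs(f - 1e7))
print("attenuation at 10 MHz: %.1f dB" % (-20 * np.log10(abs(H[idx]))))
```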

Relevance:

60.00%

Publisher:

Abstract:

The aim of this master's thesis was to study how air turbulence affects the state of a uniformly moving web. One industrial application is, for example, the fluidized-bed dryer. It is known that increasing machine speeds, and the resulting increase in air-flow velocity, exert forces on the web and can cause flutter. Flutter leads to dynamic instability, which can be observed when a linear system becomes unstable and gives rise to nonlinear, bounded oscillation. Flutter degrades product quality and can lead to web breaks. The thesis presents knowledge of the air-web interaction that can be used to develop a simplified model for simulating a moving web in a dryer. The flow equations of the gas phase are solved using different turbulence models. The deformation of the viscoelastic web is also examined. Because exact physical and mechanical values for the web are not available in the literature, these properties were tested with different values so that the behaviour of the web under stress could be examined. Knowing these properties is of primary importance when determining the aero-viscoelastic behaviour of the web. Flow simulation is expensive and time-consuming, which calls for the adoption of new research methods. In this thesis, a simplified model containing the properties that describe the air-web interaction is presented as an alternative approach. The model provides information on the effects of nonlinearity and turbulence, and on the complex connection between stability, externally induced oscillation and self-induced oscillation. The thesis concludes with an illustrative example describing the conditions under which the uniform motion of the web becomes unstable. When the pressure fluctuation caused by turbulence exceeds a certain limit, the oscillation of the web grows, changing from random to organised. The results obtained show that turbulence has a large effect and cannot be ignored. The viscoelastic properties of the web must also be taken into account in order to describe its behaviour accurately.
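The threshold behaviour summarised above can be illustrated with a toy aeroelastic model: a web mode whose effective damping decreases with air speed, driven by random turbulent pressure, with a cubic stiffness bounding the response. All parameters are illustrative, not from the thesis.

```python
# Toy flutter model: the airflow contributes a speed-dependent (negative)
# damping term to one web mode, and turbulent pressure provides random
# forcing. Below a critical air speed the motion stays small; above it the
# oscillation grows until the cubic stiffness limits the amplitude.
import numpy as np

def peak_amplitude(air_speed, dt=1e-4, n=200_000, seed=3):
    rng = np.random.default_rng(seed)
    wn = 2 * np.pi * 5.0                 # natural frequency of the web mode, rad/s
    zeta = 0.02 - 0.004 * air_speed      # structural damping minus aeroelastic term
    k_nl = 1e4                           # cubic stiffness bounding the response
    x = v = peak = 0.0
    for _ in range(n):
        force = rng.normal(scale=0.5)    # turbulent pressure forcing
        a = force - 2 * zeta * wn * v - wn ** 2 * x - k_nl * x ** 3
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

for u in (2.0, 5.0, 8.0):                # critical speed is ~5 m/s in this toy
    print(f"air speed {u} m/s -> peak deflection {peak_amplitude(u):.3f}")
```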

Relevance:

60.00%

Publisher:

Abstract:

Current sensors are needed in many kinds of applications, where they measure both the magnitude and the quality of the current and act as part of its control system. Current sensors are also needed for detecting fault conditions in various protection circuits. In frequency converters, current measurement is very important, and because of the large currents and high frequencies involved it must be designed with care. This master's thesis examines and studies different current measurement methods with which reliable current measurement in a frequency converter can be implemented. Different methods of implementing the current measurement are studied, after which a suitable method is selected and its implementation alternatives are examined. After selecting a suitable implementation alternative, a current sensor specifically suited to frequency-converter use is designed. Finally, the properties of the designed sensor are studied by simulation, after which its suitability for practical applications is assessed and various means of improving it are evaluated.

Relevance:

60.00%

Publisher:

Abstract:

This work was carried out in the Laboratory of Fluid Dynamics at Lappeenranta University of Technology during the years 1991-1996. The research was part of a larger high-speed technology development programme. At the outset, the idea was to build high-speed machinery applications around the Brayton cycle. There was a clear need to deepen the knowledge of the cycle itself and to take a new approach in this field of research. The removal of water from humid air also appeared very interesting. The goal of this work was to study methods of designing high-speed machinery for the reversed Brayton cycle, from theoretical principles to practical applications. The reversed Brayton cycle can be employed as an air dryer, a heat pump or a refrigerating machine. In this research, the use of humid air as a working fluid has an environmental advantage as well. A new calculation method for the Brayton cycle is developed. In this method the expansion process in the turbine is especially important because of the condensation of water vapour in the humid air; this physical phenomenon can have significant effects on the performance of the application. The influence of calculating the process with actual, achievable process equipment efficiencies is also essential for the development of future machinery. The above theoretical calculations are confirmed with two different laboratory prototypes. The high-speed machinery concept allows one to build an application with only one rotating shaft carrying all the major parts: the high-speed motor, the compressor and the turbine wheel. The use of oil-free bearings and high rotational speeds gives several advantages over conventional machinery: light weight, compact structure, safe operation and higher efficiency over a large operating region. There are always problems when theory is applied to practice. The calibrations of the pressure, temperature and humidity probes were made with care, but the measurement errors were still not negligible. Several different separators were examined, and in all cases the amount of separated water was inexact. Owing to the compact size and structure of the prototypes, the process measurements were somewhat difficult. The experimental results agree well with the theoretical calculations. These experiments prove the operation of the process and lay the ground for further development. The results of this work open very promising possibilities for the design of new, commercially competitive applications that use high-speed machinery and the reversed Brayton cycle.
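A minimal state-point calculation for the reversed Brayton cycle with non-ideal components, of the kind underlying the method above, is sketched below for dry air; the condensation of water vapour discussed in the text is deliberately omitted, and all efficiencies are assumed values.

```python
# Minimal reversed Brayton cycle state calculation with non-ideal components.
# Dry air with constant cp; condensation effects are omitted for brevity.
cp, kappa = 1005.0, 1.4            # J/(kg K) and heat capacity ratio of air
T1, pr = 293.0, 2.0                # compressor inlet temperature (K), pressure ratio
eta_c, eta_t = 0.80, 0.82          # isentropic efficiencies (assumed values)

T2s = T1 * pr ** ((kappa - 1) / kappa)      # isentropic compressor outlet
T2 = T1 + (T2s - T1) / eta_c                # actual outlet with compressor losses
T3 = 300.0                                  # temperature after heat rejection (assumed)
T4s = T3 * pr ** (-(kappa - 1) / kappa)     # isentropic turbine outlet
T4 = T3 - eta_t * (T3 - T4s)                # actual outlet: the cold, dry air stream

w_net = cp * ((T2 - T1) - (T3 - T4))        # net specific work input, J/kg
print(f"T2 = {T2:.1f} K, T4 = {T4:.1f} K, net work in = {w_net / 1000:.1f} kJ/kg")
```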

Relevance:

60.00%

Publisher:

Abstract:

Mechanistic soil-crop models have become indispensable tools for investigating the effect of management practices on the productivity or environmental impacts of arable crops. Ideally, these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite for realistic predictions. This implies that some experimental knowledge of the system to be simulated should be available prior to any modelling attempt, which places a tremendous limitation on the practical application of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of the models' a priori parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content and leaf area index, were compared with observations from trials conducted at five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared with the experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were nevertheless noted at all sites and could be ascribed to various model routines; in decreasing order of importance, these were the water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting the related parameters, such as the field-capacity water content or the size of the soil microbial biomass. As a result, model predictions fell within the measurement errors at all sites for most variables, and the model's RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
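The RMSE-based acceptance test mentioned above reduces to a short computation; the sketch below uses placeholder arrays in place of, for example, a measured soil water content series.

```python
# Sketch of the model-evaluation step: compare simulated and observed values
# with the root mean squared error and judge the fit against the measurement
# error. Arrays and the error bound are placeholders, not data from the trials.
import numpy as np

observed = np.array([0.24, 0.22, 0.19, 0.21, 0.26])   # placeholder measurements
simulated = np.array([0.26, 0.21, 0.17, 0.24, 0.25])  # placeholder model output
measurement_error = 0.03                              # assumed absolute error bound

rmse = np.sqrt(np.mean((simulated - observed) ** 2))
print(f"RMSE = {rmse:.3f}; acceptable: {rmse <= measurement_error}")
```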