915 results for "data treatment"
Abstract:
The log-ratio methodology provides powerful tools for analyzing compositional data. However, this methodology can only be applied to data sets without null values. Consequently, data sets in which zeros are present require prior treatment. Recent advances in the treatment of compositional zeros have focused mainly on zeros of a structural nature and on rounded zeros. These tools do not cover the particular case of count compositional data sets with null values. In this work we deal with "count zeros" and introduce a treatment based on a mixed Bayesian-multiplicative estimation. We use the Dirichlet probability distribution as a prior and estimate the posterior probabilities. We then apply a multiplicative modification to the non-zero values. We present a case study in which this new methodology is applied. Key words: count data, multiplicative replacement, composition, log-ratio analysis
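The Bayesian-multiplicative idea can be sketched in a few lines of numpy; the function name, the symmetric Dirichlet parameter `alpha`, and the example counts are illustrative assumptions, not the authors' exact estimator:

```python
import numpy as np

def bayes_mult_zero_replace(counts, alpha=0.5):
    """Impute count zeros with a Dirichlet posterior mean, then rescale
    the non-zero parts multiplicatively (illustrative sketch only)."""
    counts = np.asarray(counts, dtype=float)
    n, D = counts.sum(), len(counts)
    post = (counts + alpha) / (n + D * alpha)   # posterior mean under Dirichlet(alpha)
    comp = counts / n                           # observed closed composition
    zeros = counts == 0
    out = np.where(zeros, post, 0.0)            # Bayesian estimates for the zero parts
    out[~zeros] = comp[~zeros] * (1.0 - out[zeros].sum())  # multiplicative adjustment
    return out

print(bayes_mult_zero_replace([0, 3, 7]))
```

The multiplicative step shrinks the non-zero parts by a common factor, so their ratios (the quantities log-ratio analysis works with) are preserved while the whole composition still sums to one.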
Abstract:
Purpose: To evaluate the evolution over time of clinical and functional outcomes of symptomatic discoid lateral meniscus treated arthroscopically, and to investigate the relationship between associated intra-articular findings and outcomes. Methods: Of all patients treated arthroscopically between 1995 and 2010, those treated for symptomatic discoid meniscus were identified from hospital charts. Baseline data (demographics, previous trauma of the ipsilateral knee, and associated intra-articular findings) and medium-term outcome data from clinical follow-up examinations (pain, locking, snapping and instability of the operated knee) were extracted from clinical records. Telephone interviews were conducted at long-term follow-up in 28 patients (31 knees). Interviews covered clinical outcomes as well as functional outcomes as assessed by the International Knee Documentation Committee Subjective Knee Evaluation Form (IKDC). Results: All patients underwent arthroscopic partial meniscectomy. The mean follow-up time for data extracted from clinical records was 11 months (SD ± 12). A significant improvement was found for pain in 77% (p<0.001), locking in 13% (p=0.045) and snapping in 39% (p<0.005). The mean follow-up time of the telephone interview was 60 months (SD ± 43). Improvement from baseline was generally less after five years than after one year, and functional outcomes of the IKDC indicated abnormal function after surgery (IKDC mean = 84.5, SD ± 20). In some patients, 5-year outcomes were even worse than their preoperative condition. Nonetheless, 74% of patients perceived their knee function as improved. Furthermore, better results were seen in patients without any associated intra-articular findings. Conclusions: Arthroscopic partial meniscectomy is an effective intervention to relieve symptoms in patients with discoid meniscus in the medium term; however, results tend to deteriorate over time.
A trend towards better outcome for patients with no associated intra-articular findings was observed.
Abstract:
Objectives To determine the effect of human papillomavirus (HPV) quadrivalent vaccine on the risk of developing subsequent disease after an excisional procedure for cervical intraepithelial neoplasia or diagnosis of genital warts, vulvar intraepithelial neoplasia, or vaginal intraepithelial neoplasia. Design Retrospective analysis of data from two international, double blind, placebo controlled, randomised efficacy trials of quadrivalent HPV vaccine (protocol 013 (FUTURE I) and protocol 015 (FUTURE II)). Setting Primary care centres and university or hospital associated health centres in 24 countries and territories around the world. Participants Among 17 622 women aged 15–26 years who underwent 1:1 randomisation to vaccine or placebo, 2054 received cervical surgery or were diagnosed with genital warts, vulvar intraepithelial neoplasia, or vaginal intraepithelial neoplasia. Intervention Three doses of quadrivalent HPV vaccine or placebo at day 1, month 2, and month 6. Main outcome measures Incidence of HPV related disease from 60 days after treatment or diagnosis, expressed as the number of women with an end point per 100 person years at risk. Results A total of 587 vaccine and 763 placebo recipients underwent cervical surgery. The incidence of any subsequent HPV related disease was 6.6 and 12.2 in vaccine and placebo recipients respectively (46.2% reduction (95% confidence interval 22.5% to 63.2%) with vaccination). Vaccination was associated with a significant reduction in risk of any subsequent high grade disease of the cervix by 64.9% (20.1% to 86.3%). A total of 229 vaccine recipients and 475 placebo recipients were diagnosed with genital warts, vulvar intraepithelial neoplasia, or vaginal intraepithelial neoplasia, and the incidence of any subsequent HPV related disease was 20.1 and 31.0 in vaccine and placebo recipients respectively (35.2% reduction (13.8% to 51.8%)). 
Conclusions Previous vaccination with quadrivalent HPV vaccine among women who had surgical treatment for HPV related disease significantly reduced the incidence of subsequent HPV related disease, including high grade disease.
Abstract:
The implementation of European Directive 91/271/EEC on urban wastewater treatment promoted the construction of new facilities as well as the introduction of new technologies for nutrient removal in areas designated as sensitive. Both the design of these new infrastructures and the redesign of existing ones were carried out using approaches based mainly on economic objectives, owing to the need to complete the works within a relatively short period. These studies relied on heuristic knowledge or on numerical correlations derived from simplified deterministic models. As a result, many of the resulting wastewater treatment plants (WWTPs) were characterized by a lack of robustness and flexibility, poor controllability, frequent microbiological solids-separation problems in the secondary settler, high operating costs and only partial nutrient removal, keeping them far from optimal performance. Many of these problems arose from inadequate design, and the scientific community came to recognize the importance of the early conceptual design stages. Precisely for this reason, traditional design methods must evolve towards more complex evaluation systems that take multiple objectives into account, thereby ensuring better plant performance. Despite the importance of multi-objective conceptual design, there is still a significant gap in the scientific literature addressing this field of research. The aim of this thesis is to develop a conceptual design method for WWTPs that considers multiple objectives, so that it can serve as a decision-support tool when selecting the best alternative among different design options.
This research contributes a modular, evolutionary design method that combines different techniques: hierarchical decision processes, multicriteria analysis, preliminary multi-objective optimization based on sensitivity analysis, knowledge-extraction and data-mining techniques, multivariate analysis, and uncertainty analysis through Monte Carlo simulation. This is achieved by subdividing the design method developed in this thesis into four main blocks: (1) hierarchical generation and multicriteria analysis of alternatives, (2) analysis of critical decisions, (3) multivariate analysis and (4) uncertainty analysis. The first block combines a hierarchical decision process with multicriteria analysis. The hierarchical decision process breaks conceptual design down into a series of questions that are easier to analyse and evaluate, while the multicriteria analysis allows several objectives to be considered at the same time. This reduces the number of alternatives to be evaluated and ensures that the future design and operation of the plant are influenced by environmental, economic, technical and legal aspects. Finally, this block includes a sensitivity analysis of the weights, which shows how the ranking of alternatives changes as the relative importance of the design objectives varies. The second block brings together sensitivity analysis, preliminary multi-objective optimization and knowledge-extraction techniques to support the conceptual design of WWTPs, selecting the best alternative once critical decisions have been identified. Critical decisions are those in which a choice must be made between alternatives that fulfil the design objectives to a similar degree but have different implications for the future structure and operation of the plant.
This type of analysis provides a broader view of the design space and identifies desirable (or undesirable) directions in which the design process may evolve. The third block of the thesis provides the multivariate analysis of the multicriteria matrices obtained during the evaluation of the design alternatives. Specifically, the techniques used in this research comprise: (1) cluster analysis, (2) principal component analysis/factor analysis and (3) discriminant analysis. As a result, the data supporting the selection of alternatives become more accessible, more information is available for a more effective evaluation, and knowledge of the evaluation process for the generated design alternatives is increased. In the fourth and final block developed in this thesis, the different design alternatives are evaluated under uncertainty. The objective of this block is to study how decision making changes when an alternative is evaluated with or without uncertainty in the parameters of the models that describe its behaviour. Uncertainty in the model parameters is introduced through probability distributions. Monte Carlo simulations are then carried out: random numbers drawn from these distributions are substituted for the model parameters, making it possible to study how uncertainty propagates through the model. It thus becomes possible to analyse the variation in the overall fulfilment of the design objectives for each alternative, the contributions of environmental, legal, economic and technical aspects to that variation, and finally the change in the selection of alternatives when the relative importance of the design objectives varies. Compared with traditional design approaches, the method developed in this thesis addresses design/redesign problems while taking multiple objectives and multiple criteria into account.
At the same time, the decision-making process shows objectively, transparently and systematically why one alternative is selected over the others, providing the option that best fulfils the stated objectives, showing its strengths and weaknesses and the main correlations between objectives and alternatives, and finally taking into account the uncertainty inherent in the model parameters used during the analyses. The capabilities of the method are demonstrated through several case studies: selection of the type of biological nitrogen removal (case study #1), optimization of a control strategy (case study #2), redesign of a plant to achieve simultaneous removal of carbon, nitrogen and phosphorus (case study #3) and finally analysis of plant-wide control strategies (case studies #4 and #5).
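The Monte Carlo uncertainty-propagation step described above can be sketched minimally: draw an uncertain model parameter from an assumed distribution, score each design alternative, and compare the resulting distributions. The score functions, parameter range and numbers below are invented for illustration, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical design alternatives whose overall multicriteria score
# depends on one uncertain model parameter (e.g. a removal efficiency).
def score_alt_a(eff):          # cheaper plant, sensitive to the parameter
    return 0.4 + 0.6 * eff

def score_alt_b(eff):          # costlier plant, robust to it
    return 0.7 + 0.1 * eff

eff = rng.uniform(0.5, 0.9, size=10_000)   # assumed parameter uncertainty
a, b = score_alt_a(eff), score_alt_b(eff)

print(f"A: {a.mean():.3f} +/- {a.std():.3f}")
print(f"B: {b.mean():.3f} +/- {b.std():.3f}")
print(f"P(A outperforms B) = {(a > b).mean():.2f}")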
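The Monte Carlo uncertainty-propagation step described above can be sketched minimally: draw an uncertain model parameter from an assumed distribution, score each design alternative, and compare the resulting distributions. The score functions, parameter range and numbers below are invented for illustration, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical design alternatives whose overall multicriteria score
# depends on one uncertain model parameter (e.g. a removal efficiency).
def score_alt_a(eff):          # cheaper plant, sensitive to the parameter
    return 0.4 + 0.6 * eff

def score_alt_b(eff):          # costlier plant, robust to it
    return 0.7 + 0.1 * eff

eff = rng.uniform(0.5, 0.9, size=10_000)   # assumed parameter uncertainty
a, b = score_alt_a(eff), score_alt_b(eff)

print(f"A: {a.mean():.3f} +/- {a.std():.3f}")
print(f"B: {b.mean():.3f} +/- {b.std():.3f}")
print(f"P(A outperforms B) = {(a > b).mean():.2f}")
```

The point of the exercise is exactly the one made in the abstract: an alternative that wins on mean score may still lose with appreciable probability once parameter uncertainty is propagated.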
Abstract:
Environmental legislation has become more restrictive regarding the discharge of nutrient-laden wastewater, especially in so-called sensitive areas or vulnerable zones. This has stimulated the knowledge, development and improvement of nutrient removal processes. The Sequencing Batch Reactor (SBR) is an activated sludge treatment system operated on a fill-and-draw basis. In this type of reactor, the wastewater is added to a single reactor that works in batches, repeating a cycle (sequence) over time. One characteristic of SBRs is that all the different operations (fill, react, settle and draw) take place in the same reactor. SBR technology is not new; in fact, it appeared before the continuous activated sludge treatment system. The precursor of the SBR was a fill-and-draw system operated in batch mode. Between 1914 and 1920, these reactors ran into various difficulties, many of them operational (valves, switching flow from one reactor to another, high operator attention time...). It was not until the late 1950s and early 1960s, with the development of new equipment and technologies, that interest in SBRs re-emerged. Major improvements in air supply (motorized or pneumatically actuated valves) and in control (level probes, flow meters, automatic timers, microprocessors) now allow SBRs to compete with conventional activated sludge systems. The objective of this thesis is to identify suitable cycle operating conditions according to the type of influent wastewater, the treatment requirements and the desired effluent quality using SBR technology.
These three characteristics, the water to be treated, the treatment requirements and the desired final quality, largely determine the treatment to be applied. Accordingly, different feeding strategies were studied in order to adapt the treatment to each type of wastewater and its requirements. The process is monitored through on-line measurements of pH, DO and ORP, whose changes provide information on the state of the process. A further parameter that can be calculated from the dissolved oxygen is the OUR (oxygen uptake rate), which complements the above measurements. Operating conditions for nitrogen removal from a synthetic wastewater were evaluated using a step-feed strategy, by studying the effect of the number of feeds, defining the length and number of phases per cycle, and identifying the critical points from the pH, DO and ORP probes. The step-feed strategy was applied to two different wastewaters: one from a textile industry and the other landfill leachate. For both wastewaters, process efficiency was studied on the basis of the operating conditions and the oxygen uptake rate. While for the textile wastewater the main objective was to remove organic matter, for the landfill leachate it was to remove both organic matter and nitrogen. Operating conditions for nitrogen and phosphorus removal from an urban wastewater were evaluated using a step-feed strategy, by defining the number and length of the phases per cycle and identifying the critical points from the pH, DO and ORP probes. The influence of pH and the carbon source on phosphorus removal from a synthetic wastewater was analysed by studying the pH increase in two reactors with different carbon sources and the effect of changing the carbon source.
As shown throughout this thesis, in which different wastewaters were treated for different requirements, one of the most important advantages of an SBR is its flexibility.
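The OUR mentioned above is typically estimated from the slope of the dissolved-oxygen decline when aeration is paused; a minimal sketch with invented DO readings (the times and concentrations are not data from this thesis):

```python
import numpy as np

# Invented DO readings (mg O2/L) during an aeration pause
t = np.array([0, 30, 60, 90, 120])          # time, s
do = np.array([6.1, 5.4, 4.8, 4.1, 3.5])    # dissolved oxygen
slope = np.polyfit(t, do, 1)[0]             # DO decline rate, mg/(L*s)
our = -slope * 3600                         # OUR, mg O2/(L*h)
print(f"OUR = {our:.1f} mg O2/(L*h)")
```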
Abstract:
Data assimilation provides techniques for combining observations and prior model forecasts to create initial conditions for numerical weather prediction (NWP). The relative weighting assigned to each observation in the analysis is determined by its associated error. Remote sensing data usually has correlated errors, but the correlations are typically ignored in NWP. Here, we describe three approaches to the treatment of observation error correlations. For an idealized data set, the information content under each simplified assumption is compared with that under correct correlation specification. Treating the errors as uncorrelated results in a significant loss of information. However, retention of an approximated correlation gives clear benefits.
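The effect can be illustrated with a two-observation scalar toy problem: build the analysis gain with an assumed observation-error covariance, then measure the actual analysis error when the real errors are correlated. The variances and the correlation value 0.8 are assumptions of the sketch, not the paper's idealized data set:

```python
import numpy as np

B = np.array([[1.0]])            # background-error variance, scalar state
H = np.ones((2, 1))              # two direct observations of the state
r = 0.8                          # assumed true observation-error correlation
R_true = np.array([[1.0, r], [r, 1.0]])

def analysis_error(R_assumed):
    """Actual analysis-error variance when the Kalman-type gain uses
    R_assumed but the observation errors really follow R_true."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R_assumed)
    A = np.eye(1) - K @ H
    return (A @ B @ A.T + K @ R_true @ K.T).item()

opt = analysis_error(R_true)     # correct correlation specification
unc = analysis_error(np.eye(2))  # errors treated as uncorrelated
print(f"correct R: {opt:.4f}   diagonal R: {unc:.4f}")
```

The diagonal-R analysis has a strictly larger actual error variance, i.e. less information is extracted from the same observations, which is the qualitative point of the abstract.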
Abstract:
A regional overview of the water quality and ecology of the River Lee catchment is presented. Specifically, data describing the chemical, microbiological and macrobiological water quality and fisheries communities have been analysed, based on a division into river, sewage treatment works, fish-farm, lake and industrial samples. Nutrient enrichment and the highest concentrations of metals and micro-organics were found in the urbanised, lower reaches of the Lee and in the Lee Navigation. Average annual concentrations of metals were generally within environmental quality standards although, on many occasions, concentrations of cadmium, copper, lead, mercury and zinc were in excess of the standards. Various organic substances (used as herbicides, fungicides, insecticides, chlorination by-products and industrial solvents) were widely detected in the Lee system. Concentrations of ten micro-organic substances were observed in excess of their environmental quality standards, though not in terms of annual averages. Sewage treatment works were the principal point source input of nutrients, metals and micro-organic determinands to the catchment. Diffuse nitrogen sources contributed approximately 60% and 27% of the in-stream load in the upper and lower Lee respectively, whereas approximately 60% and 20% of the in-stream phosphorus load was derived from diffuse sources in the upper and lower Lee. For metals, the most significant source was the urban runoff from North London. In reaches less affected by effluent discharges, diffuse runoff from urban and agricultural areas dominated trends. High microbiological content, observed in the River Lee particularly in urbanised reaches, was far in excess of the EC Bathing Water Directive standards.
Water quality issues and degraded habitat in the lower reaches of the Lee have led to impoverished aquatic fauna but, within the mid-catchment reaches and upper agricultural tributaries, less nutrient enrichment and channel alteration has permitted more diverse aquatic fauna.
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment-selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
Abstract:
Background: Meta-analyses based on individual patient data (IPD) are regarded as the gold standard for systematic reviews. However, the methods used for analysing and presenting results from IPD meta-analyses have received little discussion. Methods: We review 44 IPD meta-analyses published during the years 1999–2001. We summarize whether they obtained all the data they sought, what types of approaches were used in the analysis, including assumptions of common or random effects, and how they examined the effects of covariates. Results: Twenty-four out of 44 analyses focused on time-to-event outcomes, and most analyses (28) estimated treatment effects within each trial and then combined the results assuming a common treatment effect across trials. Three analyses failed to stratify by trial, analysing the data as if they came from a single mega-trial. Only nine analyses used random-effects methods. Covariate-treatment interactions were generally investigated by subgrouping patients. Seven of the meta-analyses included data from fewer than 80% of the randomized patients sought, but did not address the resulting potential biases. Conclusions: Although IPD meta-analyses have many advantages in assessing the effects of health care, there are several aspects that could be further developed to make fuller use of the potential of these time-consuming projects. In particular, IPD could be used to investigate more fully the influence of covariates on heterogeneity of treatment effects, both within and between trials. The impact of heterogeneity, or the use of random effects, is seldom discussed. There is thus considerable scope for enhancing the methods of analysis and presentation of IPD meta-analysis.
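The "estimate within each trial, then combine assuming a common effect" approach that most of the reviewed analyses used is standard inverse-variance pooling; a minimal sketch with invented per-trial estimates (not data from any of the 44 reviews):

```python
import numpy as np

# Hypothetical per-trial log hazard ratios and their standard errors
effects = np.array([-0.20, -0.35, -0.10])
ses = np.array([0.10, 0.15, 0.12])

w = 1 / ses**2                          # inverse-variance weights
pooled = (w * effects).sum() / w.sum()  # common-effect estimate
pooled_se = w.sum() ** -0.5
print(f"pooled effect {pooled:.3f} (SE {pooled_se:.3f})")
```

A random-effects version would widen `pooled_se` by adding a between-trial variance component to each weight, which is precisely the option the review found to be rarely used.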
Abstract:
A score test is developed for binary clinical trial data, which incorporates patient non-compliance while respecting randomization. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistic for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power is possible over the intention-to-treat analysis, by adjusting for patient non-compliance. Sample size formulae are derived and simulation studies are used to demonstrate that the sample size approximation holds. Copyright © 2003 John Wiley & Sons, Ltd.
Abstract:
Background and Purpose-Clinical research into the treatment of acute stroke is complicated, is costly, and has often been unsuccessful. Developments in imaging technology based on computed tomography and magnetic resonance imaging scans offer opportunities for screening experimental therapies during phase II testing so as to deliver only the most promising interventions to phase III. We discuss the design and the appropriate sample size for phase II studies in stroke based on lesion volume. Methods-Determination of the relation between analyses of lesion volumes and of neurologic outcomes is illustrated using data from placebo trial patients from the Virtual International Stroke Trials Archive. The size of an effect on lesion volume that would lead to a clinically relevant treatment effect in terms of a measure, such as modified Rankin score (mRS), is found. The sample size to detect that magnitude of effect on lesion volume is then calculated. Simulation is used to evaluate different criteria for proceeding from phase II to phase III. Results-The odds ratios for mRS correspond roughly to the square root of odds ratios for lesion volume, implying that for equivalent power specifications, sample sizes based on lesion volumes should be about one fourth of those based on mRS. Relaxation of power requirements, appropriate for phase II, leads to further sample size reductions. For example, a phase III trial comparing a novel treatment with placebo with a total sample size of 1518 patients might be motivated from a phase II trial of 126 patients comparing the same 2 treatment arms. Discussion-Definitive phase III trials in stroke should aim to demonstrate significant effects of treatment on clinical outcomes. However, more direct outcomes such as lesion volume can be useful in phase II for determining whether such phase III trials should be undertaken in the first place. (Stroke. 2009;40:1347-1352.)
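The one-fourth claim follows from standard sample-size scaling: if the lesion-volume odds ratio is the square of the mRS odds ratio, its log doubles, and n scales as 1/(log OR)^2. A quick check (the formula constants and the effect size 1.5 are illustrative, not the paper's exact numbers):

```python
import math

def n_required(log_or, z_alpha=1.96, z_beta=1.282, var_unit=4.0):
    """Generic normal-approximation sample size, n proportional to
    1/(log OR)^2 (constants are illustrative assumptions)."""
    return (z_alpha + z_beta) ** 2 * var_unit / log_or ** 2

or_mrs = 1.5                # hypothetical clinically relevant effect on mRS
or_lesion = or_mrs ** 2     # implied effect on lesion volume (square-root rule)
ratio = n_required(math.log(or_lesion)) / n_required(math.log(or_mrs))
print(f"{ratio:.2f}")
```

The ratio is exactly 0.25 whatever the constants, since they cancel, which is why the conclusion is independent of the particular power specification.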
Abstract:
This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald and score tests. All three tests can be implemented straightforwardly in standard statistical software packages, as can the corresponding confidence intervals. Simulations indicate that the three alternatives are similar in terms of the Type I error, with values close to the nominal level. However, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance. Copyright (C) 2007 John Wiley & Sons, Ltd.
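Of the three tests, the Wald version is the easiest to write down; a sketch for the non-inferiority case (the counts and the margin OR = 0.5 are invented, and this is the textbook Wald statistic rather than the paper's exact implementation):

```python
import math

def wald_noninferiority(x1, n1, x0, n0, margin_or):
    """Wald z-statistic for H0: OR <= margin_or vs H1: OR > margin_or,
    binary outcomes with x successes out of n per arm (sketch only)."""
    p1, p0 = x1 / n1, x0 / n0
    log_or = math.log(p1 / (1 - p1)) - math.log(p0 / (1 - p0))
    se = math.sqrt(1/x1 + 1/(n1 - x1) + 1/x0 + 1/(n0 - x0))
    return (log_or - math.log(margin_or)) / se

# Invented data: 80/100 on the new treatment, 82/100 on active control
z = wald_noninferiority(80, 100, 82, 100, 0.5)
print(f"z = {z:.2f}")   # compare with 1.645 for one-sided alpha = 0.05
```

Superiority is the special case `margin_or = 1`; the likelihood ratio and score versions replace `se` with variance estimates computed under restricted maximum likelihood, which is where the small Type I error differences reported in the abstract come from.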
Abstract:
This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial should be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that an experimental treatment is better than a control treatment by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored. Copyright (C) 2007 John Wiley & Sons, Ltd.
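For the single-stream binary setting, the posterior for the success rate under a conjugate Beta prior is easy to sketch; the prior choice, data and reference rate below are assumptions for illustration, not the paper's formulation:

```python
import random

random.seed(1)
# Beta(1,1) ("non-informative") prior on success rate p;
# observe x successes in n trials (invented data).
x, n, p0 = 30, 50, 0.5
a, b = 1 + x, 1 + (n - x)              # conjugate posterior is Beta(a, b)
draws = [random.betavariate(a, b) for _ in range(100_000)]
prob = sum(d > p0 for d in draws) / len(draws)
print(f"P(p > {p0} | data) = {prob:.3f}")
```

Sample-size determination then asks how large n must be for this posterior probability (or its counterpart under the clinically relevant alternative) to exceed a convincing threshold with high prior-predictive probability.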
Abstract:
Cardiovascular disease represents a major clinical problem affecting a significant proportion of the world's population and remains the main cause of death in the UK. The majority of therapies currently available for the treatment of cardiovascular disease do not cure the problem but merely treat the symptoms. Furthermore, many cardioactive drugs have serious side effects and have narrow therapeutic windows that can limit their usefulness in the clinic. Thus, the development of more selective and highly effective therapeutic strategies that could cure specific cardiovascular diseases would be of enormous benefit both to the patient and to those countries where healthcare systems are responsible for an increasing number of patients. In this review, we discuss the evidence that suggests that targeting the cell cycle machinery in cardiovascular cells provides a novel strategy for the treatment of certain cardiovascular diseases. Those cell cycle molecules that are important for regulating terminal differentiation of cardiac myocytes and whether they can be targeted to reinitiate cell division and myocardial repair will be discussed as will the molecules that control vascular smooth muscle cell (VSMC) and endothelial cell proliferation in disorders such as atherosclerosis and restenosis. The main approaches currently used to target the cell cycle machinery in cardiovascular disease have employed gene therapy techniques. We will overview the different methods and routes of gene delivery to the cardiovascular system and describe possible future drug therapies for these disorders. Although the majority of the published data comes from animal studies, there are several instances where potential therapies have moved into the clinical setting with promising results.
Abstract:
The increase in CVD incidence following the menopause is associated with oestrogen loss. Dietary isoflavones are thought to be cardioprotective via their oestrogenic and oestrogen receptor-independent effects, but evidence to support this role is scarce. Individual variation in response to diet may be considerable and can obscure potential beneficial effects in a sample population; in particular, the response to isoflavone treatment may vary according to genotype and equol-production status. The effects of isoflavone supplementation (50 mg/d) on a range of established and novel biomarkers of CVD, including markers of lipid and glucose metabolism and inflammatory biomarkers, have been investigated in a placebo-controlled 2x8-week randomised cross-over study in 117 healthy post-menopausal women. Responsiveness to isoflavone supplementation according to (1) single nucleotide polymorphisms in a range of key CVD genes, including oestrogen receptor (ER) alpha and beta, and (2) equol-production status has been examined. Isoflavone supplementation was found to have no effect on markers of lipid and glucose metabolism. Isoflavones improve C-reactive protein concentrations but do not affect other plasma inflammatory markers. There are no differences in response to isoflavones according to equol-production status. However, differences in HDL-cholesterol and vascular cell adhesion molecule 1 response to isoflavones v. placebo are evident with specific ER beta genotypes. In conclusion, isoflavones have beneficial effects on C-reactive protein, but not other cardiovascular risk markers. However, specific ER beta gene polymorphic subgroups may benefit from isoflavone supplementation.