Abstract:
To identify novel quantitative trait loci (QTL) in horses, we performed genome-wide association studies (GWAS) based on sequence-level genotypes for conformation and performance traits in the Franches-Montagnes (FM) horse breed. Sequence-level genotypes of FM horses were derived by re-sequencing 30 key founders and imputing the 50K SNP chip data of the genotyped horses. In total, we included 1077 FM horses genotyped for ~4 million SNPs, together with their respective de-regressed breeding values for the traits, in the analysis. Based on this dataset, we identified a total of 14 QTL associated with 18 conformation traits and one performance trait. Our results therefore suggest that the use of sequence-derived genotypes increases the power to identify novel QTL that were not previously detected with 50K SNP chip data.
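As a rough illustration of the kind of single-SNP association scan described above, the following sketch regresses de-regressed breeding values on imputed allele dosages, one SNP at a time. It is a simplification under stated assumptions (toy data, no mixed-model correction for relatedness, hypothetical array names), not the authors' exact pipeline.

# Minimal single-SNP GWAS sketch (illustrative only; hypothetical data).
import numpy as np
from scipy import stats

def gwas_scan(dosages, y):
    """Regress the phenotype on each SNP dosage column and return per-SNP p-values."""
    pvals = np.empty(dosages.shape[1])
    for j in range(dosages.shape[1]):
        slope, intercept, r, p, se = stats.linregress(dosages[:, j], y)
        pvals[j] = p
    return pvals

# Toy data: 1077 horses, 1000 of the ~4 million imputed SNPs, one simulated causal SNP.
rng = np.random.default_rng(0)
dosages = rng.binomial(2, 0.3, size=(1077, 1000)).astype(float)
y = 0.4 * dosages[:, 10] + rng.normal(size=1077)
pvals = gwas_scan(dosages, y)
print("smallest p-value at SNP", int(pvals.argmin()))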
Abstract:
AIMS: The Absorb bioresorbable vascular scaffold (Absorb BVS) provides clinical outcomes similar to those of a durable-polymer everolimus-eluting metallic stent (EES) in patients with stable coronary artery disease. ST-elevation myocardial infarction (STEMI) lesions have been associated with delayed arterial healing and impaired stent-related outcomes. The purpose of the present study is to directly compare the arterial healing response, angiographic efficacy and clinical outcomes of the Absorb BVS and the metallic EES. METHODS AND RESULTS: A total of 191 patients with acute STEMI were randomly allocated 1:1 to treatment with the Absorb BVS or a metallic EES. The primary endpoint is the neointimal healing (NIH) score, which takes into consideration the presence of uncovered and malapposed stent struts, intraluminal filling defects and excessive neointimal proliferation, as detected by optical frequency domain imaging (OFDI) six months after the index procedure. The study will provide 90% power to show non-inferiority of the Absorb BVS compared with the EES. CONCLUSIONS: This will be the first randomised study investigating the arterial healing response following implantation of the Absorb BVS compared with the EES. The healing response, assessed by a novel NIH score, together with results on angiographic efficacy parameters and device-oriented events, will elucidate disease-specific applications of bioresorbable scaffolds.
Abstract:
Genetic anticipation is defined as a decrease in age of onset, or an increase in severity, as a disorder is transmitted through subsequent generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy and Fragile X Syndrome, was shown to be caused by expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g. Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia) and other complex diseases.

Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected parent/affected child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. We have therefore developed family-based likelihood modeling approaches to model the underlying transmission of the disease gene and the penetrance function and hence detect anticipation. These methods can be applied in extended families, thus improving the power to detect anticipation compared with existing methods based only upon parents and children. The first method we propose is based on the regressive logistic hazard model and models anticipation through a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring and is appropriate for modeling the known triplet repeat diseases, in which the disease alleles can become more deleterious as they are transmitted across generations.

To evaluate the new methods, we performed extensive simulation studies under different conditions to assess the ability of the algorithms to detect genetic anticipation. Analysis by the first method yielded empirical power greater than 87%, based on the 5% type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible due to the current formulation of the software. The application of this method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence for a generation effect in both cases.
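As a much-simplified sketch of the first idea (a generation covariate in a hazard-type model for age of onset), the code below fits a discrete-time logistic hazard to person-year data and examines the generation coefficient. The authors' regressive logistic hazard model and family-based likelihood are more elaborate; all data and column names here are hypothetical.

# Sketch: generation number as a covariate in a discrete-time logistic hazard model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def person_period(df, max_age):
    """Expand one record per person into one record per year at risk."""
    rows = []
    for _, r in df.iterrows():
        last = int(min(r.onset_age if r.affected else r.censor_age, max_age))
        for age in range(1, last + 1):
            event = int(r.affected and age == int(r.onset_age))
            rows.append({"age": age, "generation": r.generation, "event": event})
    return pd.DataFrame(rows)

# Hypothetical pedigree data in which later generations tend to have earlier onset.
rng = np.random.default_rng(1)
gen = rng.integers(1, 4, size=300)
onset = np.maximum(20, rng.normal(60 - 5 * gen, 10)).round()
fam = pd.DataFrame({"generation": gen, "onset_age": onset,
                    "affected": True, "censor_age": onset})
pp = person_period(fam, max_age=80)
X = sm.add_constant(pp[["age", "generation"]])
fit = sm.Logit(pp["event"], X).fit(disp=0)
print(fit.params["generation"], fit.pvalues["generation"])  # positive coefficient suggests anticipation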
Abstract:
The holdout problem is commonly cited as the justification for eminent domain, but the nature of the problem is not well understood. This paper models the holdout problem in a bargaining framework, where a developer seeks to acquire several parcels of land for a large-scale development. We show that in the absence of eminent domain, holdouts are inevitable, threatening costly delay. However, if the developer has the power to use eminent domain to acquire the land from holdouts, all sellers will bargain, thus avoiding delay. An offsetting cost is that owners may negotiate prices below their true value, possibly resulting in excessive transfer of land to the developer.
Abstract:
Background. Various aspects of sustainability have taken root in the hospital environment; however, decisions to pursue sustainable practices within the framework of a master plan are not fully developed in National Cancer Institute (NCI)-designated cancer centers and in institutions subscribing to the Practice Greenhealth (PGH) listserv.

Methods. This cross-sectional study was designed to identify the organizational characteristics each study group pursued to implement sustainability practices, and to describe the barriers they encountered and the reasons behind their choices for undertaking certain sustainability practices. A web-based questionnaire was pilot tested and then sent to 64 NCI-designated cancer centers and 1638 institutions subscribing to the PGH listserv.

Results. Complete responses were received from 39 NCI-designated cancer centers and 58 institutions subscribing to the PGH listserv. NCI-designated cancer centers reported greater progress in integrating sustainability criteria into design and construction projects than hospitals of institutions subscribing to the PGH listserv (p < 0.05). Statistically significant differences were also identified between these two study groups in undertaking work-life options, conducting energy usage assessments, developing energy conservation and optimization plans, implementing solid-waste and hazardous-waste minimization programs, using energy-efficient vehicles, and reporting sustainability progress to external stakeholders; NCI-designated cancer centers were further along in implementing these programs (p < 0.05). In comparing the self-identified NCI-designated cancer centers with centers that indicated they were both NCI-designated and PGH subscribers, the latter had made greater progress in using their collective buying power to pursue sustainable purchasing practices within the medical community (p < 0.05). In both study groups, recycling programs were well developed.

Conclusions. Employee involvement was viewed as the most important reason for both study groups to pursue recycling initiatives and to incorporate environmental criteria into purchasing decisions. A written sustainability commitment did not readily translate into a high percentage of institutions having developed a sustainability master plan. Coordination of sustainability programs through a designated sustainability professional was not being undertaken by a large number of institutions within each study group; this may be due to the current economic downturn or to management's attention to the emerging health care legislation being debated in Congress.

Lifecycle assessments, an element of a carbon footprint, are seen as emerging areas of opportunity for health care institutions that can be used to evaluate the total lifecycle costs of products and services.
Abstract:
Purpose. This project was designed to describe the association between wasting and CD4 cell counts in HIV-infected men, in order to better understand the role of wasting in the progression of HIV infection.

Methods. Baseline and prevalence data were collected from a cross-sectional survey of 278 HIV-infected men seen at the Houston Veterans Affairs Medical Center Special Medicine Clinic from June 1, 1991 to January 1, 1994. A follow-up study was conducted among those at risk to investigate the incidence of wasting and the association between wasting and low CD4 cell counts. Wasting was described by four methods: Z-scores for age-, sex-, and height-adjusted weight; sex- and age-adjusted mid-arm muscle circumference (MAMC); fat-free mass (FFM); and a ratio of extra-cellular mass (ECM) to body-cell mass (BCM) > 1.20. FFM, ECM, and BCM were estimated from bioelectrical impedance analysis; MAMC was calculated from triceps skinfold and mid-arm circumference. The relationship between wasting and covariates was examined with logistic regression in the cross-sectional study and with Poisson regression in the follow-up study. The association between death and wasting was examined with Cox regression.

Results. The prevalence of wasting ranged from 5% (weight and ECM:BCM) to almost 14% (MAMC and FFM) among the 278 men examined. The odds of wasting associated with a baseline CD4 cell count < 200 were significant for each method except weight, and ranged from 4.6 to 12.7. Use of antiviral therapy was significantly protective for wasting by MAMC, FFM and ECM:BCM (OR ≈ 0.2), whereas the need for antibacterial therapy was a risk factor (OR 3.1, 95% CI 1.1-8.7). The average incidence of wasting ranged from 4 to 16 per 100 person-years among the approximately 145 men followed for 160 person-years. Low CD4 cell count appeared to increase the risk of wasting, but statistical significance was not reached; the effect of the small sample size on the power to detect a significant association should be considered. Wasting by MAMC and FFM was significantly associated with death after adjusting for baseline serum albumin concentration and CD4 cell count.

Conclusions. Wasting by MAMC and FFM was strongly associated with baseline CD4 cell counts in both the prevalence and incidence studies and was a strong predictor of death. Of the two methods, MAMC is convenient, has available reference-population data, and may be the most appropriate for assessing the nutritional status of HIV-infected men.
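As a hedged sketch of the cross-sectional analysis described above, the code below fits a logistic regression of wasting status (by one definition, e.g. MAMC) on a baseline CD4 < 200 indicator and therapy covariates, and reports odds ratios. The data and column names are hypothetical and the effect sizes are invented for illustration.

# Sketch: logistic regression of wasting on CD4 < 200 and therapy use (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 278
df = pd.DataFrame({
    "cd4_lt_200": rng.integers(0, 2, n),
    "antiviral": rng.integers(0, 2, n),
    "antibacterial": rng.integers(0, 2, n),
})
logit = -3 + 1.8 * df.cd4_lt_200 - 1.5 * df.antiviral + 1.1 * df.antibacterial
df["wasting"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

fit = smf.logit("wasting ~ cd4_lt_200 + antiviral + antibacterial", data=df).fit(disp=0)
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals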
Abstract:
The tobacco-specific nitrosamine 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) is a known lung carcinogen. Since the cytokinesis-blocked micronucleus (CBMN) assay has been found to be extremely sensitive to NNK-induced genetic damage, it is a potentially important tool for predicting lung cancer risk. However, the association between lung cancer and NNK-induced genetic damage measured by the CBMN assay has not been rigorously examined.

This research develops a methodology to model the chromosomal changes under NNK-induced genetic damage in a logistic regression framework in order to predict the occurrence of lung cancer. Because these chromosomal changes were usually not observed for long, owing to laboratory cost and time, a resampling technique was applied to generate the Markov chain of normal and damaged cells for each individual. A joint likelihood was established between the resampled Markov chains and a logistic regression model that includes the transition probabilities of this chain as covariates. Maximum likelihood estimation was applied to carry out the statistical tests for comparison. The ability of this approach to increase the discriminating power to predict lung cancer was compared with that of a baseline "non-genetic" model.

Our method offers a way to understand the association between dynamic cell information and lung cancer. Our study indicates that the extent of DNA damage/non-damage measured with the CBMN assay provides critical information for public health studies of lung cancer risk. This novel statistical method can simultaneously estimate the process of DNA damage/non-damage and its relationship with lung cancer for each individual.
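The sketch below illustrates only one piece of this approach: estimating each subject's normal/damaged transition probabilities from an observed two-state cell sequence and using them as covariates in a logistic model for lung-cancer status. The resampling step and the joint likelihood described above are omitted, and all data and names are hypothetical.

# Sketch: transition probabilities of a two-state cell chain as logistic-regression covariates.
import numpy as np
import statsmodels.api as sm

def transition_probs(states):
    """states: sequence of 0 (normal) / 1 (damaged). Returns (p01, p10)."""
    states = np.asarray(states)
    pairs = list(zip(states[:-1], states[1:]))
    def p(a, b):
        from_a = [x for x in pairs if x[0] == a]
        return sum(1 for x in from_a if x[1] == b) / len(from_a) if from_a else 0.0
    return p(0, 1), p(1, 0)

rng = np.random.default_rng(3)
X, y = [], []
for subject in range(200):
    risk = rng.random()                                        # latent susceptibility
    chain = (rng.random(30) < 0.2 + 0.5 * risk).astype(int)    # toy cell-state sequence
    p01, p10 = transition_probs(chain)
    X.append([p01, p10])
    y.append(int(rng.random() < 0.1 + 0.6 * risk))             # toy lung-cancer status
fit = sm.Logit(np.array(y), sm.add_constant(np.array(X))).fit(disp=0)
print(fit.summary())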
Abstract:
My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses into an early stopping decision.

Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate a possible non-monotonic dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships.

Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To handle more efficiently the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to select simultaneously among possible treatment combinations involving multiple agents. Our design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated with a single hypothesis. During the trial, we use the current posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment while allocating substantially more patients to efficacious treatments. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be investigated further. The proposed Bayesian adaptive phase II screening design substantially outperformed the conventional complete factorial design, allocating more patients to better treatments while providing higher power to identify the best treatment at the end of the trial.

Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and decide whether an agent is promising enough to be sent to phase III trials. Interim monitoring is employed to stop the trial early for futility, to avoid assigning an unacceptable number of patients to inferior treatments.
We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug. To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of time-to-response data and handle the missing responses using a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies. We show that the proposed method reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate under different true response rates.
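The sketch below illustrates only the continuous-monitoring idea in its simplest conjugate form: a Beta posterior for the response rate is updated after each patient and the trial stops for futility when the posterior probability that the rate exceeds an uninteresting level drops too low. The piecewise exponential time-to-response model and the multiple imputation for missing late-onset responses described above are not shown; the prior, threshold, and rates are hypothetical.

# Sketch: Bayesian continuous futility monitoring with a Beta-Binomial posterior.
import numpy as np
from scipy import stats

def monitor(responses, p0=0.2, prior=(0.5, 0.5), futility_cut=0.05):
    a, b = prior
    for i, r in enumerate(responses, start=1):
        a += r
        b += 1 - r
        pr_promising = 1 - stats.beta.cdf(p0, a, b)   # Pr(response rate > p0 | data)
        if pr_promising < futility_cut:
            return f"stop for futility after patient {i}"
    return f"continue to full accrual; Pr(rate > {p0}) = {pr_promising:.2f}"

rng = np.random.default_rng(4)
print(monitor(rng.binomial(1, 0.10, size=40)))   # drug likely inactive
print(monitor(rng.binomial(1, 0.35, size=40)))   # drug likely active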
Abstract:
Left ventricular outflow tract (LVOT) defects are an important group of congenital heart defects (CHDs) because of their associated mortality and long-term complications. LVOT defects include aortic valve stenosis (AVS), coarctation of the aorta (CoA), and hypoplastic left heart syndrome (HLHS). Despite their clinical significance, their etiology is not completely understood. Even though the individual component phenotypes (AVS, CoA, and HLHS) may have different etiologies, they are often "lumped" together in epidemiological studies. Although "lumping" component phenotypes may improve the power to detect associations, it may also lead to ambiguous findings if these defects are etiologically distinct, owing to the potential for effect heterogeneity across component phenotypes.

This study had two aims: (1) to identify associations between various risk factors and both the component (i.e., split) and composite (i.e., lumped) LVOT phenotypes, and (2) to assess the effect heterogeneity of risk factors across component phenotypes of LVOT defects.

This study was a secondary data analysis. Primary data were obtained from the Texas Birth Defects Registry (TBDR), which uses an active surveillance method to ascertain birth defects in Texas. All cases of non-complex LVOT defects that met our inclusion criteria during the period 2002–2008 were included in the study. The comparison group included all unaffected live births for the same period (2002–2008). Data from vital statistics were used to evaluate associations. Statistical associations between selected risk factors and LVOT defects were determined by calculating crude and adjusted prevalence ratios using Poisson regression analysis; effect heterogeneity was evaluated using polytomous logistic regression.

There were a total of 2,353 cases of LVOT defects among 2,730,035 live births during the study period. After excluding "complex" cardiac cases and cases associated with syndromes (n = 168), 1,311 definite cases of non-complex LVOT defects remained for analysis. Among infant characteristics, males were at a significantly higher risk of LVOT defects than females. Among maternal characteristics, significant associations were seen with maternal age > 40 years (compared with maternal age 20–24 years) and maternal residence on the Texas-Mexico border (compared with non-border residence). Among birth characteristics, significant associations were seen with preterm birth and small-for-gestational-age birth.

When evaluating effect heterogeneity, the following variables had significantly different effects among the component LVOT defect phenotypes: infant sex, plurality, maternal age, maternal race/ethnicity, and Texas-Mexico border residence.

This study found significant associations between various demographic factors and LVOT defects. While many findings from this study were consistent with results from previous studies, we also identified new factors associated with LVOT defects. Additionally, this study was the first to assess effect heterogeneity across LVOT defect component phenotypes. These findings contribute to a growing body of literature on characteristics associated with LVOT defects.
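As a minimal sketch of the prevalence-ratio analysis described above, the code below fits a Poisson regression of a binary defect indicator on one risk factor with robust standard errors, so that the exponentiated coefficient can be read as a prevalence ratio. The data, covariate, and effect size are hypothetical; the actual study adjusted for additional covariates.

# Sketch: prevalence ratio via Poisson regression with robust standard errors (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 50_000
df = pd.DataFrame({"male": rng.integers(0, 2, n)})
base_prev = 0.0005
df["lvot"] = (rng.random(n) < base_prev * np.where(df.male == 1, 1.8, 1.0)).astype(int)

fit = smf.glm("lvot ~ male", data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params["male"]))          # estimated prevalence ratio (males vs. females)
print(np.exp(fit.conf_int().loc["male"]))  # 95% confidence interval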
Abstract:
The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans for natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are applied on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapidly increasing completeness of gene network and protein-protein interaction databases open avenues for enhancing the power to discover genes under natural selection. The aim of this thesis is to explore and develop normal-mixture-model-based methods that leverage gene network information to enhance the power of natural-selection target-gene discovery. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, and it improves our understanding of the biology underlying complex diseases and related natural-selection phenotypes.
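The sketch below illustrates the combination idea in its simplest form: per-gene posterior log odds from a two-component normal mixture fitted to a selection statistic are added to a network guilt-by-association score expressed on a log-odds scale, under a naïve Bayes (conditional independence) assumption. The mixture parameters, GBA scaling, and prior are hypothetical, not those estimated in the thesis.

# Sketch: naive Bayes combination of mixture-model evidence and network (GBA) evidence.
import numpy as np
from scipy import stats

def mixture_log_odds(z, pi1, mu0, sd0, mu1, sd1):
    """Posterior log odds that a gene belongs to the 'selected' mixture component."""
    f1 = pi1 * stats.norm.pdf(z, mu1, sd1)
    f0 = (1 - pi1) * stats.norm.pdf(z, mu0, sd0)
    return np.log(f1) - np.log(f0)

def combined_log_odds(z, gba_log_odds, **mix):
    # Naive Bayes: add the log-odds contributions of the two (assumed independent) sources.
    return mixture_log_odds(z, **mix) + gba_log_odds

mix = dict(pi1=0.05, mu0=0.0, sd0=1.0, mu1=2.5, sd1=1.0)
z = np.array([0.3, 1.8, 1.8])            # selection statistics for three genes
gba = np.array([0.0, 0.0, 1.5])          # network support on a log-odds scale
print(combined_log_odds(z, gba, **mix))  # moderate z plus network support ranks highest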
Abstract:
The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The factors varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combinations of continuous and discrete variables, and whether the values of the dependent variable were generated under model fit or lack of fit.

The study found that the Ĉg statistic was adequate in tests of significance for most situations. However, when testing data that deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping of the estimated probabilities into from 8 to 30 quantiles was studied, the deciles-of-risk approach was generally sufficient; subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons suggesting otherwise. Because it does not follow a χ² distribution, the statistic is not recommended for use in models containing only categorical variables with a limited number of covariate patterns.

The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to incorrect conclusions that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed and should be supplemented with further tests for the influence of individual observations. Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among covariates.

Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit. Neither method split observations with identical probabilities into different quantiles; approaches that create equal-size groups by separating ties should be avoided.
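For reference, a minimal sketch of the deciles-of-risk Hosmer-Lemeshow statistic discussed above: observations are grouped by fitted probability, observed and expected event counts are compared per group, and the result is referred to a chi-square distribution with g − 2 degrees of freedom. The toy data and grouping rule are illustrative only.

# Sketch: Hosmer-Lemeshow goodness-of-fit statistic with deciles of risk.
import numpy as np
from scipy import stats

def hosmer_lemeshow(y, p_hat, groups=10):
    order = np.argsort(p_hat)
    y, p_hat = np.asarray(y)[order], np.asarray(p_hat)[order]
    chunks = np.array_split(np.arange(len(y)), groups)
    C = 0.0
    for idx in chunks:
        n_g = len(idx)
        obs = y[idx].sum()
        pbar = p_hat[idx].mean()
        C += (obs - n_g * pbar) ** 2 / (n_g * pbar * (1 - pbar))
    return C, 1 - stats.chi2.cdf(C, groups - 2)

rng = np.random.default_rng(6)
x = rng.normal(size=500)
p_true = 1 / (1 + np.exp(-(-0.5 + x)))
y = (rng.random(500) < p_true).astype(int)
print(hosmer_lemeshow(y, p_true))   # in practice, use the model's estimated probabilities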
Abstract:
Multi-center clinical trials are very common in the development of new drugs and devices. One concern in such trials is the effect on the overall result of individual investigational sites enrolling small numbers of patients: can the presence of small centers cause an ineffective treatment to appear effective when the treatment-by-center interaction is not statistically significant?

In this research, simulations are used to study the effect that centers enrolling few patients may have on the analysis of clinical trial data. A multi-center clinical trial with 20 sites is simulated to investigate the effect of a new treatment in comparison with a placebo. Twelve of the 20 investigational sites are considered small, each enrolling fewer than four patients per treatment group. Three clinical trials are simulated, with sample sizes of 100, 170 and 300. The simulated data are generated with various characteristics, one in which the treatment should be considered effective and another in which it is not. Qualitative interactions are also produced within the small sites to further investigate the effect of small centers under various conditions.

Standard analysis-of-variance methods and the "sometimes-pool" testing procedure are applied to the simulated data. One model investigates treatment and center effects and the treatment-by-center interaction; another investigates the treatment effect alone. These analyses are used to determine the power to detect treatment-by-center interactions and the probability of type I error.

We find it is difficult to detect treatment-by-center interactions when only a few investigational sites enrolling a limited number of patients participate in the interaction. However, we find no increased risk of type I error in these situations. In a pooled analysis, when the treatment is not effective, the probability of finding a significant treatment effect in the absence of a significant treatment-by-center interaction is well within standard limits of type I error.
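As a rough illustration of one simulated trial and the interaction test described above, the sketch below generates data for 20 centers (12 small) and fits a two-way ANOVA with treatment, center, and treatment-by-center terms. The "sometimes-pool" decision rule and the 500-trial simulation loop are omitted; center sizes and effect sizes are hypothetical.

# Sketch: one simulated multi-center trial and the treatment-by-center interaction test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
sizes = [2] * 12 + [12] * 8            # patients per arm: 12 small centers, 8 larger ones
rows = []
for center, n_per_arm in enumerate(sizes):
    for arm in ("placebo", "active"):
        effect = 0.5 if arm == "active" else 0.0      # treatment effect in SD units
        for y in rng.normal(effect, 1.0, n_per_arm):
            rows.append({"center": center, "arm": arm, "y": y})
df = pd.DataFrame(rows)

fit = smf.ols("y ~ C(arm) * C(center)", data=df).fit()
print(anova_lm(fit, typ=2))            # tests treatment, center, and interaction effects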
State-led gentrification and urban entrepreneurialism in the Ciudad Autónoma de Buenos Aires
Abstract:
Drawing on the experience of the Ciudad Autónoma de Buenos Aires, this article reflects on gentrification as an urban development strategy promoted by various local governments in the region, in a context in which urban entrepreneurialism has spread. Within this framework, the Government of the Ciudad Autónoma de Buenos Aires has driven, since 1990, an intense process of urban transformation in the central area and in the south-eastern neighbourhoods, generating a particular kind of relationship with the private sector that promotes gentrification in the south-east of the city. As a consequence, urban conflicts have repeatedly emerged that oppose the urban development model promoted by the city government (GCABA) and claim the right to the city for the majority. Nevertheless, these conflicts remain highly fragmented and have so far failed to come together in a single urban social movement, which undermines their capacity to change the direction of local urban policies.
Abstract:
The isotopic compositions of dissolved CO2 and CH4 in sediments of the Nankai Trough indicate that CH4 is formed during early diagenesis by microbial reduction of CO2. At the shallowest sampled depths, the CO2 dissolved in the pore water is unusually enriched in 12C (δ13C = -35.2 per mil), indicating a contribution of CO2 from oxidation of CH4. The most intense microbiological activity appears to be confined to the uppermost 50 m of sediment, based on the relative lack of change in the isotopic compositions below this depth. Gas hydrate probably is not present at these localities (Sites 582, 583) because CH4 concentrations are insufficient to saturate the pore water with respect to gas hydrate stability.
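For reference, the δ13C values quoted above use the standard delta notation, expressing the 13C/12C ratio of a sample relative to a reference standard (conventionally PDB for carbon) in parts per thousand, so that more negative values indicate enrichment in the lighter isotope 12C:

\[
\delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1 \right) \times 1000 \ \text{per mil}
\]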
Abstract:
This research seeks out territorial linkages at the urban scale in the city of Córdoba in the face of a global process: productive restructuring. This concrete reality is approached through a case study, the small metalworking industry, which, because of its historical presence and its weight in the city's gross geographic product, constitutes one of the mainstays of the urban economy. The process of productive restructuring is examined through an analysis of the practices of the agents involved: on the one hand, those of the government, as the agent that regulates productive relations through public industrial policies within a capitalist accumulation regime; and, on the other, those of the small metalworking firms, as agents operating within production processes in a dialectic between global restructuring tendencies and an immediate environment of productive relations with other agents, production processes, regulatory frameworks and public policies. This dialectic materializes in the territory and gives rise to an industrial territorial configuration at the urban scale, which is explained not only by the practices of industrial agents but also by the interplay of other agents that produce territory. In the post-convertibility period beginning in 2002, the growth of productive activities as a whole imposed a complex dynamic of urban expansion and of relations among agents from different sectors, with differing interests and differing power to act in the field. This study advances our understanding of the dimensions of urban expansion that bear on the industrial territorial configuration. The territorial configuration, as an inseparable link between the practices of agents (a system of actions) and the territorial productive structure (a system of objects), changes constantly in the post-convertibility context owing to the intense dynamism of industrial activity, caught between historically constructed habitus and new practices that remain in permanent tension. This concrete reality shapes a socio-economic field that moves between old and new practices and configures a fragmented and dissociated territory. The study starts from a consideration of public industrial policies and the specific practices of small-industry agents, and then analyses the territorial configuration at the urban scale, where the dimensions of urban expansion interact with the specific practices of the agents analysed. The gradual shift in policies aimed at the industrial sector in the post-convertibility period shows an increasingly significant presence of SMEs. Nevertheless, the inertia inherited from the previous period reveals, in the management of industrial policies and in business practices, a path characterized by the distrust and uncertainty of small firms.