922 results for Power-to-Gas (P2G)
Abstract:
The behavior of composed Web services depends on the results of the invoked services; unexpected behavior of one of the invoked services can threaten the correct execution of an entire composition. This paper proposes an event-based approach to black-box testing of Web service compositions based on event sequence graphs, which are extended by facilities to deal not only with service behavior under regular circumstances (i.e., where cooperating services are working as expected) but also with their behavior in undesirable situations (i.e., where cooperating services are not working as expected). Furthermore, the approach can be used independently of artifacts (e.g., Business Process Execution Language) or type of composition (orchestration/choreography). A large case study, based on a commercial Web application, demonstrates the feasibility of the approach and analyzes its characteristics. Test generation and execution are supported by dedicated tools. In particular, the use of an enterprise service bus for test execution is noteworthy and differs from other approaches. The results of the case study suggest that the new approach can detect faults systematically and performs well even with complex and large compositions. Copyright © 2012 John Wiley & Sons, Ltd.
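The event-sequence-graph idea above can be sketched as a small directed graph whose entry-to-exit paths form complete event sequences (test cases), with fault-handling events modeling the undesirable situations. The graph, event names, and path enumeration below are an illustrative toy, not the paper's tooling:

```python
# Toy event sequence graph (ESG): nodes are observable events of a
# composition, edges are allowed successions. All event names are invented
# for illustration.
ESG = {
    "start": ["invokeA"],
    "invokeA": ["invokeB", "faultA"],   # regular path vs. undesirable situation
    "invokeB": ["reply"],
    "faultA": ["compensate"],
    "compensate": ["reply"],
    "reply": [],
}

def complete_event_sequences(graph, entry="start", exit_="reply"):
    """Enumerate all entry-to-exit paths (complete event sequences)."""
    paths, stack = [], [[entry]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == exit_:
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:  # keep paths simple for this sketch
                stack.append(path + [nxt])
    return paths

tests = complete_event_sequences(ESG)
# one regular sequence (via invokeB) and one fault-handling sequence
```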
Abstract:
The last decade has witnessed very fast development in microfabrication technologies. The increasing industrial applications of microfluidic systems call for more intensive and systematic knowledge of this newly emerging field. Especially for gaseous flow and heat transfer at the microscale, the applicability of conventional theories developed at the macroscale is not yet completely validated; this is mainly due to the scarce experimental data available in the literature for gas flows. The objective of this thesis is to investigate these unclear elements by analyzing forced convection for gaseous flows through microtubes and micro heat exchangers. Experimental tests have been performed with microtubes having various inner diameters, namely 750 µm, 510 µm and 170 µm, over a wide range of Reynolds numbers covering the laminar region, the transitional zone and also the onset region of the turbulent regime. The results show that conventional theory is able to predict the flow friction factor when flow compressibility is negligible and the effect of temperature-dependent fluid properties is insignificant. A double-layered microchannel heat exchanger has been designed in order to study experimentally the efficiency of a gas-to-gas micro heat exchanger. This microdevice contains 133 parallel microchannels machined into polished PEEK plates for both the hot side and the cold side. The microchannels are 200 µm high, 200 µm wide and 39.8 mm long. The microdevice was designed so that different partition-foil materials and thicknesses could be tested. Experimental tests have been carried out for five different partition foils, with various mass flow rates and flow configurations. The experimental results indicate that the thermal performance of the countercurrent and cross-flow micro heat exchanger can be strongly influenced by axial conduction in the partition foil separating the hot and cold gas flows.
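As a rough illustration of the "conventional theory" the friction-factor comparison rests on, the sketch below evaluates the classical laminar Darcy relation f = 64/Re for a circular tube; the mass flow rate and gas viscosity are invented example values, not data from the thesis:

```python
import math

def reynolds_number(mass_flow_kg_s: float, diameter_m: float,
                    viscosity_pa_s: float) -> float:
    """Re = 4*mdot / (pi * D * mu) for flow through a circular tube."""
    return 4.0 * mass_flow_kg_s / (math.pi * diameter_m * viscosity_pa_s)

def darcy_friction_factor_laminar(reynolds: float) -> float:
    """Classical Darcy friction factor for fully developed laminar pipe flow."""
    if reynolds <= 0:
        raise ValueError("Reynolds number must be positive")
    return 64.0 / reynolds

# Hypothetical example: a low gas flow through the 510 um microtube.
# The flow rate and viscosity are illustrative, not measured values.
re = reynolds_number(mass_flow_kg_s=1.0e-6, diameter_m=510e-6,
                     viscosity_pa_s=1.8e-5)   # ~139, well inside laminar range
f = darcy_friction_factor_laminar(re)
```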
Abstract:
The aim of this thesis is to develop an in-depth analysis of inductive power transfer (or wireless power transfer, WPT) along a metamaterial composed of cells arranged in a planar configuration, in order to deliver power to a receiver sliding over them. In this way, the problem of efficiency being strongly limited by the weak coupling between emitter and receiver can be overcome, and the transmission distance can be significantly increased. This study uses a circuit approach and the magnetoinductive wave (MIW) theory in order to explain, from both circuit-theory and experimental points of view, the behavior of the transmission coefficient and efficiency. Moreover, flat spiral resonators are used as metamaterial cells, as widely indicated in the literature for WPT metamaterials operating at MHz frequencies (5-30 MHz). Finally, this thesis presents a complete electrical characterization of multilayer and multiturn flat spiral resonators and, in particular, proposes a new approach to resistance calculation through finite element simulations, in order to account for all the high-frequency parasitic effects. Multilayer and multiturn flat spiral resonators are studied in order to decrease the operating frequency down to the kHz range, maintaining small external dimensions and allowing the metamaterials to be supplied by electronic power converters (resonant inverters).
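The circuit view of a metamaterial cell can be illustrated with the standard lumped RLC relations; the component values below are hypothetical stand-ins chosen only to show how larger L and C push the resonance from MHz toward the kHz range, as the thesis aims to do:

```python
import math

# Lumped-element sketch of a flat spiral resonator modeled as a series RLC
# circuit. Component values are invented for illustration; they are not
# taken from the thesis.

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """f0 = 1 / (2*pi*sqrt(L*C)) for an LC resonator."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

def quality_factor(inductance_h: float, resistance_ohm: float,
                   f0_hz: float) -> float:
    """Q = 2*pi*f0*L / R; high-frequency parasitics raise R and lower Q."""
    return 2.0 * math.pi * f0_hz * inductance_h / resistance_ohm

# A multiturn, multilayer spiral raises L and C, lowering f0 while keeping
# the footprint small -- the design goal described in the abstract.
f0 = resonant_frequency_hz(inductance_h=100e-6, capacitance_f=100e-9)  # ~50 kHz
q = quality_factor(inductance_h=100e-6, resistance_ohm=0.5, f0_hz=f0)
```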
Abstract:
The aim of this work is to evaluate the emissions of the main pollutants of a pellet stove by simulating realistic domestic use. All the operating phases of this system were considered: ignition, partial load, increase in power, and nominal load. In each phase, the quantity and type of several pollutants in the emissions were determined: the main pollutant gases (CO, NOx, SO2, H2S and volatile organic compounds (VOCs)), total dust (PM) and its content of polycyclic aromatic hydrocarbons (PAHs), regulated heavy metals (Ni, Cd, As and Pb), main soluble ions and Total Carbon (TC). Results show that emission factors of TSP, CO, and of the main determined pollutants (TC, Cd and PAHs) are highest during the ignition phase. In particular, this phase contributes predominantly to PAH emissions. During the power-increase phase, gas and particulate emissions do not differ appreciably from those at nominal load; nevertheless, PAH emission factors are higher than at steady state, though lower than in the ignition phase. Moreover, during non-steady-state phases, the PAH mixture is more toxic than during steady-state phases. In conclusion, this study provides deeper insight into the environmental impact of pellet stoves by showing how the different operating conditions can modify the emissions. These differ from certified data, which are based exclusively on measurements under steady-state conditions.
Abstract:
What can we learn about the way that folk storytelling operates for tellers and audience members by examining the telling of stories by characters within such narratives? I examine Maithil women’s folktales in which stories of women’s suffering at the hands of other women are first suppressed and later overheard by men who have the power to alleviate such suffering. Maithil women are pitted against one another in their pursuit of security and resources in the context of patrilineal formations. The solidarities such women nonetheless form—in part through sharing stories and keeping each other’s secrets—serve to mitigate their suffering and maintain a counter-system of ideational patterns and practices.
Abstract:
There are numerous statistical methods for quantitative trait linkage analysis in human studies. An ideal method would have high power to detect genetic loci contributing to the trait, would be robust to non-normality in the phenotype distribution, would be appropriate for general pedigrees, would allow the incorporation of environmental covariates, and would be appropriate in the presence of selective sampling. We recently described a general framework for quantitative trait linkage analysis, based on generalized estimating equations, for which many current methods are special cases. This procedure is appropriate for general pedigrees and easily accommodates environmental covariates. In this paper, we use computer simulations to investigate the power and robustness of a variety of linkage test statistics built upon our general framework. We also propose two novel test statistics that take account of higher moments of the phenotype distribution, in order to accommodate non-normality. These new linkage tests are shown to have high power and to be robust to non-normality. While we have not yet examined the performance of our procedures in the context of selective sampling via computer simulations, the proposed tests satisfy all of the other qualities of an ideal quantitative trait linkage analysis method.
Abstract:
Multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) are increasingly used for forensic purposes. Based on broad experience in clinical neuroimaging, post-mortem MSCT and MRI were performed in 57 forensic cases with the goal of evaluating the radiological methods for their usability in forensic head and brain examination. An experienced clinical radiologist evaluated the imaging data. The results were compared with the autopsy findings, which served as the gold standard with regard to common forensic neurotrauma findings such as skull fractures, soft tissue lesions of the scalp, various forms of intracranial hemorrhage and signs of increased brain pressure. The sensitivity of the imaging methods ranged from 100% (e.g., heat-induced alterations, intracranial gas) to zero (e.g., mediobasal impression marks as a sign of increased brain pressure, plaques jaunes). The agreement between MRI and CT was 69%. The radiological methods generally failed to detect lesions smaller than 3 mm, whereas they were generally satisfactory for the evaluation of intracranial hemorrhage. Due to its advanced 2D and 3D post-processing possibilities, CT in particular possessed certain advantages over autopsy with regard to forensic reconstruction. MRI showed forensically relevant findings not seen during autopsy in several cases. The partly limited sensitivity of imaging observed in this retrospective study was based on several factors: besides general technical limitations, it became apparent that clinical radiologists require a sound basic forensic background in order to detect specific signs. Focused teaching sessions will be essential to improve the outcome of future examinations. On the other hand, autopsy protocols should be further standardized to allow an exact comparison of imaging and autopsy data.
In consideration of these facts, MRI and CT have the potential to play an important role in future forensic neuropathological examination.
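For orientation, the two summary figures quoted above (per-finding sensitivity of imaging against autopsy as the gold standard, and raw MRI/CT agreement) are computed along these lines; the counts in the example are invented, not the study's data:

```python
# Sketch of the two summary statistics reported in the abstract. All counts
# below are hypothetical, chosen only to show the arithmetic.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of autopsy-confirmed findings also detected on imaging."""
    total = true_positives + false_negatives
    return true_positives / total if total else float("nan")

def raw_agreement(paired_calls: list[tuple[bool, bool]]) -> float:
    """Proportion of findings on which two modalities (e.g. MRI, CT) agree."""
    agree = sum(1 for a, b in paired_calls if a == b)
    return agree / len(paired_calls)

# Example: 9 of 13 autopsy-confirmed fractures also seen on imaging
s = sensitivity(true_positives=9, false_negatives=4)

# Example MRI/CT presence calls on the same four findings: 3 of 4 agree
calls = [(True, True), (True, False), (False, False), (True, True)]
a = raw_agreement(calls)
```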
Abstract:
This dissertation has three separate parts: the first part deals with general pedigree association testing incorporating continuous covariates; the second part deals with association tests under population stratification using conditional likelihood tests; the third part deals with genome-wide association studies based on the real rheumatoid arthritis (RA) disease data sets from Genetic Analysis Workshop 16 (GAW16) Problem 1. Many statistical tests have been developed to test linkage and association using either case-control status or phenotype covariates for family data structures, separately. Such univariate analyses may not use all the information available from family members in practical studies. Moreover, human complex diseases do not have a clear inheritance pattern; genes may interact or act independently. In Part I, the newly proposed approach, MPDT, focuses on using both the case-control information and the phenotype covariates. This approach can be applied to detect multiple marker effects. Based on two existing popular statistics in family studies, for case-control and quantitative traits respectively, the new approach can be used on simple family structures as well as general pedigree structures. The combined statistic is calculated from these two statistics; a permutation procedure is applied to assess the p-value, with Bonferroni adjustment for multiple markers. We use simulation studies to evaluate the type I error rates and the power of the proposed approach. Our results show that the combined test using both case-control information and phenotype covariates not only has the correct type I error rates but is also more powerful than other existing methods. For multiple marker interactions, our proposed method is also very powerful.
Selective genotyping is an economical strategy for detecting and mapping quantitative trait loci in the genetic dissection of complex disease. When the samples arise from different ethnic groups or an admixed population, all the existing selective genotyping methods may produce spurious associations due to different ancestry distributions. The problem can be more serious when the sample size is large, a general requirement for obtaining sufficient power to detect modest genetic effects for most complex traits. In Part II, I describe a useful strategy for selective genotyping when population stratification is present. Our procedure uses a principal-component-based approach to eliminate any effect of population stratification. We evaluate the performance of the procedure using both simulated data from an earlier study and HapMap data sets, in a variety of population admixture models generated from empirical data. There are one binary trait and two continuous traits in the rheumatoid arthritis dataset of Problem 1 in the Genetic Analysis Workshop 16 (GAW16): RA status, Anti-CCP and IgM. To allow multiple traits, we use the concept of multiple correlation to propose a set of SNP-level F statistics measuring the genetic association between multiple trait values and SNP-specific genotypic scores, and we obtain their null distributions. We then perform six genome-wide association analyses using novel one- and two-stage approaches based on single, double and triple traits. Combining all six analyses, we successfully validate the SNPs that have been identified as responsible for rheumatoid arthritis in the literature and detect additional disease susceptibility SNPs for follow-up studies. Except for chromosomes 13 and 18, every chromosome is found to harbour genetic regions susceptible for rheumatoid arthritis or related diseases, e.g., lupus erythematosus. This topic is discussed in Part III.
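A minimal, generic sketch of the permutation-with-Bonferroni scheme described for MPDT in Part I might look as follows; the difference-in-means statistic and toy phenotype data are stand-ins for illustration, not the actual MPDT statistic:

```python
import random

# Generic permutation test: shuffle case/control labels, recompute a marker
# statistic, and report an empirical p-value with Bonferroni correction for
# the number of markers tested. The statistic and data are illustrative only.

def permutation_p_value(values, labels, n_perm=2000, seed=0):
    """Empirical p-value of a difference-in-means statistic under label permutation."""
    rng = random.Random(seed)

    def stat(lab):
        cases = [v for v, l in zip(values, lab) if l == 1]
        ctrls = [v for v, l in zip(values, lab) if l == 0]
        return abs(sum(cases) / len(cases) - sum(ctrls) / len(ctrls))

    observed = stat(labels)
    hits = 0
    perm = list(labels)
    for _ in range(n_perm):
        rng.shuffle(perm)           # shuffling preserves case/control counts
        if stat(perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction

def bonferroni(p: float, n_markers: int) -> float:
    """Bonferroni adjustment for testing multiple markers."""
    return min(1.0, p * n_markers)

# Toy data: carriers of a marker allele (label 1) show a shifted phenotype
phenotype = [0, 0, 0, 1, 5, 5, 5, 6]
carrier = [0, 0, 0, 0, 1, 1, 1, 1]
p_raw = permutation_p_value(phenotype, carrier)
p_adj = bonferroni(p_raw, n_markers=10)
```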
Abstract:
This report is a PhD dissertation proposal to study the in-cylinder temperature and heat flux distributions within a gasoline turbocharged direct injection (GTDI) engine. Recent regulations requiring automotive manufacturers to increase the fuel efficiency of their vehicles have led to great technological achievements in internal combustion engines. These achievements have increased the power density of gasoline engines dramatically in the last two decades. Engine technologies such as variable valve timing (VVT), direct injection (DI), and turbocharging have significantly improved engine power-to-weight and power-to-displacement ratios. A popular trend for increasing vehicle fuel economy in recent years has been to downsize the engine and add VVT, DI, and turbocharging technologies so that a lighter, more efficient engine can replace a larger, heavier one. With the added power density, thermal management of the engine becomes a more important issue. Engine components are being pushed to their temperature limits. Therefore, it has become increasingly important to have a greater understanding of the parameters that affect in-cylinder temperatures and heat transfer. The proposed research will analyze the effects of engine speed, load, relative air-fuel ratio (AFR), and exhaust gas recirculation (EGR) on both in-cylinder and global temperature and heat transfer distributions. Additionally, the effect of knocking combustion and fuel spray impingement will be investigated. The proposed research will be conducted on a 3.5 L six-cylinder GTDI engine. The research engine will be instrumented with a large number of sensors to measure in-cylinder temperatures and pressures, as well as the temperature, pressure, and flow rates of energy streams into and out of the engine. One of the goals of this research is to create a model that will predict the energy distribution to the crankshaft, exhaust, and cooling system based on normalized values for engine speed, load, AFR, and EGR.
The results could be used to aid the engine design phase for turbocharger and cooling system sizing. Additionally, the data collected can be used for validation of engine simulation models, since in-cylinder temperature and heat flux data are not readily available in the literature.
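A toy sketch of the energy-balance bookkeeping such a predictive model would perform, splitting fuel chemical power among brake work, exhaust enthalpy, coolant heat, and a residual; all numbers are invented for illustration, not measurements from the proposed engine:

```python
# Illustrative first-law energy split for an engine operating point.
# Values are hypothetical example inputs.

def fuel_power_kw(mdot_fuel_kg_s: float, lhv_mj_kg: float) -> float:
    """Chemical power released by the fuel flow (lower-heating-value basis)."""
    return mdot_fuel_kg_s * lhv_mj_kg * 1000.0  # MJ/kg * kg/s -> kW

def energy_split(fuel_kw, brake_kw, exhaust_kw, coolant_kw):
    """Fraction of fuel energy in each stream; the remainder is the residual
    (friction, radiation, unburned fuel)."""
    fractions = {
        "brake": brake_kw / fuel_kw,
        "exhaust": exhaust_kw / fuel_kw,
        "coolant": coolant_kw / fuel_kw,
    }
    fractions["residual"] = 1.0 - sum(fractions.values())
    return fractions

# Example operating point: 2 g/s of gasoline-like fuel (LHV ~44 MJ/kg)
p_fuel = fuel_power_kw(mdot_fuel_kg_s=0.002, lhv_mj_kg=44.0)  # 88 kW
split = energy_split(p_fuel, brake_kw=30.0, exhaust_kw=27.0, coolant_kw=22.0)
```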
Abstract:
The considerable search for synergistic agents in cancer research is motivated by the therapeutic benefits achieved by combining anti-cancer agents. Synergistic agents make it possible to reduce dosage while maintaining or enhancing a desired effect. Other favorable outcomes of synergistic agents include reduction in toxicity and minimizing or delaying drug resistance. Dose-response assessment and drug-drug interaction analysis play an important part in the drug discovery process; however, such analyses are often poorly done. This dissertation is an effort to notably improve dose-response assessment and drug-drug interaction analysis. The most commonly used method in published analyses is the Median-Effect Principle/Combination Index method (Chou and Talalay, 1984). The Median-Effect Principle/Combination Index method leads to inefficiency by ignoring important sources of variation inherent in dose-response data and discarding data points that do not fit the Median-Effect Principle. Previous work has shown that the conventional method yields a high rate of false positives (Boik, Boik, Newman, 2008; Hennessey, Rosner, Bast, Chen, 2010) and, in some cases, low power to detect synergy. There is a great need for improving the current methodology. We developed a Bayesian framework for dose-response modeling and drug-drug interaction analysis. First, we developed a hierarchical meta-regression dose-response model that accounts for various sources of variation and uncertainty and allows one to incorporate knowledge from prior studies into the current analysis, thus offering a more efficient and reliable inference. Second, in the case that parametric dose-response models do not fit the data, we developed a practical and flexible nonparametric regression method for meta-analysis of independently repeated dose-response experiments.
Third, and lastly, we developed a method, based on Loewe additivity, that allows one to quantitatively assess the interaction between two agents combined at a fixed dose ratio. The proposed method provides a comprehensive and honest account of uncertainty in drug-interaction assessment. Extensive simulation studies show that the novel methodology improves the screening process for effective/synergistic agents and reduces the incidence of type I error. We consider an ovarian cancer cell line study that investigates the combined effect of DNA methylation inhibitors and histone deacetylation inhibitors in human ovarian cancer cell lines. The hypothesis is that the combination of DNA methylation inhibitors and histone deacetylation inhibitors will enhance antiproliferative activity in human ovarian cancer cell lines compared to treatment with each inhibitor alone. By applying the proposed Bayesian methodology, in vitro synergy was declared for the DNA methylation inhibitor 5-AZA-2'-deoxycytidine combined with a histone deacetylation inhibitor (suberoylanilide hydroxamic acid or trichostatin A) in the cell lines HEY and SKOV3. This suggests potential new epigenetic therapies for growth inhibition of ovarian cancer cells.
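The Loewe-additivity baseline underlying the fixed-ratio assessment can be written down directly: for a combination (d1, d2) producing some effect, with D1 and D2 the doses of each agent alone producing that same effect, the interaction index is d1/D1 + d2/D2. A minimal sketch with invented doses:

```python
# Loewe additivity interaction index. All dose values below are hypothetical,
# for illustration only; the dissertation's method embeds this quantity in a
# Bayesian framework rather than computing it pointwise.

def loewe_interaction_index(d1: float, d2: float,
                            D1_alone: float, D2_alone: float) -> float:
    """Interaction index under Loewe additivity.

    < 1 suggests synergy, ~1 additivity, > 1 antagonism.
    d1, d2: doses in the combination producing a given effect.
    D1_alone, D2_alone: doses of each agent alone producing the same effect.
    """
    return d1 / D1_alone + d2 / D2_alone

# Example: a quarter of each single-agent equi-effective dose suffices in
# combination -> index 0.5, consistent with synergy
ii = loewe_interaction_index(d1=2.0, d2=5.0, D1_alone=8.0, D2_alone=20.0)
```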
Abstract:
The Opalinus Clay in Northern Switzerland has been identified as a potential host rock formation for the disposal of radioactive waste. Comprehensive understanding of gas transport processes through this low-permeability formation is a key issue in the assessment of repository performance. Field investigations and laboratory experiments suggest an intrinsic permeability of the Opalinus Clay on the order of 10^-20 to 10^-21 m^2 and a moderate anisotropy ratio < 10. Porosity depends on clay content and burial depth; values of approximately 0.12 are reported for the region of interest. Porosimetry indicates that about 10-30% of voids can be classed as macropores, corresponding to an equivalent pore radius > 25 nm. The determined entry pressures are in the range of 0.4-10 MPa and exhibit a marked dependence on intrinsic permeability. Both in situ gas tests and gas permeameter tests on drillcores demonstrate that gas transport through the rock is accompanied by porewater displacement, suggesting that classical flow concepts of immiscible displacement in porous media can be applied when the gas entry pressure (i.e. capillary threshold pressure) is less than the minimum principal stress acting within the rock. Essentially, the pore space accessible to gas flow is restricted to the network of connected macropores, which implies a very low degree of desaturation of the rock during the gas imbibition process. At elevated gas pressures (i.e. when gas pressure approaches the level of total stress acting on the rock body), evidence was seen for dilatancy-controlled gas transport mechanisms. Further field experiments were aimed at creating extended tensile fractures with high fracture transmissivity (hydro- or gas-fracs). The test results lead to the conclusion that gas fracturing can be largely ruled out as a risk for post-closure repository performance.
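The reported link between entry pressure and the macropore radius threshold can be sketched with the Young-Laplace (Washburn) capillary relation; assuming a gas-water interfacial tension of roughly 0.072 N/m and a perfectly wetting rock (both assumptions for illustration, not values from the study), the 25 nm radius quoted above maps to an entry pressure inside the reported 0.4-10 MPa range:

```python
import math

# Young-Laplace / Washburn capillary entry pressure for a cylindrical pore:
# Pc = 2 * gamma * cos(theta) / r. Interfacial tension and wetting angle
# below are illustrative assumptions.

def entry_pressure_pa(gamma_n_m: float, radius_m: float,
                      contact_angle_rad: float = 0.0) -> float:
    """Capillary entry pressure (Pa) for a pore of the given radius."""
    return 2.0 * gamma_n_m * math.cos(contact_angle_rad) / radius_m

# Macropore threshold radius of 25 nm with gas-water tension ~0.072 N/m
pc = entry_pressure_pa(gamma_n_m=0.072, radius_m=25e-9)  # ~5.8 MPa
```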
Abstract:
Engineers are confronted with the energy demand of active medical implants in patients with increasing life expectancy. Scavenging energy from the patient’s body is envisioned as an alternative to conventional power sources. Joining in this effort towards human-powered implants, we propose an innovative concept that combines the deformation of an artery resulting from the arterial pressure pulse with a transduction mechanism based on magneto-hydrodynamics. To overcome certain limitations of a preliminary analytical study on this topic, we demonstrate here a more accurate model of our generator by implementing a three-dimensional multiphysics finite element method (FEM) simulation combining solid mechanics, fluid mechanics, electric and magnetic fields as well as the corresponding couplings. This simulation is used to optimize the generator with respect to several design parameters. A first validation is obtained by comparing the results of the FEM simulation with those of the analytical approach adopted in our previous study. With an expected overall conversion efficiency of 20% and an average output power of 30 μW, our generator outperforms previous devices based on arterial wall deformation by more than two orders of magnitude. Most importantly, our generator provides sufficient power to supply a cardiac pacemaker.
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involved selecting isolated unrelated individuals, while four involved selecting parent-child trios (TDT). All nine tests were found to be able to identify disequilibrium at the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency were found to increase the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased the power to detect disequilibrium. Discordant sampling was used for several of the tests. It was found that the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests maximized to less than one.
For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1-heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, resulting in the marker being uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (Transmission Disequilibrium Test)-based tests were not liable to any increase in error rates. For all sample ascertainment costs, linkage disequilibrium tests for recent mutations (<100 generations) were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power, which work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties.
In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or the cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller.
Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.
In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
Abstract:
Complex diseases, such as cancer, are caused by various genetic and environmental factors and their interactions. Joint analysis of these factors and their interactions would increase the power to detect risk factors but is statistically challenging. Bayesian generalized linear models using Student-t prior distributions on the coefficients are a novel method for simultaneously analyzing genetic factors, environmental factors, and interactions. I performed simulation studies using three different disease models and demonstrated that the variable selection performance of Bayesian generalized linear models is comparable to that of Bayesian stochastic search variable selection, an improved method for variable selection compared with standard methods. I further evaluated the variable selection performance of Bayesian generalized linear models using different numbers of candidate covariates and different sample sizes, and provided a guideline for the sample size required to achieve high variable selection power using Bayesian generalized linear models, considering different scales of the number of candidate covariates. Polymorphisms in folate metabolism genes and nutritional factors have been previously associated with lung cancer risk. In this study, I simultaneously analyzed 115 tag SNPs in folate metabolism genes, 14 nutritional factors, and all possible gene-nutrient interactions from 1239 lung cancer cases and 1692 controls using Bayesian generalized linear models stratified by never, former, and current smoking status. SNPs in MTRR were significantly associated with lung cancer risk across never, former, and current smokers. In never smokers, three SNPs in TYMS and three gene-nutrient interactions, including an interaction between SHMT1 and vitamin B12, an interaction between MTRR and total fat intake, and an interaction between MTR and alcohol use, were also identified as associated with lung cancer risk. These lung cancer risk factors are worthy of further investigation.