988 results for Langmuir binary models
Abstract:
In the past few decades, detailed observations of radio and X-ray emission from massive binary systems have revealed a whole new physics present in such systems. Both thermal and non-thermal components of this emission indicate that most of the radiation at these bands originates in shocks. O- and B-type stars and Wolf-Rayet (WR) stars present supersonic and massive winds that, when colliding, emit largely due to free-free radiation. The non-thermal radio and X-ray emissions are due to synchrotron and inverse Compton processes, respectively. In this case, magnetic fields are expected to play an important role in the emission distribution. In the past few years, the modelling of the free-free and synchrotron emissions from massive binary systems has been based on purely hydrodynamical simulations and ad hoc assumptions regarding the distribution of magnetic energy and the field geometry. In this work we provide the first full magnetohydrodynamic numerical simulations of wind-wind collision in massive binary systems. We study the free-free emission, characterizing its dependence on the stellar and orbital parameters. We also study self-consistently the evolution of the magnetic field at the shock region, obtaining the synchrotron energy distribution integrated along different lines of sight. We show that the magnetic field in the shocks is larger than that obtained when proportionality between B and the plasma density is assumed. Also, we show that the role of the synchrotron emission relative to the total radio emission has been underestimated.
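For orientation, in our own notation (not taken from the paper): the ad hoc prescription the authors compare against ties the field strength directly to the local plasma density, whereas a full MHD treatment evolves the field through the ideal induction equation and is free to depart from that scaling at the shocks:

\[
B \propto \rho \quad \text{(assumed scaling)} \qquad \text{vs.} \qquad \frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left(\mathbf{v} \times \mathbf{B}\right) \quad \text{(ideal MHD induction equation)}.
\]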
Abstract:
Background: In the analysis of the effects of cell treatment such as drug dosing, identifying changes in gene network structure between normal and treated cells is a key task. A possible way to identify the changes is to compare the structures of networks estimated separately from data on normal and treated cells. However, this approach usually fails to estimate accurate gene networks because of the limited length of time series data and measurement noise. Thus, approaches that identify changes in regulation by using time series data from both conditions in an efficient manner are needed. Methods: We propose a new statistical approach that is based on the state space representation of the vector autoregressive model and estimates gene networks under two different conditions in order to identify changes in regulation between the conditions. In the mathematical model of our approach, hidden binary variables are newly introduced to indicate the presence of each regulation under each condition. The use of the hidden binary variables enables efficient data usage: data from both conditions are used for regulations that exist in common, while only the corresponding data are applied for condition-specific regulations. Also, the similarity of the networks under the two conditions is automatically taken into account through the design of the potential function for the hidden binary variables. For the estimation of the hidden binary variables, we derive a new variational annealing method that searches for the configuration of the binary variables maximizing the marginal likelihood. Results: For the performance evaluation, we use time series data from two topologically similar synthetic networks, and confirm that our proposed approach estimates regulations that exist in common, as well as changes in regulation, with higher coverage and precision than other existing approaches in almost all the experimental settings. For a real-data application, our proposed approach is applied to time series data from normal human lung cells and human lung cells treated by stimulating EGF receptors and dosing the anticancer drug Gefitinib. In the treated lung cells, a cancer cell condition is simulated by the stimulation of EGF receptors, but the effect should be counteracted by the selective inhibition of EGF receptors by Gefitinib. However, the gene expression profiles actually differ between the conditions, and the genes related to the identified changes are considered possible off-targets of Gefitinib. Conclusions: On synthetically generated time series data, our proposed approach identifies changes in regulation more accurately than existing methods. By applying the proposed approach to time series data on normal and treated human lung cells, candidate off-target genes of Gefitinib are found. According to published clinical information, one of the genes may be related to a factor in interstitial pneumonia, which is known as a side effect of Gefitinib.
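A minimal illustrative sketch of the modeling idea (not the paper's estimator; all names, sizes, and values here are our own assumptions): a VAR(1) gene network in which hidden binary variables mask which regulations exist, with a treated condition sharing most regulations but flipping one.

```python
import numpy as np

rng = np.random.default_rng(0)
p, T = 5, 50                      # number of genes, time points

A = rng.normal(0.0, 0.3, (p, p))  # candidate regulatory strengths
G = rng.random((p, p)) < 0.3      # hidden binary variables: regulation present?

def simulate_var(A, G, T, noise=0.1):
    """Simulate x_t = (G * A) x_{t-1} + e_t with Gaussian noise e_t."""
    x = np.zeros((T, A.shape[0]))
    x[0] = rng.normal(size=A.shape[0])
    for t in range(1, T):
        x[t] = (G * A) @ x[t - 1] + noise * rng.normal(size=A.shape[0])
    return x

x_normal = simulate_var(A, G, T)

# The treated condition keeps most regulations but flips one entry of G,
# mimicking a condition-specific change in regulation to be detected.
G_treated = G.copy()
G_treated[0, 1] = ~G_treated[0, 1]
x_treated = simulate_var(A, G_treated, T)
```

The estimation task described in the abstract is then to recover G and G_treated jointly from x_normal and x_treated, pooling data where the two masks agree.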
Abstract:
Aims. Our goal is to study the circumstellar environment associated with each component of the wide intermediate-mass pre-main-sequence binary system PDS 144 using broadband polarimetry. Methods. We present near-infrared (NIR) linear polarimetric observations of PDS 144 gathered with the IAGPOL imaging polarimeter along with the CamIV infrared camera at the Observatorio do Pico dos Dias (OPD). In addition, we re-analyzed OPD archival optical polarization data to separate the binary components and to estimate the interstellar polarization using foreground stars. Results. After subtracting the interstellar component, we found that both stars of the binary system are intrinsically polarized. The polarization vectors at optical and NIR bands of both components are aligned with the local magnetic field and the jet axis. These findings indicate an interplay between the interstellar magnetic field and the formation of the binary system. We also found that PDS 144N is less polarized than its southern companion in the optical; in the NIR, however, PDS 144N is more polarized. Our polarization data can only be explained by high inclinations (i ≳ 80°) for the disks of both members. In particular, comparisons of our NIR data with young stellar object disk models suggest predominantly small grains in the circumstellar environment of PDS 144N. In spite of the different grain types in each component, the infrared spectral indices indicate a coeval system. We also found evidence of coplanarity between the disks.
Abstract:
Model diagnostics is an integral part of model determination, and an important part of model diagnostics is residual analysis. We adapt and implement residuals considered in the literature for the probit, logistic, and skew-probit links under binary regression. New latent residuals for the skew-probit link are proposed here. Using the residuals proposed here, we detected the presence of outliers under different models in a simulated dataset and in a real medical dataset.
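As a toy illustration of the kind of residuals discussed (logistic link only; the latent residuals proposed in the paper for the skew-probit link are not reproduced here), a minimal sketch with invented coefficients and simulated data:

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
beta_hat = np.array([-0.5, 1.2])            # hypothetical fitted coefficients
y = rng.binomial(1, expit(X @ beta_hat))    # simulated binary responses

p = expit(X @ beta_hat)                     # fitted probabilities
pearson = (y - p) / np.sqrt(p * (1 - p))
deviance = np.sign(y - p) * np.sqrt(
    -2 * (y * np.log(p) + (1 - y) * np.log1p(-p))
)
# Unusually large |residuals| flag candidate outliers, as in the paper's
# simulated and medical datasets.
```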
Abstract:
This study shows the incorporation of ibuprofen, an anti-inflammatory drug, in Langmuir monolayers used as cell membrane models. Significant effects were observed for dipalmitoyl phosphatidyl choline (DPPC) monolayers, with relevant changes in the elasticity of the monolayer. Dipalmitoyl phosphatidyl glycerol (DPPG) monolayers were affected by small concentrations of ibuprofen, from 1 to 5 mol%. For both types of monolayer, ibuprofen could penetrate into the hydrophobic part of the film, as confirmed with polarization-modulated infrared reflection–absorption spectroscopy (PM-IRRAS). Brewster angle microscopy (BAM) images showed that ibuprofen prevents the formation of large domains of DPPC. The pharmacological action should occur primarily through penetration of ibuprofen via the electrically neutral phospholipid headgroups of the membrane.
Abstract:
In this work, the influence of copolymers on the interfacial tension sigma of homopolymer blends was investigated for the system poly(ethylene oxide) / poly(propylene oxide) (PEO / PPO). The additives were triblock copolymers EO-block-PO-block-EO and PO-block-EO-block-PO, diblock copolymers S-block-EO, and random copolymers EO-ran-PO. The additives were chosen such that pairs of additives differ in exactly one property (composition, chain length, block arrangement) while being comparable in all other parameters. The interfacial tension was determined experimentally as a function of temperature using the pendant-drop method, with the denser polymer, PEO, forming the drop phase and PPO the matrix phase. For the measurements on the ternary systems, the additive was added at various concentrations to either one or both homopolymer phases. The concentration dependence of sigma can be described well both by the model of Tang and Huang and by a Langmuir-analogous approach. To investigate the relationship between sigma and the phase behavior, cloud-point curves were recorded at 100 °C for some of the ternary systems. The comparison between the phase diagrams and the corresponding values of sigma indicates that an additive reduces sigma effectively precisely when it is added to a homopolymer with which it is only partially miscible, since the driving force for accumulation at the interface is then particularly pronounced. The previously known phenomenon that the measured interfacial tension can depend on which phase initially contains the additive was investigated in detail. It is assumed that the system does not reach thermodynamic equilibrium in every case and that the observed effect is due to the attainment of stationary states. This behavior can be described by a model that incorporates the viscosity ratio of the homopolymers and the partition coefficient of the copolymer between the homopolymer phases. From solubility parameters, the binary interaction parameter chi PEO/PPO = 0.18 was estimated, and from it the theoretical values of sigma between PEO and PPO were calculated according to the models of Roe and of Helfand and Tagami. Comparison with the experimental data for the binary system shows that both approaches yield sigma values of the same order of magnitude as the experimental data, with Roe's approach proving particularly suitable; the temperature dependence of the interfacial tension, however, is reproduced incorrectly by both approaches. Using the model of Helfand and Tagami, an interfacial thickness of 7.9 Å and the density profile of the interface were calculated. For the copolymers EO92PO56EO92 and S9EO22 (the indices give the number of monomer units), the interfacial excess concentrations, the critical micelle concentration, and the area available to an additive molecule at the interface could be determined. Comparing different copolymers with respect to their ability to reduce sigma effectively shows that, for triblock copolymers, the arrangement of the blocks plays a subordinate role compared to the composition. With increasing chain length, the effectiveness as a compatibilizer increases for both block copolymers and random copolymers.
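One common form of the Langmuir-analogous description mentioned above is (our notation; the thesis may parameterize it differently):

\[
\sigma(c) \;=\; \sigma_0 \;-\; \left(\sigma_0 - \sigma_\infty\right)\frac{Kc}{1+Kc},
\]

where \(\sigma_0\) is the interfacial tension of the pure PEO/PPO blend, \(\sigma_\infty\) the saturation value at high copolymer concentration \(c\), and \(K\) an adsorption constant, so that the interface fills like a Langmuir adsorption isotherm as copolymer is added.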
Abstract:
This work focused mainly on two aspects of the kinetics of phase separation in binary mixtures. In the first part, we studied the interplay of hydrodynamics and the phase separation of binary mixtures. A considerably flat container (a laterally extended geometry) with an aspect ratio of 14:1 (diameter:height) was chosen, so that any hydrodynamic instabilities, if they arose, could be tracked. Two binary mixtures were studied. One was a mixture of methanol and hexane, doped with 5% ethanol, which phase separated under cooling. The second was a mixture of butoxyethanol and water, doped with 2% decane, which phase separated under heating. The dopants were added to bring the phase transition temperature down to around room temperature.

Although much work has already been done on classical hydrodynamic instabilities, not much has been done on the coupling between phase separation and hydrodynamic instabilities. This work aimed at understanding the influence of phase separation in initiating a hydrodynamic instability, and vice versa. Another aim was to understand the influence of the applied temperature protocol on the emergence of patterns characteristic of hydrodynamic instabilities.

On slowly and continuously cooling the system at specific cooling rates, patterns were observed in the first mixture at the start of phase separation. They resembled the patterns observed in the classical Rayleigh-Bénard instability, which arises when a liquid is continuously heated from below. To suppress this classical convection, the cooling setup was tuned such that the lower side of the sample always remained cooler by a few millikelvins relative to the top. We found that the nature of the patterns changed with different cooling rates, with stable patterns appearing for a specific cooling rate (1 K/h). On the basis of the cooling protocol, we estimated a modified Rayleigh number for our system and found it to be near the critical value for instability for cooling rates between 0.5 K/h and 1 K/h. This is consistent with our experimental findings.

The origin of the patterns, despite the lower side being relatively colder than the top, points to two possible causes. (1) During phase separation, droplets of either phase are formed, which releases latent heat. Our microcalorimetry measurements show that the rise in temperature during the first phase separation is of the order of 10-20 millikelvins, which in some cases is enough to reverse the applied temperature bias. Thus phase separation in itself initiates a hydrodynamic instability. (2) The second cause comes from the cooling protocol itself. The sample was cooled from above and below. At sufficiently high cooling rates, there are situations where the interior of the sample is relatively hotter than both the top and the bottom. This is sufficient to create an instability within the cell. Our experiments at higher cooling rates (5 K/h and above) show complex patterns, which hints that there is considerable convection even before phase separation occurs. In fact, theoretical work done by Dr. Hayase shows that patterns can arise in a system without latent heat under symmetrical cooling from top and bottom. The simulations also show that the patterns do not span the entire height of the sample cell. This is again consistent with the cell sizes measured in our experiment.

The second mixture also showed patterns at specific heating rates when it was continuously heated, inducing phase separation. In this case, though, the sample was turbid for a long time before patterns appeared, and a meniscus had most probably formed before they emerged. We attribute the patterns in this case to Marangoni convection, which is present in systems with an interface, where local differences in surface tension give rise to an instability. Our estimates of the Rayleigh number are also significantly lower than what a Rayleigh-Bénard-type instability requires.

In the first part of the work, therefore, we identify two different kinds of hydrodynamic instabilities in two different mixtures. Both are observed during, or after, the first phase separation. Our patterns compare with the classical convection patterns, but here they originate from phase separation and the cooling protocol.

In the second part of the work, we focused on the kinetics of phase separation in a polymer solution (polystyrene and methylcyclohexane) that is cooled continuously far down into the two-phase region. Oscillations in turbidity, reflecting material exchange between the phases, are seen. Three processes contribute to the phase separation: nucleation of droplets, their growth and coalescence, and their subsequent sedimentation. Experiments on low-molecular-weight binary mixtures had led to models of oscillation [43] that considered sedimentation time scales much faster than the time scales of nucleation and growth. The size and shape of the sample therefore did not matter in such situations; the oscillations in turbidity were volume-dominated. The present work aimed at understanding the influence of sedimentation time scales in polymer mixtures. Three sample heights with the same composition were studied side by side. We found that the periods increased with the sample height, showing that sedimentation time determines the period of the oscillations in the polymer solutions. We experimented with different cooling rates and different compositions of the mixture, and found that the periods are still determined by the sample height, and therefore by the sedimentation time.

We also see that turbidity emerges in two ways: either from the interface, or throughout the sample. We suggest that oscillations starting from the interface are due to satellite droplets that are formed upon droplet coalescence at the interface. These satellite droplets are then advected to the top of the sample, where they grow, coalesce, and sediment. This type of oscillation would not require the system to pass the energy barrier for homogeneous nucleation throughout the sample, and it works best in samples where the droplets can be effectively advected throughout. In our experiments, we see more interface-dominated oscillations in the smaller cells and at lower cooling rates, where droplet advection is favourable. In larger samples and at higher cooling rates, we mostly see the whole sample become turbid homogeneously, which requires the system to pass the energy barrier for homogeneous nucleation.

Oscillations, in principle, occur because the system needs to pass an energy barrier for nucleation. The height of the barrier decreases with increasing supersaturation, which in turn comes from the applied temperature ramp. This gives rise to a period in which the system is clear, in between the turbid periods. At certain specific cooling rates, the system can follow a path such that the start of a turbid period coincides with the vanishing of the last turbid period, thus eliminating the clear periods. This means the suppression of oscillations altogether. Indeed, we experimentally present a case where, at a certain cooling rate, the oscillations vanish.

Thus we find through this work that the kinetics of phase separation in a polymer solution differs from that in a low-molecular-weight system: sedimentation time scales become relevant, and therefore so do the shape and size of the sample. The role of the interface in initiating turbid periods also becomes much more prominent in this system than in low-molecular-weight mixtures.

In summary, some fundamental properties of the kinetics of phase separation in binary mixtures were studied. While the first part of the work described the close interplay of the first phase separation with hydrodynamic instabilities, the second part investigated the nature and determining factors of the oscillations when the system was cooled deep into the two-phase region. Both cases show how the geometry of the cell can affect the kinetics of phase separation. This study contributes to a further fundamental understanding of the factors governing the kinetics of phase separation, and of what can be controlled and tuned in practical cases.
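For reference, the classical Rayleigh number on which the thesis's "modified" version builds is (standard definition, not quoted from the thesis):

\[
\mathrm{Ra} \;=\; \frac{g\,\alpha\,\Delta T\, h^{3}}{\nu\,\kappa},
\]

with \(g\) the gravitational acceleration, \(\alpha\) the thermal expansion coefficient, \(\Delta T\) the temperature difference across a fluid layer of height \(h\), \(\nu\) the kinematic viscosity, and \(\kappa\) the thermal diffusivity; convection sets in once Ra exceeds a critical value of order \(10^{3}\).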
Abstract:
Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). To this end, we describe an ACE model for binary family data and then introduce an approach to fitting the model to case-control family data. The structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. Our likelihood-based approach to fitting involves conditioning on the proband's disease status, as well as setting the prevalence equal to a pre-specified value, which can be estimated from the data themselves if necessary. Simulation experiments suggest that our approach yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly made assumptions hold. These include the usual assumptions for the classic ACE and liability-threshold models, assumptions about shared family environment for relative pairs, and assumptions about the case-control family sampling, including single ascertainment. When our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to those from previous analyses of twin data.
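In schematic form (standard liability-threshold ACE notation, supplied here for orientation rather than quoted from the paper), the model decomposes each individual's latent liability and thresholds it:

\[
L = A + C + E,\qquad Y = \mathbf{1}\{L > \tau\},\qquad h^2 = \frac{\sigma_A^2}{\sigma_A^2+\sigma_C^2+\sigma_E^2},
\]

where \(A \sim N(0,\sigma_A^2)\), \(C \sim N(0,\sigma_C^2)\), and \(E \sim N(0,\sigma_E^2)\) are independent, \(Y\) is disease status, and the threshold \(\tau\) is tied to the pre-specified prevalence via \(P(Y=1) = 1-\Phi(\tau)\) when \(L\) is standardized.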
Abstract:
Suppose that we are interested in establishing simple but reliable rules for predicting future t-year survivors via censored regression models. In this article, we present inference procedures for evaluating such binary classification rules based on various prediction precision measures quantified by the overall misclassification rate, sensitivity and specificity, and positive and negative predictive values. Specifically, under various working models we derive consistent estimators for the above measures via substitution and cross-validation estimation procedures. Furthermore, we provide large-sample approximations to the distributions of these non-smooth estimators without assuming that the working model is correctly specified. Confidence intervals, for example for the difference of the precision measures between two competing rules, can then be constructed. All the proposals are illustrated with two real examples, and their finite-sample properties are evaluated via a simulation study.
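For a rule that predicts t-year survival by thresholding a risk score \(S\) at \(c\), with \(D = \mathbf{1}\{T \le t\}\) indicating failure by time \(t\), the precision measures in question take the familiar forms (our notation):

\[
\mathrm{Sens}(c) = P(S > c \mid D = 1),\quad \mathrm{Spec}(c) = P(S \le c \mid D = 0),\quad \mathrm{PPV}(c) = P(D = 1 \mid S > c),\quad \mathrm{NPV}(c) = P(D = 0 \mid S \le c),
\]

with overall misclassification rate \(P(\mathbf{1}\{S > c\} \neq D)\); censoring of \(T\) is what makes consistent estimation of these quantities non-trivial.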
Abstract:
Many seemingly disparate approaches for marginal modeling have been developed in recent years. We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to those of the copula-based models proposed herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed-effects estimation and interpretation in the analysis of correlated binary data. Moreover, we propose a nomenclature and a set of model relationships that substantially elucidate the complex area of marginalized models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate the concepts.
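A concrete instance of the latent-threshold construction (the standard Gaussian-copula form, supplied here for illustration): for a pair of binary outcomes with marginal means \(p_1\) and \(p_2\),

\[
Y_j = \mathbf{1}\{Z_j \le \Phi^{-1}(p_j)\},\qquad (Z_1,Z_2)\sim N_2(0,\Sigma_\rho) \;\Longrightarrow\; P(Y_1=1,\,Y_2=1) = \Phi_2\!\left(\Phi^{-1}(p_1),\,\Phi^{-1}(p_2);\,\rho\right),
\]

so the marginal regressions fix \(p_1\) and \(p_2\) while the copula correlation \(\rho\) carries the within-cluster association.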
Abstract:
Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology to be used as a detection and surveillance paradigm for many real-world applications. Individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks have a few physical limitations that can prevent sensors from performing at their maximum potential. Individual sensors have a limited power supply, and the wireless band can become very cluttered when multiple sensors try to transmit at the same time. Furthermore, individual sensors have a limited communication range, so the network may not have a one-hop communication topology, and routing can be a problem in many cases. Carefully designed algorithms can alleviate the physical limitations of sensor networks and allow them to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks: the detection and tracking of targets. It develops feasible inference techniques for sensor networks using statistical graphical-model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences, and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings. Finally, the system was tested in both simulated and real-world environments. The simulations were performed on various network topologies, from regularly distributed networks to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks, and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was simulated with real-world settings: it was set up with 30 bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sq ft apartment. The bumblebee radars are calibrated to detect the falling of a human body, and the two-tier tracking algorithm is used on the ultrasonic sensors to track the location of elderly people.
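A minimal sketch of the two-tier strategy described above (illustrative only; the layout, ranges, and variable names are our own assumptions, not the thesis implementation):

```python
import numpy as np

# Tier 1: binary detections give a coarse global estimate of the target.
# Tier 2: a dynamic cluster of sensors near that estimate refines it.
sensors = np.array(
    [[x, y] for x in range(1, 10, 2) for y in range(1, 10, 2)], dtype=float
)                                   # 5x5 sensor grid over a 10x10 field
target = np.array([4.0, 6.0])
radius = 2.5                        # detection range of each sensor

detected = np.linalg.norm(sensors - target, axis=1) < radius  # binary data
coarse = sensors[detected].mean(axis=0)        # rough global inference

cluster = np.linalg.norm(sensors - coarse, axis=1) < radius   # dynamic cluster
refined = sensors[cluster & detected].mean(axis=0)
print(coarse, refined)
```

Only the one-bit detections cross the network in tier 1, which keeps bandwidth and power costs low; the detailed computations are confined to the small cluster formed around the coarse estimate.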
Abstract:
This paper reports a comparison of three modeling strategies for the analysis of hospital mortality in a sample of general medicine inpatients in a Department of Veterans Affairs medical center. Logistic regression, a Markov chain model, and longitudinal logistic regression were evaluated on predictive performance, as measured by the c-index, and on the accuracy of expected numbers of deaths compared to observed. The logistic regression used patient information collected at admission; the Markov model comprised two absorbing states, for discharge and death, and three transient states reflecting increasing severity of illness as measured by laboratory data collected during the hospital stay; the longitudinal regression employed Generalized Estimating Equations (GEE) to model the covariance structure of the repeated binary outcome. Results showed that logistic regression predicted hospital mortality as well as the alternative methods but was limited in scope of application. The Markov chain provides insights into how day-to-day changes in illness severity lead to discharge or death. The longitudinal logistic regression showed that an increasing illness trajectory is associated with hospital mortality. We conclude that for standard applications in modeling hospital mortality, logistic regression is adequate, but for the new challenges facing health services research today, the alternative methods are equally predictive, practical, and can provide new insights.
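A sketch of a five-state daily transition matrix of the kind described above (the probabilities are invented for illustration; the paper estimates them from laboratory data):

```python
import numpy as np

# States: three transient severity levels, then discharge and death (absorbing).
P = np.array([
    #  S1    S2    S3   disc  death
    [0.70, 0.15, 0.02, 0.12, 0.01],   # mild
    [0.20, 0.55, 0.15, 0.07, 0.03],   # moderate
    [0.05, 0.25, 0.55, 0.03, 0.12],   # severe
    [0.00, 0.00, 0.00, 1.00, 0.00],   # discharged (absorbing)
    [0.00, 0.00, 0.00, 0.00, 1.00],   # died (absorbing)
])

# Absorption probabilities: split P into the transient block Q and absorbing
# block R; B = (I - Q)^{-1} R gives P(eventual discharge / death | start state).
Q, R = P[:3, :3], P[:3, 3:]
B = np.linalg.solve(np.eye(3) - Q, R)
print(B)  # row i: [P(discharge), P(death)] starting from severity state i
```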
Abstract:
A Monte Carlo simulation was conducted to investigate parameter estimation and hypothesis testing in some well known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), Randomized Pólya Urn (RPU), Birth and Death Urn with Immigration (BDUI), and Drop-the-Loser Urn (DL). Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under the assumption of constant variance of the simple difference (RSIHR), relative risk (ORR), and odds ratio (OOR), respectively. The log likelihood ratio test and three Wald-type tests (simple difference, log of relative risk, log of odds ratio) are compared across the adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. When compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics in SMLE have better normality and a lower type I error rate, and the power of hypothesis testing is more comparable with equal randomization. Usually, RSIHR has the highest power among the three optimal allocation ratios. However, the ORR allocation has better power and a lower type I error rate when the log of relative risk is the test statistic, and the expected number of failures under ORR is smaller than under RSIHR. It is also shown that the simple difference of response rates has the worst normality among all four test statistics, and the power of the hypothesis test is always inflated when the simple difference is used. On the other hand, the normality of the log likelihood ratio test statistic is robust against changes in the adaptive randomization procedure.
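A minimal sketch of the Randomized Play-the-Winner urn (the response rates and run length are illustrative placeholders, not the study's settings): draw a ball to assign the treatment; a success adds a ball of the same type, a failure adds a ball of the opposite type.

```python
import numpy as np

rng = np.random.default_rng(3)
p = {"A": 0.7, "B": 0.4}        # hypothetical true success probabilities
urn = {"A": 1, "B": 1}          # initial urn composition (one ball per arm)
assigned = {"A": 0, "B": 0}

for _ in range(200):
    arms, counts = zip(*urn.items())
    arm = rng.choice(arms, p=np.array(counts) / sum(counts))  # draw a ball
    assigned[arm] += 1
    success = rng.random() < p[arm]
    other = "B" if arm == "A" else "A"
    urn[arm if success else other] += 1   # reinforce winner, penalize loser

print(assigned)  # RPW tends to assign more patients to the better arm
```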
Abstract:
Although the area under the receiver operating characteristic curve (AUC) is the most popular measure of the performance of prediction models, it has limitations, especially when it is used to evaluate the added discrimination of a new biomarker in the model. Pencina et al. (2008) proposed two indices, the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI), to supplement the improvement in the AUC (IAUC). Their NRI and IDI are based on binary outcomes in case-control settings, which do not involve time-to-event outcomes. However, many disease outcomes are time-dependent, and the onset time can be censored. Measuring the discrimination potential of a prognostic marker without considering time to event can lead to biased estimates. In this dissertation, we have extended the NRI and IDI to survival analysis settings and derived the corresponding sample estimators and asymptotic tests. Simulation studies were conducted to compare the performance of the time-dependent NRI and IDI with Pencina's NRI and IDI. For illustration, we have applied the proposed method to a breast cancer study. Key words: Prognostic model, Discrimination, Time-dependent NRI and IDI
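In Pencina et al.'s original binary-outcome form (reproduced here for orientation; the dissertation's contribution is the time-dependent, censoring-aware extension):

\[
\mathrm{NRI} = \big[P(\mathrm{up}\mid D{=}1)-P(\mathrm{down}\mid D{=}1)\big] + \big[P(\mathrm{down}\mid D{=}0)-P(\mathrm{up}\mid D{=}0)\big],
\]
\[
\mathrm{IDI} = \big(\bar p_{\mathrm{new}}^{\,D=1}-\bar p_{\mathrm{old}}^{\,D=1}\big) - \big(\bar p_{\mathrm{new}}^{\,D=0}-\bar p_{\mathrm{old}}^{\,D=0}\big),
\]

where "up"/"down" denote reclassification to a higher/lower risk category under the new model, and the \(\bar p\) terms are mean predicted risks among events (\(D=1\)) and non-events (\(D=0\)).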
Abstract:
My dissertation focuses on developing methods for detecting gene-gene/environment interactions and imprinting effects for human complex diseases and quantitative traits. It includes three sections: (1) generalizing the Natural and Orthogonal Interaction (NOIA) model, a coding technique originally developed for gene-gene (GxG) interaction, to reduced models; (2) developing a novel statistical approach that allows for modeling gene-environment (GxE) interactions influencing disease risk; and (3) developing a statistical approach for modeling genetic variants displaying parent-of-origin effects (POEs), such as imprinting. In the past decade, genetic researchers have identified a large number of causal variants for human genetic diseases and traits by single-locus analysis, and interaction has now become a hot topic in the effort to map the complex network of multiple genes and environmental exposures contributing to an outcome. Epistasis, also known as gene-gene interaction, is the departure from additive genetic effects of several genes on a trait, meaning that the same alleles of one gene can display different genetic effects under different genetic backgrounds. In this study, we propose to implement the NOIA model for association studies with interaction for human complex traits and diseases. We compare the performance of the new statistical models we developed against the usual functional model, by both simulation study and real data analysis. Both simulation and real data analysis revealed the higher power of the NOIA GxG interaction model for detecting both main genetic effects and interaction effects. Through application to a melanoma dataset, we confirmed the previously identified regions significant for melanoma risk at 15q13.1, 16q24.3, and 9p21.3, and we identified potential interactions with these significant regions that contribute to melanoma risk. Based on the NOIA model, we developed a novel statistical approach that allows us to model the effects of a genetic factor and a binary environmental exposure that jointly influence disease risk. Both simulation and real data analyses revealed the higher power of the NOIA model for detecting both main genetic effects and interaction effects for both quantitative and binary traits. We also found that, for binary traits, the parameter estimates from logistic regression are no longer statistically uncorrelated under the alternative model when there is an association. Applying our novel approach to a lung cancer dataset, we confirmed four SNPs in the 5p15 and 15q25 regions to be significantly associated with lung cancer risk in the Caucasian population: rs2736100, rs402710, rs16969968, and rs8034191. We also validated that rs16969968 and rs8034191 in the 15q25 region interact significantly with smoking in the Caucasian population, and our approach identified a potential interaction of SNP rs2256543 in 6p21 with smoking in contributing to lung cancer risk. Genetic imprinting is the best-known cause of parent-of-origin effects, whereby a gene is differentially expressed depending on the parental origin of the same alleles. Genetic imprinting affects several human disorders, including diabetes, breast cancer, alcoholism, and obesity, and the phenomenon has been shown to be important for normal embryonic development in mammals. Traditional association approaches ignore this important genetic phenomenon. In this study, we propose a NOIA framework for single-locus association studies that estimates both main allelic effects and POEs. We develop statistical (Stat-POE) and functional (Func-POE) models, and demonstrate conditions for orthogonality of the Stat-POE model. We conducted simulations for both quantitative and qualitative traits to evaluate the performance of the statistical and functional models at different levels of POEs. Our results showed that the newly proposed Stat-POE model, which ensures orthogonality of the variance components if Hardy-Weinberg equilibrium (HWE) holds or the minor and major allele frequencies are equal, had greater power for detecting the main allelic additive effect than the Func-POE model, which codes according to allelic substitutions, for both quantitative and qualitative traits. The power for detecting the POE was the same for the Stat-POE and Func-POE models under HWE for quantitative traits.
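Schematically (a generic functional-style single-locus coding given here for illustration, not the exact Stat-POE parameterization derived in the study), a parent-of-origin term enters alongside the usual additive and dominance effects:

\[
y \;=\; \mu + \alpha\,x_a + \delta\,x_d + \pi\,x_{po} + \varepsilon,
\]

where \(x_a \in \{-1, 0, 1\}\) counts copies of one allele, \(x_d\) indicates heterozygosity, and \(x_{po}\) is \(+1\) or \(-1\) for heterozygotes according to which parent transmitted the allele (0 for homozygotes); a nonzero \(\pi\) signals a parent-of-origin effect such as imprinting.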