907 results for computationally efficient algorithm


Relevance: 80.00%

Abstract:

This dissertation concerns active fibre-reinforced composites with embedded shape memory alloy wires. The structural application of active materials makes it possible to develop adaptive structures which actively respond to changes in the environment, such as morphing structures, self-healing structures and power harvesting devices. In particular, shape memory alloy actuators integrated within a composite actively control the structural shape or stiffness, thus influencing the composite's static and dynamic properties. Envisaged applications include, among others, the prevention of thermal buckling of the outer skin of air vehicles, shape changes in panels for improved aerodynamic characteristics and the deployment of large space structures. The study and design of active composites is a complex and multidisciplinary topic, requiring in-depth understanding of both the coupled behaviour of active materials and the interaction between the different composite constituents. Both fibre-reinforced composites and shape memory alloys are extremely active research topics, whose modelling and experimental characterisation still present a number of open problems. Thus, while this dissertation focuses on active composites, some of the research results presented here can be usefully applied to traditional fibre-reinforced composites or other shape memory alloy applications. The dissertation is composed of four chapters. In the first chapter, active fibre-reinforced composites are introduced by giving an overview of the most common choices available for the reinforcement, matrix and production process, together with a brief introduction and classification of active materials. The second chapter presents a number of original contributions regarding the modelling of fibre-reinforced composites. Different two-dimensional laminate theories are derived from a parent three-dimensional theory, introducing a procedure for the a posteriori reconstruction of transverse stresses along the laminate thickness. Accurate through-the-thickness stresses are crucial for composite modelling as they are responsible for some common failure mechanisms. A new finite element based on the First-order Shear Deformation Theory and a hybrid stress approach is proposed for the numerical solution of the two-dimensional laminate problem. The element is simple and computationally efficient. The transverse stresses through the laminate thickness are reconstructed starting from a general finite element solution. A two-stage procedure is devised, based on Recovery by Compatibility in Patches and three-dimensional equilibrium. Finally, the determination of the elastic parameters of laminated structures via numerical-experimental Bayesian techniques is investigated. Two different estimators are analysed and compared, leading to the definition of an alternative procedure to improve convergence of the estimation process. The third chapter focuses on shape memory alloys, describing their properties and applications. A number of constitutive models proposed in the literature, both one-dimensional and three-dimensional, are critically discussed and compared, underlining their potential and limitations, which are mainly related to the definition of the phase diagram and the choice of internal variables. Some new experimental results on shape memory alloy material characterisation are also presented. These experimental observations display some features of shape memory alloy behaviour which are generally not included in current models, so some ideas are proposed for the development of a new constitutive model. The fourth chapter, finally, focuses on active composite plates with embedded shape memory alloy wires. A number of different approaches can be used to predict the behaviour of such structures, each model presenting different advantages and drawbacks related to complexity and versatility. A simple model able to describe both shape and stiffness control configurations within the same framework is proposed and implemented. The model is then validated considering the shape control configuration, which is the most sensitive to model parameters. The experimental work is divided into two parts. In the first part, an active composite is built by gluing prestrained shape memory alloy wires onto a carbon fibre laminate strip. This structure is relatively simple to build, but it is useful for experimentally demonstrating the feasibility of the concept proposed in the first part of the chapter. In the second part, the manufacture of a fibre-reinforced composite with embedded shape memory alloy wires is investigated, considering different possible choices of materials and manufacturing processes. Although a number of technological issues still need to be addressed, the experimental results demonstrate the mechanism of shape control via embedded shape memory alloy wires and show good agreement with the proposed model predictions.
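
As a concrete illustration of the equilibrium-based stress recovery mentioned above, the following Python sketch (not the thesis code; the stress-derivative arrays are hypothetical inputs) integrates the 3-D equilibrium equation through the thickness to reconstruct the transverse shear stress:

```python
# Hypothetical sketch: a posteriori reconstruction of the transverse shear
# stress sigma_xz through the laminate thickness by integrating the 3-D
# equilibrium equation  d(sigma_xz)/dz = -(d(sigma_xx)/dx + d(sigma_xy)/dy),
# assuming the in-plane stress derivatives are already available (e.g. from
# a smoothed finite element solution) at a grid of z-points.
import numpy as np

def recover_sigma_xz(z, dsxx_dx, dsxy_dy):
    """Integrate equilibrium from the bottom surface, where sigma_xz = 0."""
    integrand = -(dsxx_dx + dsxy_dy)          # d(sigma_xz)/dz at each z
    sigma_xz = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))))
    return sigma_xz

# Toy example: parabolic shear profile recovered from a linear integrand.
z = np.linspace(-0.5, 0.5, 101)               # normalized thickness coordinate
sxz = recover_sigma_xz(z, dsxx_dx=6.0 * z, dsxy_dy=np.zeros_like(z))
print(sxz[0], sxz[50], sxz[-1])               # zero at both faces, extremum at mid-plane
```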

Relevance: 80.00%

Abstract:

This work presents a program for the simulation of vehicle-track and vehicle-track-structure dynamic interaction. The method used is computationally efficient in the sense that a reduced number of coordinates is sufficient and no high-performance computers are required. The method proposes a modal substructuring approach: the rails, sleepers and underlying structure are modelled with modal coordinates, the vehicle with physical lumped-element coordinates, and interconnection elements between these structures (wheel-rail contact, railpads and ballast) are introduced by means of their interaction forces. The frequency response function (FRF) is also calculated, both for a track over a structure (a bridge, a viaduct, ...) and for the simple vehicle-track program; for each case the vehicle's effect on the FRF is then analysed by comparing the FRFs obtained with and without a simplified vehicle in the system.
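
A minimal sketch of the modal-coordinate idea is given below; it is not the program described above, and all modal parameters, the wheel mass and the contact stiffness are illustrative assumptions. It computes the track driving-point FRF from a truncated modal sum, then couples a simplified vehicle (an unsprung wheel mass behind a linearized Hertzian contact spring) to show how the vehicle modifies the FRF:

```python
# Hedged sketch (not the thesis code): track FRF from modal coordinates,
# with and without a simplified vehicle coupled through a contact spring.
import numpy as np

def track_frf(omega, wn, zeta, phi):
    """Driving-point receptance from a truncated modal sum."""
    return sum(p**2 / (w**2 - omega**2 + 2j * z * w * omega)
               for w, z, p in zip(wn, zeta, phi))

wn   = 2 * np.pi * np.array([90.0, 250.0, 550.0, 1050.0])  # modal frequencies [rad/s]
zeta = np.array([0.10, 0.08, 0.05, 0.03])                  # modal damping ratios
phi  = np.array([1.2e-3, 1.0e-3, 8.0e-4, 6.0e-4])          # mass-normalized mode shapes

m_wheel = 600.0      # assumed unsprung wheel mass [kg]
k_hertz = 1.2e9      # assumed linearized wheel-rail contact stiffness [N/m]

f = np.linspace(20, 2000, 2000)
omega = 2 * np.pi * f
H_track = np.array([track_frf(w, wn, zeta, phi) for w in omega])

# Dynamic stiffness added at the contact point by the wheel-through-spring
# subsystem; coupling the receptances gives the vehicle-track FRF.
K_add = (k_hertz * (-m_wheel * omega**2)) / (k_hertz - m_wheel * omega**2)
H_coupled = 1.0 / (1.0 / H_track + K_add)
print(abs(H_track[0]), abs(H_coupled[0]))   # vehicle effect visible at low frequency
```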

Relevance: 80.00%

Abstract:

Human reasoning is a fascinating and complex cognitive process that can be applied in different research areas such as philosophy, psychology, law and finance. Unfortunately, developing supporting software (for those different areas) able to cope with such complex reasoning is difficult and requires a suitable abstract logical formalism. In this thesis we aim to develop a program whose job is to evaluate a theory (a set of rules) with respect to a goal and to provide results such as "the goal is derivable from the KB (of the theory)". In order to achieve this we need to analyse different logics and choose the one that best meets our needs. In logic, usually, we try to determine whether a given conclusion is logically implied by a set of assumptions T (the theory). However, when we deal with logic programming we need an efficient algorithm to find such implications. In this work we use a logic rather similar to human reasoning. Indeed, human reasoning requires an extension of first-order logic able to reach a conclusion depending on premises that are not definitely true and belong to an incomplete set of knowledge. Thus, we implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed by Nute for efficient defeasible reasoning (see Chapter 2). Such applications are useful in the legal area, especially if they offer an implementation of an argumentation framework that provides a formal model of a game. Roughly speaking, let the theory be the set of laws, let a key claim be the conclusion that one party wants to prove (and the other wants to defeat), and add dynamic assertion of rules, namely facts put forward by the parties; then we can play an argumentative challenge between two players and decide whether the conclusion is provable or not, depending on the strategies followed by the players. Implementing a game model requires one more meta-interpreter able to evaluate the defeasible logic framework; indeed, according to Gödel's theorem (see page 127), we cannot evaluate the meaning of a language using the tools provided by the language itself, but need a meta-language able to manipulate the object language. Thus, rather than a simple meta-interpreter, we propose a meta-level containing different meta-evaluators: the first has been explained above, the second is needed to perform the game model, and the last is used to change game execution and tree-derivation strategies.
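
The following toy sketch illustrates, in Python, the flavour of defeasible reasoning described above; it is a deliberately simplified propositional fragment (no strict rules, no defeaters, no cycle handling), not the meta-interpreter developed in the thesis:

```python
# Simplified, hypothetical sketch in the spirit of Nute's defeasible logic:
# facts, defeasible rules and a superiority relation between rules.
# Rule format: (name, [antecedent literals], consequent literal).
facts = {"bird"}
rules = [
    ("r1", ["bird"], "flies"),        # birds typically fly
    ("r2", ["penguin"], "~flies"),    # penguins typically do not
]
superior = {("r2", "r1")}             # r2 beats r1 when both apply

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def provable(goal):
    """Defeasibly provable: a fact, or supported by a rule whose body is
    provable, with every applicable attacking rule beaten by a supporter."""
    if goal in facts:
        return True
    applicable = [r for r in rules
                  if r[2] == goal and all(provable(a) for a in r[1])]
    attackers = [r for r in rules
                 if r[2] == negate(goal) and all(provable(a) for a in r[1])]
    return bool(applicable) and all(
        any((s[0], a[0]) in superior for s in applicable) for a in attackers)

print(provable("flies"))    # True: r1 applies, r2 does not (no "penguin" fact)
facts.add("penguin")
print(provable("flies"))    # False: r2 now applies and is superior to r1
```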

Relevance: 80.00%

Abstract:

A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated, as well as the heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been elucidated. A sedimentation scheme based on first-order approximations of vertical mixing-ratio profiles has been developed. It produces relatively little numerical diffusion and deals well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radius distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and of solid PSC particles on the vertical redistribution of H2O and HNO3 are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes in accordance with the most widely used approaches have been identified and implemented. For the evaluation of PSC occurrence, a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel, whereas the detailed modelling of individual PSC events is beyond the scope of coarse global-scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g. a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
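
The following hedged Python sketch shows the kind of first-order, mass-conserving upwind step such a sedimentation scheme may build on; it is an illustration under simplifying assumptions (uniform box depth, mixing-ratio transport without air-density weighting), not the ECHAM5/MESSy code:

```python
# Hedged sketch of a first-order upwind sedimentation step for a column of
# grid boxes (index 0 = top). Mass leaving each box is limited by its
# content, so the update stays stable and conserves the column total even
# when the settling velocity field is divergent or convergent.
import numpy as np

def sediment_step(q, w, dz, dt):
    """q: mixing ratio per box, w: settling velocity (>0, downward), dz: box depth."""
    frac = np.clip(w * dt / dz, 0.0, 1.0)   # fraction of each box that falls out
    flux = frac * q                          # amount leaving each box downward
    q_new = q - flux
    q_new[1:] += flux[:-1]                   # arrives in the box below
    return q_new, flux[-1]                   # flux[-1] leaves the column

q = np.array([0.0, 1.0, 0.5, 0.0])          # initial HNO3-equivalent mixing ratio
w = np.array([0.9, 1.3, 1.3, 0.8])          # convergent/divergent velocity field
q1, lost = sediment_step(q, w, dz=1.0, dt=0.5)
print(q1, q.sum(), q1.sum() + lost)          # totals agree: mass is conserved
```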

Relevance: 80.00%

Abstract:

Photovoltaic (PV) conversion is the direct production of electrical energy from sunlight without the emission of polluting substances. In order to be competitive with other energy sources, the cost of PV technology must be reduced while ensuring adequate conversion efficiencies. These goals have motivated the interest of researchers in investigating advanced designs of crystalline silicon (c-Si) solar cells. Since lowering the cost of PV devices involves reducing the volume of semiconductor, an effective light-trapping strategy aimed at increasing photon absorption is required. Modelling of solar cells by electro-optical numerical simulation is helpful to predict the performance of future generations of devices exhibiting advanced light-trapping schemes and to provide new and more specific guidelines to industry. The approaches to optical simulation commonly adopted for c-Si solar cells may lead to inaccurate results in the case of thin-film and nano-structured solar cells. On the other hand, rigorous solvers of the Maxwell equations are highly CPU- and memory-intensive. Recently, the RCWA method has gained relevance in the optical simulation of solar cells, providing a good trade-off between accuracy and computational resource requirements. This thesis is a contribution to the numerical simulation of advanced silicon solar cells by means of a state-of-the-art numerical 2-D/3-D device simulator, which has been successfully applied to the simulation of selective-emitter and rear point-contact solar cells, for which a multi-dimensional transport model is required in order to properly account for all competing physical mechanisms. In the second part of the thesis, the optical problem is discussed. Two novel and computationally efficient RCWA implementations for 2-D simulation domains, as well as a third RCWA implementation for 3-D structures based on an eigenvalue calculation approach, are presented. The proposed simulators have been validated in terms of accuracy, numerical convergence, computation time and correctness of results.

Relevance: 80.00%

Abstract:

We present a complete, exact and efficient algorithm for computing the adjacency graph of an arrangement of quadrics (algebraic surfaces of degree 2). This is an important step towards computing the full 3D arrangement. We build on an existing implementation for computing the exact parametrisation of the intersection curve of two quadrics. This makes it possible to determine the exact parameter values of the intersection points, to sort them along the curves and to compute the adjacency graph. We call our implementation complete because it also handles all degenerate cases, such as singular or tangential intersection points. It is exact because it always computes the mathematically correct result. Finally, we call our implementation efficient because it compares well with the only other implemented approach. Our approach was implemented within the EXACUS project, whose central goal is to develop a prototype of a reliable and powerful CAD geometry kernel. Although we describe the design of our library as prototypical, we place great emphasis on completeness, exactness, efficiency, documentation and reusability. Beyond its actual contribution to EXACUS, the approach presented here, through its particular requirements, also had a substantial influence on fundamental parts of EXACUS. In particular, this work contributed to the generic support of number types and the use of modular methods within EXACUS. In the course of the ongoing integration of EXACUS into CGAL, these parts have already been successfully developed further into mature CGAL packages.

Relevance: 80.00%

Abstract:

The aim of this thesis is the study of techniques for efficient management and use of the spectrum based on cognitive radio technology. The ability of cognitive radio technologies to adapt to the real-time conditions of their operating environment offers the potential for more flexible use of the available spectrum. In this context, international interest is particularly focused on the "white spaces" in the UHF band of digital terrestrial television. Spectrum sensing and geo-location databases have been considered in order to obtain information on the electromagnetic environment. Different methodologies have been considered in order to investigate the spectral resources potentially available to white space devices in the TV band. The adopted methodologies are based on the geo-location database approach, used either autonomously or in combination with sensing techniques. A novel and computationally efficient methodology for the calculation of the maximum permitted white space device EIRP is then proposed. The methodology is suitable for implementation in TV white space databases. Different Italian scenarios are analysed in order to identify both the available spectrum and the white space device emission limits. Finally, two different applications of cognitive radio technology are considered. The first is emergency management: attention is focused on both cognitive and autonomic networking approaches when deploying an emergency management system. Cognitive technology is then considered in applications related to satellite systems. In particular, a hybrid cognitive satellite-terrestrial system is introduced, and an analysis of the coexistence between terrestrial and satellite networks under a cognitive approach is performed.

Relevance: 80.00%

Abstract:

In distributed systems such as clouds or service-oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, spanning from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. By relying on a formal model of components, a technique is devised for computing the sequence of actions that allows the deployment of a desired configuration. An efficient algorithm, working in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed. Experimental results support the adoption of this novel approach in real-life scenarios.
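
As an illustration of why such deployment planning can be polynomial, the sketch below uses an assumed, much-simplified component model with plain dependencies (not the formal model of the thesis) and computes a valid deployment sequence as a topological order of the dependency graph:

```python
# Hypothetical sketch: deployment ordering via Kahn's algorithm, linear in
# the size of the dependency graph. Components are deployed only after
# everything they require is already deployed.
from collections import deque

def deployment_plan(requires):
    """requires: dict component -> set of components it depends on."""
    indeg = {c: len(deps) for c, deps in requires.items()}
    users = {c: [] for c in requires}
    for c, deps in requires.items():
        for d in deps:
            users[d].append(c)
    ready = deque(c for c, n in indeg.items() if n == 0)
    plan = []
    while ready:
        c = ready.popleft()
        plan.append(("deploy", c))
        for u in users[c]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    if len(plan) != len(requires):
        raise ValueError("circular dependencies: no deployment order exists")
    return plan

print(deployment_plan({"db": set(), "api": {"db"}, "web": {"api", "db"}}))
```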

Relevance: 80.00%

Abstract:

Changepoint analysis is a well-established area of statistical research, but in the context of spatio-temporal point processes it is as yet relatively unexplored. Some substantial differences with respect to standard changepoint analysis have to be taken into account: firstly, at every time point the datum is an irregular pattern of points; secondly, in real situations issues of spatial dependence between points and temporal dependence within time segments arise. Our motivating example consists of data concerning the monitoring and recovery of radioactive particles from Sandside beach, in the north of Scotland; there have been two major changes in the equipment used to detect the particles, representing known potential changepoints in the number of retrieved particles. In addition, offshore particle retrieval campaigns are believed to reduce the onshore particle intensity with an unknown temporal lag; in this latter case, the problem concerns multiple unknown changepoints. We therefore propose a Bayesian approach for detecting multiple changepoints in the intensity function of a spatio-temporal point process, allowing for spatial and temporal dependence within segments. We use log-Gaussian Cox processes, a very flexible class of models suitable for environmental applications, which can be implemented using the integrated nested Laplace approximation (INLA), a computationally efficient alternative to Markov chain Monte Carlo methods for approximating the posterior distribution of the parameters. Once the posterior curve is obtained, we propose a few methods for detecting significant changepoints. We present a simulation study, which consists of generating spatio-temporal point pattern series under several scenarios; the performance of the methods is assessed in terms of type I and type II errors, detected changepoint locations and accuracy of the segment intensity estimates. We finally apply the above methods to the motivating dataset and obtain sensible results about the presence and nature of changes in the process.
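
As a strongly simplified illustration of Bayesian changepoint detection (a 1-D Poisson-count analogue with conjugate priors, not the LGCP/INLA machinery described above), the posterior over a single changepoint location can be computed in closed form:

```python
# Hypothetical sketch: single changepoint in the rate of a Poisson count
# series, with conjugate Gamma(a, b) priors on the two segment rates, so the
# changepoint posterior is available in closed form.
import numpy as np
from scipy.special import gammaln

def log_marginal(y, a=1.0, b=1.0):
    """log marginal likelihood of a Poisson segment with rate ~ Gamma(a, b)."""
    n, s = len(y), y.sum()
    return (a * np.log(b) - gammaln(a) + gammaln(a + s)
            - (a + s) * np.log(b + n))       # constant -sum(log y!) dropped

rng = np.random.default_rng(1)
y = np.concatenate([rng.poisson(2.0, 60), rng.poisson(6.0, 40)])  # true tau = 60

taus = np.arange(5, len(y) - 5)              # candidate changepoint locations
logpost = np.array([log_marginal(y[:t]) + log_marginal(y[t:]) for t in taus])
post = np.exp(logpost - logpost.max())
post /= post.sum()
print(taus[np.argmax(post)])                 # posterior mode near 60
```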

Relevance: 80.00%

Abstract:

Recent research has shown that a single, arbitrarily efficient algorithm can be significantly outperformed by a portfolio of — possibly on-average slower — algorithms. Within the Constraint Programming (CP) context, a portfolio solver can be seen as a particular constraint solver that exploits the synergy between the constituent solvers of its portfolio to predict which is (or which are) the best solver(s) to run on a new, unseen instance. In this thesis we examine the benefits of portfolio solvers in CP. Although portfolio approaches have been extensively studied for Boolean Satisfiability (SAT) problems, in the more general CP field these techniques have been only marginally studied and used. We conducted this work through the investigation, analysis and construction of several portfolio approaches for solving both satisfaction and optimization problems. We focused in particular on sequential approaches, i.e., single-threaded portfolio solvers always running on the same core. We started from a first empirical evaluation of portfolio approaches for solving Constraint Satisfaction Problems (CSPs), and then improved on it by introducing new data, solvers, features, algorithms and tools. Afterwards, we addressed the more general Constraint Optimization Problems (COPs) by implementing and testing a number of models for dealing with COP portfolio solvers. Finally, we came full circle by developing sunny-cp: a sequential CP portfolio solver that also turned out to be competitive in the MiniZinc Challenge, the reference competition for CP solvers.
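
A hedged sketch of the k-nearest-neighbour scheduling idea used by SUNNY-style portfolio solvers follows; it is a simplification for illustration (synthetic features, runtimes and solver names), not the sunny-cp implementation:

```python
# Hypothetical sketch: given feature vectors of training instances and
# per-solver runtimes (with a timeout), pick for a new instance the solver
# that performs best on the k nearest training instances.
import numpy as np

def choose_solver(x_new, X, runtimes, solvers, k=3, timeout=1800.0):
    dist = np.linalg.norm(X - x_new, axis=1)
    nearest = np.argsort(dist)[:k]
    best, best_key = None, None
    for j, s in enumerate(solvers):
        t = runtimes[nearest, j]
        # primary criterion: neighbours solved; tie-break: total runtime
        key = (-(t < timeout).sum(), t.sum())
        if best_key is None or key < best_key:
            best, best_key = s, key
    return best

X = np.array([[0.1, 2.0], [0.2, 1.8], [5.0, 9.0]])       # instance features
runtimes = np.array([[10., 1800.], [15., 1700.], [1800., 30.]])
print(choose_solver(np.array([0.15, 1.9]), X, runtimes,
                    solvers=["gecode", "chuffed"], k=2))  # -> "gecode"
```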

Relevance: 80.00%

Abstract:

Natural hydraulic fracturing is an important and widespread process in all parts of the Earth's crust. It influences effective permeability and fluid transport across several orders of magnitude by creating hydraulic connectivity. The fracturing process is both highly dynamic and highly complex. The dynamics stem from the strong interaction of tectonic and hydraulic processes, while the complexity arises from the potential dependence of the poroelastic properties on fluid pressure and fracturing. The formation of hydraulic fractures consists of three phases: 1) nucleation, 2) time-dependent quasi-static growth as long as the fluid pressure exceeds the tensile strength of the rock, and 3) in heterogeneous rocks, the influence of layers with different mechanical or sedimentary properties on fracture propagation. The mechanical heterogeneity created by pre-existing fractures and rock deformation also has a large influence on the course of growth. The direction of fracture propagation is determined either by the linking of discontinuities of low tensile strength in the region ahead of the fracture front, or propagation may stop when the fracture encounters discontinuities of high strength. These interactions produce a fracture network of complex geometry that reflects the local deformation history and the dynamics of the underlying physical processes.

Natural hydraulic fracturing has substantial implications for academic and commercial questions in various fields of the geosciences. Since the 1950s, hydraulic fracturing has been used to increase the permeability of gas and oil reservoirs. Field observations, isotope studies, laboratory experiments and numerical analyses confirm the decisive role of the fluid pressure gradient, in combination with poroelastic effects, for the local stress state and for the conditions under which hydraulic fractures form and propagate. Most numerical hydromechanical models assume predefined fracture geometries with constant fluid pressure for the coupling between fluid and propagating fractures, in order to keep the problem computationally tractable. Since natural rocks are rarely structured so simply, these models are generally not very effective in the analysis of this complex process. In particular, they underestimate the feedback of poroelastic effects and coupled fluid-solid processes, i.e. the evolution of pore pressure as a function of rock failure and vice versa.

In this work, a two-dimensional coupled poro-elasto-plastic computer model is developed for the qualitative, and in part also quantitative, analysis of the role of localized or homogeneously distributed fluid pressures in the dynamic propagation of hydraulic fractures and the simultaneous evolution of the effective permeability. The program is computationally efficient in that it describes the fluid dynamics by a Darcy-based pressure diffusion equation without redundant components. It also accounts for the Biot compressibility of porous rocks, which was implemented in order to determine the controlling parameters in the mechanics of hydraulic fracturing in different geological scenarios with homogeneous and heterogeneous sedimentary sequences. The results show that fluid pressure gradients in closed systems lead to local perturbations of the homogeneous stress field. Depending on the boundary conditions, these perturbations can result in a reorientation of fracture propagation. Through their effect on the local stress state, high pressure gradients can also produce bedding-parallel fracturing or slip in undrained heterogeneous media. An example of particular importance is the evolution of accretionary wedges, where the strong dynamics of tectonic activity together with extreme pore pressures locally generate strong perturbations of the stress field, giving rise to a highly complex structural evolution including vertical and horizontal hydraulic fracture networks. The transport properties of the rocks are strongly controlled by the dynamics of the development of local permeabilities through tensile fractures and faults. There may be a close connection between the formation of graben structures and large-scale fluid migration.

The consistency between the simulation results and previous experimental investigations indicates that the numerical scheme described here is well suited to the qualitative analysis of hydraulic fractures. The scheme also has drawbacks when it comes to the quantitative analysis of fluid flow through induced fracture surfaces in deformed rocks. It is also advisable to extend the presented numerical scheme by coupling it with thermo-chemical processes, in order to investigate dynamic problems connected with the growth of vein fillings in hydraulic fractures.
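
A minimal sketch of the fluid half of such a scheme is shown below: an explicit finite-difference step of the 2-D Darcy pressure diffusion equation dp/dt = D (d2p/dx2 + d2p/dy2). All values are illustrative and the poroelastic coupling to the solid is omitted:

```python
# Hedged sketch: explicit finite-difference integration of 2-D pressure
# diffusion following from Darcy's law; not the thesis code.
import numpy as np

def diffuse(p, D, dx, dt, steps):
    for _ in range(steps):
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p) / dx**2
        p = p + dt * D * lap
        p[0, :] = p[-1, :] = p[:, 0] = p[:, -1] = 0.0   # drained boundaries
    return p

n, dx, D = 64, 1.0, 1.0
dt = 0.2 * dx**2 / D                 # below the explicit stability limit of 0.25
p = np.zeros((n, n))
p[n // 2, n // 2] = 1.0              # localized fluid-pressure source
print(diffuse(p, D, dx, dt, 200).max())
```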

Relevance: 80.00%

Abstract:

In multivariate time series analysis, the equal-time cross-correlation is a classic and computationally efficient measure for quantifying linear interrelations between data channels. When the cross-correlation coefficient is estimated using a finite number of data points, its non-random part may be strongly contaminated by a sizeable random contribution, such that no reliable conclusion can be drawn about genuine mutual interdependencies. The random correlations are determined by the signals' frequency content and the number of data points used. Here, we introduce adjusted correlation matrices that can be employed to disentangle random from non-random contributions to each matrix element, independently of the signal frequencies. Extending our previous work, these matrices allow analysing spatial patterns of genuine cross-correlation in multivariate data regardless of confounding influences. The performance is illustrated using model systems with known interdependence patterns. Finally, we apply the methods to electroencephalographic (EEG) data with epileptic seizure activity.
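
For illustration, the sketch below contrasts a sample cross-correlation with the random level expected from the signals' frequency content, using phase-randomized surrogates; this is a generic illustration of the problem, not the adjusted-matrix method itself:

```python
# Hedged sketch: estimate the size of random correlations by comparing the
# sample equal-time cross-correlation of two signals against surrogates with
# the same power spectrum, obtained by randomizing Fourier phases.
import numpy as np

def phase_randomize(x, rng):
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                          # keep the mean component untouched
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(0)
n = 512
x = rng.standard_normal(n)
y = 0.3 * x + rng.standard_normal(n)         # genuinely coupled channels

r = np.corrcoef(x, y)[0, 1]
r_surr = [np.corrcoef(phase_randomize(x, rng), y)[0, 1] for _ in range(200)]
print(r, np.percentile(np.abs(r_surr), 95))  # genuine r exceeds the surrogate level
```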

Relevance: 80.00%

Abstract:

We propose a computationally efficient and biomechanically relevant soft-tissue simulation method for cranio-maxillofacial (CMF) surgery. A template-based facial muscle reconstruction is introduced to minimize the effort of preparing a patient-specific model. A transversely isotropic mass-tensor model (MTM) is adopted to capture the effect of the directional properties of facial muscles in reasonable computation time. Additionally, sliding contact around the teeth and mucosa is considered for a more realistic simulation. A retrospective validation study with the postoperative scan of a real patient showed considerable improvements in simulation accuracy from incorporating the template-based facial muscle anatomy and sliding contact.

Relevance: 80.00%

Abstract:

Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models in which all observations are correlated (e.g., a single time series).
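
A brief Python sketch of the rotation step is given below (a toy model with a known AR(1)-type covariance standing in for the estimated marginal variance matrix): multiplying the marginal residuals by the Cholesky factor of the inverse covariance yields approximately iid standard normal residuals, whose ECDF can be compared with the normal CDF:

```python
# Hedged sketch of the rotation step: under a correctly specified normal
# model the rotated residuals are approximately iid N(0, 1).
import numpy as np
from scipy.linalg import cholesky, toeplitz
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 300
V = toeplitz(0.6 ** np.arange(n))              # assumed marginal covariance
y = cholesky(V, lower=True) @ rng.standard_normal(n)   # correlated outcome, mean 0

resid = y - y.mean()                            # estimated marginal residuals
L = cholesky(np.linalg.inv(V), lower=True)      # Cholesky factor of V^{-1}
rotated = L.T @ resid                           # Var(L' resid) = L' V L = I

grid = np.linspace(-3, 3, 13)
ecdf = np.array([(rotated <= g).mean() for g in grid])
print(np.abs(ecdf - norm.cdf(grid)).max())      # small discrepancy under H0
```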

Relevance: 80.00%

Abstract:

Statistical approaches for evaluating higher-order SNP-SNP and SNP-environment interactions are critical in genetic association studies, as susceptibility to complex disease is likely to be related to the interaction of multiple SNPs and environmental factors. Logic regression (Kooperberg et al., 2001; Ruczinski et al., 2003) is one such approach, in which interactions between SNPs and environmental variables are assessed in a regression framework, and interactions become part of the model search space. In this manuscript we extend the logic regression methodology, originally developed for cohort and case-control studies, to studies of trios with affected probands. Trio logic regression accounts for the linkage disequilibrium (LD) structure in the genotype data and accommodates missing genotypes via haplotype-based imputation. We also derive an efficient algorithm to simulate case-parent trios in which genetic risk is determined via epistatic interactions.
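
The sketch below illustrates one plausible form of such a trio simulation (all parameters and the risk model are hypothetical assumptions, not the algorithm derived in the paper): parental genotypes are drawn from allele frequencies, a child is formed by Mendelian transmission, and the trio is retained only if the child is affected under an epistatic risk model:

```python
# Hypothetical sketch: rejection sampling of case-parent trios with an
# epistatic (logic) interaction between two unlinked SNPs.
import numpy as np

rng = np.random.default_rng(0)

def child_genotype(g_mother, g_father):
    """Transmit one allele from each parent (genotypes coded 0/1/2)."""
    def allele(g):
        return rng.random() < g / 2.0        # P(transmit variant) = g/2
    return int(allele(g_mother)) + int(allele(g_father))

def simulate_trio(maf=(0.3, 0.2), base=0.05, odds=8.0):
    while True:
        parents = [[rng.binomial(2, p) for p in maf] for _ in range(2)]
        child = [child_genotype(parents[0][j], parents[1][j]) for j in range(2)]
        # epistatic logic term: risk raised only if BOTH SNPs carry a variant
        risk = base * (odds if (child[0] >= 1 and child[1] >= 1) else 1.0)
        if rng.random() < risk:
            return parents, child             # keep only affected-proband trios

parents, child = simulate_trio()
print(parents, child)
```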