Abstract:
Introduction: Open fractures of the leg represent a severe trauma. The combined approach, shared between plastic and orthopaedic surgeons, is considered important, although this multidisciplinary treatment is not routinely performed. The aim of this study was to verify whether orthoplastic treatment offers any advantage over traditional, purely orthopaedic treatment, through the multicentre inclusion of these infrequent injuries in a prospective study. Material and methods: The following trauma centres were involved: Rizzoli Orthopaedic Institute/University of Bologna (leading centre) and Maggiore Hospital (Bologna, Italy), Frenchay Hospital (Bristol, United Kingdom), and Jinnah Hospital (Lahore, Pakistan). All patients consecutively hospitalized in these centres between January 2012 and December 2013 for open tibial fractures were included in the study and prospectively followed up to December 2014. Demographics and other clinical features were recorded, including the type of treatment (orthopaedic or orthoplastic). The outcome measures included duration of hospitalization, time to bone union and soft tissue closure, Enneking score at 3, 6 and 12 months, and the incidence of osteomyelitis and other complications. Results: A total of 164 patients were included in the study. Of these, 68% were treated with an orthoplastic approach, whereas 32% received purely orthopaedic treatment. All outcome measures were improved by the orthoplastic approach compared with the orthopaedic one: time to soft tissue closure (2 versus 25 weeks), duration of hospital stay (22 versus 55 days), time to bone union (6 versus 8.5 months), number of additional operations (0.6 versus 1.2) and functional recovery of the limb at 12 months (Enneking score 27 versus 19). All results were statistically significant. Conclusion: The combined orthoplastic approach to the treatment of open tibial fractures, in particular high-grade injuries (Gustilo 3B), improves the outcome of these severe injuries.
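For illustration of the kind of between-group comparison reported above, the following sketch contrasts a single outcome (time to soft tissue closure) between two groups with a nonparametric test; the data, group sizes and distributions are synthetic placeholders, not the study's records.

```python
# Minimal sketch (not the study's analysis code): comparing an outcome such as
# time to soft tissue closure between the orthoplastic and orthopaedic groups
# with a nonparametric test. The numbers below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
orthoplastic_weeks = rng.gamma(shape=2.0, scale=1.0, size=60)  # hypothetical, centred near 2 weeks
orthopaedic_weeks = rng.gamma(shape=5.0, scale=5.0, size=30)   # hypothetical, centred near 25 weeks

u_stat, p_value = stats.mannwhitneyu(orthoplastic_weeks, orthopaedic_weeks,
                                     alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3g}")
```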
Abstract:
This thesis provides a thorough theoretical background in network theory and shows novel applications to real problems and data. In the first chapter a general introduction to network ensembles is given, and the relations with "standard" equilibrium statistical mechanics are described. Moreover, an entropy measure is employed to analyze the statistical properties of integrated PPI-signalling-mRNA expression networks in different cases. In the second chapter multilayer networks are introduced to evaluate and quantify the correlations between real interdependent networks. Multiplex networks describing citation-collaboration interactions and patterns in colorectal cancer are presented. The last chapter is completely dedicated to control theory and its relation with network theory. We characterise how the structural controllability of a network is affected by the fraction of low in-degree and low out-degree nodes. Finally, we present a novel approach to the controllability of multiplex networks.
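The structural controllability question mentioned in the last chapter is commonly framed via maximum matching: the minimum number of driver nodes equals max(N − |M*|, 1) for a maximum matching M* of the directed network. The sketch below illustrates that standard formulation; it is a generic illustration, not code from the thesis.

```python
# Minimal sketch of the maximum-matching formulation of structural controllability.
import networkx as nx

def minimum_driver_nodes(digraph: nx.DiGraph) -> int:
    # Bipartite representation: each node appears once as a "source" copy and
    # once as a "target" copy; a directed edge (u, v) links u's source copy to
    # v's target copy. Unmatched target copies must be driven directly.
    bipartite = nx.Graph()
    sources = {u: ("out", u) for u in digraph.nodes()}
    targets = {v: ("in", v) for v in digraph.nodes()}
    bipartite.add_nodes_from(sources.values(), bipartite=0)
    bipartite.add_nodes_from(targets.values(), bipartite=1)
    bipartite.add_edges_from((sources[u], targets[v]) for u, v in digraph.edges())

    matching = nx.bipartite.hopcroft_karp_matching(bipartite, top_nodes=set(sources.values()))
    matched_edges = len(matching) // 2          # the dict stores both directions
    return max(digraph.number_of_nodes() - matched_edges, 1)

# Example: a directed chain is controllable from a single driver node.
chain = nx.DiGraph([(1, 2), (2, 3), (3, 4)])
print(minimum_driver_nodes(chain))  # -> 1
```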
Abstract:
Small molecules affecting biological processes in plants are widely used in agricultural practice as herbicides or plant growth regulators, and in basic plant sciences as probes to study the physiology of plants. Most of these compounds were identified in large screens by the agrochemical industry or as phytoactive natural products; more recently, novel phytoactive compounds have originated from academic research through chemical screens performed to induce specific phenotypes of interest. The aim of the present PhD thesis is to evaluate different approaches used for the identification of the primary mode of action (MoA) of a phytoactive compound. Based on the methodologies used for MoA identification, three approaches are distinguished: a phenotyping approach, an approach based on a genetic screen, and a biochemical screening approach.

Four scientific publications resulting from my work are presented as examples of how a phenotyping approach can successfully be applied to describe the plant MoA of different compounds in detail.

I. A subgroup of cyanoacrylates has been discovered as plant growth inhibitors. A set of bioassays indicated a specific effect on cell division. Cytological investigations of the cell division process in plant cell cultures, studies of microtubule assembly with green fluorescent protein marker lines in vivo, and cross-resistance studies with Eleusine indica plants harbouring a mutation in alpha-tubulin led to the description of alpha-tubulin as a target site of cyanoacrylates (Tresch et al., 2005).

II. The MoA of the herbicide flamprop-m-methyl was not previously known. The studies described in Tresch et al. (2008) indicate a primary effect on cell division. Detailed studies unravelled a specific effect on mitotic microtubule figures, causing a block in cell division. In contrast to other inhibitors of microtubule rearrangement such as dinitroanilines, flamprop-m-methyl did not influence microtubule assembly in vitro. An influence of flamprop-m-methyl on a target within the cytoskeleton signalling network could be proposed (Tresch et al., 2008).

III. The herbicide endothall is a protein phosphatase inhibitor structurally related to the natural product cantharidin. Bioassay studies indicated a dominant effect on dark-grown cells that was unrelated to effects observed in the light. Cytological characterisation of the microtubule cytoskeleton in corn tissue and heterotrophic tobacco cells showed a specific effect of endothall on mitotic spindle formation and on the ultrastructure of the nucleus, in combination with a decrease of the proliferation index. The observed effects are similar to those of other protein phosphatase inhibitors such as cantharidin and the structurally different okadaic acid. Additionally, the observed effects show similarities to knock-out lines of the TON1 pathway, a protein phosphatase-regulated signalling pathway. The data presented in Tresch et al. (2011) associate endothall's known in vitro inhibition of protein phosphatases with in vivo effects and suggest an interaction between endothall and the TON1 pathway.

IV. Mefluidide, a plant growth regulator, induces growth retardation and a specific phenotype indicating an inhibition of fatty acid biosynthesis. A test of cuticle functionality suggested a defect in the biosynthesis of very-long-chain fatty acids (VLCFA) or waxes. Metabolic profiling studies showed similarities with different groups of VLCFA synthesis inhibitors. Detailed analyses of VLCFA composition in tissues of duckweed (Lemna paucicostata) indicated a specific inhibition of the known herbicide target 3-ketoacyl-CoA synthase (KCS). Inhibitor studies using a yeast expression system established for plant KCS proteins verified the potency of mefluidide as an inhibitor of plant KCS enzymes. It could be shown that the strength of inhibition varied between different KCS homologues. The Arabidopsis Cer6 protein, whose knock-out induces a plant growth phenotype similar to mefluidide treatment, was one of the most sensitive KCS enzymes (Tresch et al., 2012).

The findings of my own work were combined with other publications reporting a successful identification of the MoA and primary target proteins of different compounds or compound classes. A revised three-tier approach for the MoA identification of phytoactive compounds is proposed. The approach consists of a 1st level aiming to address compound stability, uniformity of effects in different species, general cytotoxicity, and the effect on common processes such as transcription and translation. Based on these findings, advanced studies can be defined to start the 2nd level of MoA characterisation, either with further phenotypic characterisation, starting a genetic screen, or establishing a biochemical screen. At the 3rd level, enzyme assays or protein affinity studies should show the activity of the compound on the hypothesized target and should associate the in vitro effects with the in vivo profile of the compound.
Towards the 3D attenuation imaging of active volcanoes: methods and tests on real and simulated data
Abstract:
The purpose of my PhD thesis has been to address the problem of retrieving a three-dimensional attenuation model of volcanic areas. To this purpose, I first elaborated a robust strategy for the analysis of seismic data. This was done by performing several synthetic tests to assess the applicability of the spectral ratio method to our purposes. The results of the tests allowed us to conclude that: 1) the spectral ratio method gives reliable differential attenuation (dt*) measurements in smooth velocity models; 2) a short signal time window has to be chosen to perform the spectral analysis; 3) the frequency range over which spectral ratios are computed greatly affects the dt* measurements. Furthermore, a refined approach for the application of the spectral ratio method has been developed and tested. Through this procedure, the effects caused by heterogeneities of the propagation medium on the seismic signals may be removed. The tested data analysis technique was applied to the real active-seismic SERAPIS database. It provided a dataset of dt* measurements which was used to obtain a three-dimensional attenuation model of the shallowest part of the Campi Flegrei caldera. Then, a linearized, iterative, damped attenuation tomography technique was tested and applied to the selected dataset. The tomography, with a resolution of 0.5 km in the horizontal directions and 0.25 km in the vertical direction, allowed us to image important features in the offshore part of the Campi Flegrei caldera. High-Qp bodies are embedded in a high-attenuation body (Qp = 30). The latter correlates well with low Vp and high Vp/Vs values and is interpreted as a layer of saturated marine and volcanic sediments. The high-Qp anomalies, instead, are interpreted as the effect of either cooled lava bodies or a CO2 reservoir. A pseudo-circular high-Qp anomaly was detected and interpreted as the buried rim of the NYT caldera.
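A minimal sketch of the spectral ratio measurement described above, assuming the standard formulation in which the logarithmic spectral ratio varies linearly with frequency with slope proportional to dt*; the window length and frequency band are illustrative choices, not the settings used in the thesis.

```python
# Minimal sketch of a differential-attenuation (dt*) estimate from the spectral
# ratio of two seismogram windows: ln[A1(f)/A2(f)] ~ const + pi * f * dt*,
# where dt* = t*(trace2) - t*(trace1). Band limits are illustrative.
import numpy as np

def spectral_ratio_dtstar(trace1, trace2, dt, fmin=5.0, fmax=20.0):
    """Estimate dt* (seconds) from two equally sampled windows (sampling step dt, in s)."""
    n = min(len(trace1), len(trace2))
    window = np.hanning(n)
    freqs = np.fft.rfftfreq(n, d=dt)
    spec1 = np.abs(np.fft.rfft(np.asarray(trace1[:n]) * window))
    spec2 = np.abs(np.fft.rfft(np.asarray(trace2[:n]) * window))

    band = (freqs >= fmin) & (freqs <= fmax) & (spec1 > 0) & (spec2 > 0)
    log_ratio = np.log(spec1[band] / spec2[band])
    slope, _intercept = np.polyfit(freqs[band], log_ratio, deg=1)  # linear fit over the band
    return slope / np.pi

# Synthetic check: attenuate a pulse by t* = 0.02 s and recover it.
dt = 0.005
t = np.arange(0.0, 2.0, dt)
pulse = np.exp(-((t - 0.5) ** 2) / 0.001)
freqs = np.fft.rfftfreq(len(t), d=dt)
attenuated = np.fft.irfft(np.fft.rfft(pulse) * np.exp(-np.pi * freqs * 0.02), n=len(t))
print(spectral_ratio_dtstar(pulse, attenuated, dt))  # approximately 0.02 s
```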
Abstract:
In recent years, several previously unknown phenomena have been observed experimentally, such as the existence of distinct pre-nucleation structures. These observations have contributed to a new understanding of the processes that occur at the molecular level during the nucleation and growth of crystals. The effects of such pre-nucleation structures on the process of biomineralisation are not yet sufficiently understood. The mechanisms by which biomolecular modifiers, such as peptides, might interact with pre-nucleation structures and thus influence the nucleation process of minerals are manifold. Molecular simulations are well suited to analysing the formation of pre-nucleation structures in the presence of modifiers. The present work describes an approach for analysing the interaction of peptides with the dissolved constituents of the forming crystals by means of molecular dynamics simulations.

To enable informative simulations, in a first step the quality of existing force fields was examined with respect to their description of oligoglutamates interacting with calcium ions in aqueous solution. It turned out that there are large discrepancies between established force fields and that none of the force fields examined provided a realistic description of the ion pairing of these complex ions. Therefore, a strategy for optimising existing biomolecular force fields in this respect was developed. Relatively small changes to the parameters governing the ion-peptide van der Waals interactions were sufficient to obtain a reliable model for the system under investigation.

Comprehensive sampling of the phase space of these systems is particularly challenging because of the numerous degrees of freedom and the strong interactions between calcium ions and glutamate in solution. Therefore, the biasing potential replica exchange molecular dynamics method was tuned for the sampling of oligoglutamates, and peptides of different chain lengths were simulated in the presence of calcium ions. With the help of sketch-map analysis, numerous stable ion-peptide complexes that could influence the formation of pre-nucleation structures were identified in the simulations. Depending on the chain length of the peptide, these complexes exhibit characteristic distances between the calcium ions. These resemble some of the calcium-calcium distances in those phases of calcium oxalate crystals that were grown in the presence of oligoglutamates. The analogy between the calcium-calcium distances in dissolved ion-peptide complexes and in calcium oxalate crystals may point to the importance of ion-peptide complexes in the nucleation and growth of biominerals, and offers a possible explanation for the experimentally observed ability of oligoglutamates to influence the phase of the forming crystal.
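To illustrate the kind of distance analysis described above, the sketch below accumulates Ca-Ca pair distances over simulation snapshots; the array layout, units and toy data are assumptions for illustration and do not reproduce the thesis workflow.

```python
# Minimal sketch (not the thesis workflow): distribution of Ca(2+)-Ca(2+)
# distances within an ion-peptide complex, accumulated over simulation frames.
# `frames` is assumed to hold calcium coordinates per snapshot with shape
# (n_frames, n_calcium, 3), in nanometres; a real trajectory would be loaded
# with a trajectory library instead.
import numpy as np

def ca_ca_distance_histogram(frames: np.ndarray, bins=100, r_max=1.5):
    """Histogram of pairwise Ca-Ca distances accumulated over all frames."""
    n_frames, n_ca, _ = frames.shape
    i, j = np.triu_indices(n_ca, k=1)                  # unique ion pairs
    separations = frames[:, i, :] - frames[:, j, :]    # (n_frames, n_pairs, 3)
    distances = np.linalg.norm(separations, axis=-1).ravel()
    counts, edges = np.histogram(distances, bins=bins, range=(0.0, r_max))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts

# Toy example: two calcium ions fluctuating around a 0.6 nm separation.
rng = np.random.default_rng(1)
toy = np.zeros((500, 2, 3))
toy[:, 1, 0] = 0.6 + 0.05 * rng.standard_normal(500)
centers, counts = ca_ca_distance_histogram(toy)
print(centers[np.argmax(counts)])  # peak near 0.6 nm
```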
Abstract:
In condensed matter systems, the interfacial tension plays a central role in a multitude of phenomena. It is the driving force for nucleation processes, determines the shape and structure of crystalline structures, and is important for industrial applications. Despite its importance, the interfacial tension is hard to determine in experiments and also in computer simulations. While sophisticated simulation methods exist to compute liquid-vapor interfacial tensions, current methods for solid-liquid interfaces produce unsatisfactory results.

As a first approach to this topic, the influence of the interfacial tension on nuclei is studied within the three-dimensional Ising model. This model is well suited because, despite its simplicity, one can learn much about the nucleation of crystalline nuclei. Below the so-called roughening temperature, nuclei in the Ising model are no longer spherical but become cubic because of the anisotropy of the interfacial tension. This is similar to crystalline nuclei, which are in general not spherical but rather resemble a convex polyhedron with flat facets on the surface. In this context, the problem of distinguishing between the two bulk phases in the vicinity of the diffuse droplet surface is addressed. A new definition is found which correctly determines the volume of a droplet in a given configuration when compared to the volume predicted by simple macroscopic assumptions.

To compute the interfacial tension of solid-liquid interfaces, a new Monte Carlo method called the "ensemble switch method" is presented, which allows the interfacial tension of liquid-vapor as well as solid-liquid interfaces to be computed with great accuracy. In the past, the dependence of the interfacial tension on the finite size and shape of the simulation box has often been neglected, although there is a nontrivial dependence on the box dimensions. As a consequence, one needs to systematically increase the box size and extrapolate to infinite volume in order to accurately predict the interfacial tension. Therefore, a thorough finite-size scaling analysis is established in this thesis. Logarithmic corrections to the finite-size scaling are motivated and identified; they are of leading order and therefore must not be neglected. The astounding feature of these logarithmic corrections is that they do not depend at all on the model under consideration. Using the ensemble switch method, the validity of a finite-size scaling ansatz containing the aforementioned logarithmic corrections is carefully tested and confirmed. Combining the finite-size scaling theory with the ensemble switch method, the interfacial tension of several model systems, ranging from the Ising model to colloidal systems, is computed with great accuracy.
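The extrapolation to infinite volume can be illustrated with a small fitting sketch; the particular scaling ansatz written below (a 1/L² term with a logarithmic correction) and the data points are illustrative assumptions, not necessarily the form derived in the thesis.

```python
# Minimal sketch of the extrapolation step: fit finite-box estimates gamma(L)
# with an assumed scaling ansatz and read off the infinite-volume value.
# The form gamma(L) = gamma_inf + (a*ln(L) + b)/L**2 and the data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def ansatz(L, gamma_inf, a, b):
    return gamma_inf + (a * np.log(L) + b) / L**2

L_values = np.array([8, 12, 16, 24, 32, 48], dtype=float)
gamma_true = 1.20
gamma_measured = ansatz(L_values, gamma_true, a=0.8, b=-0.3)
gamma_measured += 0.002 * np.random.default_rng(2).standard_normal(L_values.size)

params, cov = curve_fit(ansatz, L_values, gamma_measured, p0=(1.0, 0.0, 0.0))
gamma_inf, a, b = params
print(f"extrapolated gamma_inf = {gamma_inf:.4f} (true value {gamma_true})")
```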
Abstract:
Every year, thousands of surgical procedures are performed in order to repair or, where possible, completely replace organs or tissues affected by degenerative diseases. Patients with these kinds of illnesses wait a long time for a donor who can replace, in a short time, the damaged organ or tissue. The shortage of biological alternatives associated with conventional surgical treatments, such as autografts, allografts, and xenografts, led researchers from different areas to collaborate in the search for innovative solutions. This research gave rise to a new discipline able to merge knowledge from molecular biology, biomaterials, engineering, biomechanics and, recently, design and architecture. This discipline is named Tissue Engineering (TE) and it represents a step forward towards substitutive or regenerative medicine. One of the major challenges of TE is to design and develop, using a biomimetic approach, an artificial 3D anatomical scaffold suitable for the adhesion of cells that are able to proliferate and differentiate in response to the biological and biophysical stimuli offered by the specific tissue to be replaced. Nowadays, powerful instruments allow increasingly accurate and well-defined analyses to be performed on patients who need more precise diagnoses and treatments. Starting from patient-specific information provided by CT (Computed Tomography), microCT and MRI (Magnetic Resonance Imaging), an image-based approach can be used to reconstruct the site to be replaced. With the aid of recent Additive Manufacturing techniques, which allow three-dimensional objects to be printed with sub-millimetric precision, it is now possible to exercise almost complete control over the parametric characteristics of the scaffold: this is the way to achieve correct cellular regeneration. In this work, we focus on the branch of TE known as Bone TE, in which bone is the main subject. Bone TE combines the osteoconductive and morphological aspects of the scaffold, whose main properties are pore diameter, structure porosity and interconnectivity. Achieving the ideal values of these parameters is the main goal of this work: here we create a simple and interactive biomimetic design process, based on 3D CAD modelling and generative algorithms, that provides a way to control the main properties and to create a structure morphologically similar to cancellous bone. Two different typologies of scaffold are compared: the first is based on Triply Periodic Minimal Surfaces (T.P.M.S.), whose basic crystalline geometries are nowadays used for Bone TE scaffolding; the second is based on Voronoi diagrams, which are more often used in the design of decorations and jewellery for their capacity to decompose and tessellate a volumetric space with a heterogeneous spatial distribution (frequent in nature). In this work, we show how to manipulate the main properties (pore diameter, structure porosity and interconnectivity) of the TE-oriented scaffold design through the implementation of generative algorithms: "bringing back the nature to the nature".
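As an illustration of the TPMS route, the sketch below samples the implicit gyroid equation on a voxel grid and estimates porosity from the level-set threshold; the resolution, number of unit cells and threshold are arbitrary illustrative values, not the parameters used in this work.

```python
# Minimal sketch of a TPMS-based scaffold geometry: sample the gyroid implicit
# surface sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = t on a voxel grid and
# estimate porosity as the void-voxel fraction. All values are illustrative.
import numpy as np

def gyroid_porosity(n_voxels=100, n_cells=2.0, threshold=0.0):
    coords = np.linspace(0.0, 2.0 * np.pi * n_cells, n_voxels)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    field = (np.sin(x) * np.cos(y) +
             np.sin(y) * np.cos(z) +
             np.sin(z) * np.cos(x))
    solid = field <= threshold          # one of the two labyrinths is kept as material
    return 1.0 - solid.mean()           # porosity = void fraction

print(f"porosity at t=0.0: {gyroid_porosity():.2f}")                # ~0.50 by symmetry
print(f"porosity at t=0.6: {gyroid_porosity(threshold=0.6):.2f}")   # larger solid fraction, lower porosity
```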
Abstract:
This work aims to evaluate the reliability of levee systems by calculating the probability of failure of given levee stretches under different loads, using probabilistic methods that rely on fragility curves obtained through the Monte Carlo method. Overtopping and piping are considered as failure mechanisms (since these are the most frequent), and the major levee system of the Po River is analysed, with a primary focus on the section between Piacenza and Cremona in the lower-middle area of the Padana Plain. The novelty of this approach is that it checks the reliability of individual embankment stretches, not just a single cross-section, while taking into account the variability of the levee-system geometry from one stretch to another. For each levee stretch analysed, this work also considers a probability distribution of the load variables involved in the definition of the fragility curves, which is influenced by differences in the topography and morphology of the riverbed along the analysed reach as they pertain to the levee system in its entirety. A classification is proposed, for both failure mechanisms, to give an indication of the reliability of the levee system based on the information obtained from the fragility curve analysis. To accomplish this, a hydraulic model was developed in which a 500-year flood is simulated to determine the residual hazard of failure for each levee stretch at the corresponding water depth, and the results are then compared with the proposed classifications. This work additionally aims to act as an interface between the worlds of Applied Geology and Environmental Hydraulic Engineering, where strong collaboration between the two professions is needed to improve the estimation of hydraulic risk.
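A minimal sketch of how a fragility curve can be built by Monte Carlo sampling is given below; the simple overtopping limit state (failure when the water level exceeds an uncertain effective crest height) and all parameter values are simplifying assumptions, not the models or data used in this study.

```python
# Minimal sketch (illustrative only): an overtopping fragility curve estimated
# by Monte Carlo sampling of an uncertain effective crest height.
import numpy as np

rng = np.random.default_rng(3)

def overtopping_fragility(water_levels, crest_mean=8.0, crest_sd=0.4, n_samples=20_000):
    """P(failure | water level), with crest height ~ Normal(mean, sd), in metres."""
    crest_samples = rng.normal(crest_mean, crest_sd, size=n_samples)
    levels = np.asarray(water_levels, dtype=float)
    # Failure indicator for every (level, sample) pair, averaged over the samples.
    return (levels[:, None] > crest_samples[None, :]).mean(axis=1)

levels = np.linspace(6.5, 9.5, 13)
for h, p in zip(levels, overtopping_fragility(levels)):
    print(f"water level {h:4.1f} m -> P(failure) = {p:.3f}")
```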
Abstract:
Background: External validity of study results is an important issue from a clinical point of view. From a methodological point of view, however, the concept of external validity is more complex than it seems at first glance. Methods: Methodological review to address the concept of external validity. Results: External validity refers to the question of whether results are generalizable to persons other than the population in the original study. The only formal way to establish external validity would be to repeat the study for the specific target population. We propose a three-way approach for assessing the external validity for specified target populations. (i) The study population might not be representative of the eligibility criteria that were intended. It should be addressed whether the study population differs from the intended source population with respect to characteristics that influence outcome. (ii) The target population will, by definition, differ from the study population with respect to geographical, temporal and ethnic conditions. Pondering external validity means asking whether these differences may influence study results. (iii) It should be assessed whether the study's conclusions can be generalized to target populations that do not meet all the eligibility criteria. Conclusion: Judging the external validity of study results cannot be done by applying given eligibility criteria to a single target population. Rather, it is a complex reflection in which prior knowledge, statistical considerations, biological plausibility and eligibility criteria all have a place.
Abstract:
When estimating the effect of treatment on HIV using data from observational studies, standard methods may produce biased estimates due to the presence of time-dependent confounders. Such confounding can be present when a covariate, affected by past exposure, is a predictor of both the future exposure and the outcome. One example is the CD4 cell count, which is a marker of disease progression in HIV patients but also a marker for treatment initiation, and is itself influenced by treatment. Fitting a marginal structural model (MSM) using inverse probability weights is one way to adjust appropriately for this type of confounding. In this paper we study a simple and intuitive approach to estimate similar treatment effects, using observational data to mimic several randomized controlled trials. Each 'trial' is constructed based on individuals starting treatment in a certain time interval. An overall effect estimate for all such trials is found using composite likelihood inference. The method offers an alternative to the use of inverse probability of treatment weights, which is unstable in certain situations. The estimated parameter is not identical to that of an MSM; it is conditional on covariate values at the start of each mimicked trial. This allows the study of questions that are not as easily addressed by fitting an MSM. The analysis can be performed as a stratified weighted Cox analysis on the joint data set of all the constructed trials, where each trial is one stratum. The model is applied to data from the Swiss HIV cohort study.
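A minimal sketch of the mimicked-trials idea is given below, using hypothetical column names and toy data: one trial is constructed per time interval, the trials are stacked, and a single stratified, weighted Cox model is fitted across them. It is an illustration of the scheme, not the paper's implementation.

```python
# Minimal sketch: build "mimicked trials" from an observational cohort and fit
# one stratified, weighted Cox model across them. Column names, the toy data
# and the constant placeholder weights are assumptions for illustration only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)

# Toy cohort: time when treatment starts (inf = never), event time, baseline CD4.
n = 300
cohort = pd.DataFrame({
    "treat_start": np.where(rng.random(n) < 0.6, rng.integers(0, 6, n), np.inf),
    "event_time": rng.exponential(10.0, n),
    "cd4": rng.normal(350, 100, n),
})

trials = []
for k in range(6):  # one mimicked trial per interval [k, k+1)
    eligible = cohort[(cohort["treat_start"] >= k) & (cohort["event_time"] > k)].copy()
    eligible["treated"] = (eligible["treat_start"] < k + 1).astype(int)  # starts treatment in this interval
    eligible["T"] = eligible["event_time"] - k                           # time since trial start
    eligible["E"] = 1                                                    # toy data: all events observed
    eligible["trial"] = k
    eligible["w"] = 1.0                                                  # placeholder weights
    trials.append(eligible[["T", "E", "treated", "cd4", "trial", "w"]])

stacked = pd.concat(trials, ignore_index=True)  # individuals may appear in several trials

cph = CoxPHFitter()
cph.fit(stacked, duration_col="T", event_col="E",
        strata=["trial"], weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```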
Abstract:
Aim The strawberry poison frog, Oophaga pumilio, has undergone a remarkable radiation of colour morphs in the Bocas del Toro archipelago in Panama. This species shows extreme variation in colour and pattern between populations that have been geographically isolated for < 10,000 years. While previous research has suggested the involvement of divergent selection, to date no quantitative test has examined this hypothesis. Location Bocas del Toro archipelago, Panama. Methods We use a combination of population genetics, phylogeography and phenotypic analyses to test for divergent selection in coloration in O. pumilio. Tissue samples of 88 individuals from 15 distinct populations were collected. Using these data, we developed a gene tree using the mitochondrial DNA (mtDNA) d-loop region. Using parameters derived from our mtDNA phylogeny, we predicted the coalescence of a hypothetical nuclear gene underlying coloration. We collected spectral reflectance and body size measurements on 94 individuals from four of the populations and performed a quantitative analysis of phenotypic divergence. Results The mtDNA d-loop tree revealed considerable polyphyly across populations. Coalescent reconstructions of gene trees within population trees revealed incomplete genotypic sorting among populations. The quantitative analysis of phenotypic divergence revealed complete lineage sorting by colour, but not by body size: populations showed non-overlapping variation in spectral reflectance measures of body coloration, while variation in body size did not separate populations. Simulations of the coalescent using parameter values derived from our empirical analyses demonstrated that the level of sorting among populations seen in colour cannot reasonably be attributed to drift. Main conclusions These results imply that divergence in colour, but not body size, is occurring at a faster rate than expected under neutral processes. Our study provides the first quantitative support for the claim that strong diversifying selection underlies colour variation in the strawberry poison frog.
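As a schematic counterpart to the neutrality test described above, the sketch below simulates pure drift of a quantitative trait in isolated populations and asks how often completely non-overlapping trait ranges arise; the population sizes, generation counts and trait variances are arbitrary illustrative choices, not parameters from the study.

```python
# Schematic sketch only: neutral drift of a quantitative trait in isolated
# populations, used to gauge how often drift alone yields complete sorting
# (non-overlapping trait ranges). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(5)

def fraction_nonoverlapping(n_pops=4, pop_size=200, generations=500,
                            heritable_sd=0.05, n_replicates=200):
    """Fraction of drift-only replicates where all population trait ranges are disjoint."""
    disjoint = 0
    for _ in range(n_replicates):
        pops = [np.zeros(pop_size) for _ in range(n_pops)]
        for _ in range(generations):
            for i, pop in enumerate(pops):
                parents = rng.choice(pop, size=pop_size, replace=True)   # drift
                pops[i] = parents + rng.normal(0.0, heritable_sd, pop_size)
        ranges = sorted((p.min(), p.max()) for p in pops)
        if all(ranges[k][1] < ranges[k + 1][0] for k in range(n_pops - 1)):
            disjoint += 1
    return disjoint / n_replicates

print(fraction_nonoverlapping(n_replicates=20))  # small replicate count to keep runtime short
```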
Abstract:
This publication offers concrete suggestions for implementing an integrative and learning-oriented approach to agricultural extension with the goal of fostering sustainable development. It targets governmental and non-governmental organisations, development agencies, and extension staff working in the field of rural development. The book looks into the conditions and trends that influence extension today, and outlines new challenges and necessary adaptations. It offers a basic reflection on the goals, the criteria for success and the form of a state-of-the-art approach to extension. The core of the book consists of a presentation of Learning for Sustainability (LforS), an example of an integrative, learning-oriented approach that is based on three crucial elements: stakeholder dialogue, knowledge management, and organizational development. Awareness raising and capacity building, social mobilization, and monitoring & evaluation are additional building blocks. The structure and organisation of the LforS approach as well as a selection of appropriate methods and tools are presented. The authors also address key aspects of developing and managing a learning-oriented extension approach. The book illustrates how LforS can be implemented by presenting two case studies, one from Madagascar and one from Mongolia. It addresses conceptual questions and at the same time it is practice-oriented. In contrast to other extension approaches, LforS does not limit its focus to production-related aspects and the development of value chains: it also addresses livelihood issues in a broad sense. With its focus on learning processes LforS seeks to create a better understanding of the links between different spheres and different levels of decision-making; it also seeks to foster integration of the different actors’ perspectives.
Abstract:
A central design challenge facing network planners is how to select a cost-effective network configuration that can provide uninterrupted service despite edge failures. In this paper, we study the Survivable Network Design (SND) problem, a core model underlying the design of such resilient networks that incorporates complex cost and connectivity trade-offs. Given an undirected graph with specified edge costs and (integer) connectivity requirements between pairs of nodes, the SND problem seeks the minimum cost set of edges that interconnects each node pair with at least as many edge-disjoint paths as the connectivity requirement of the nodes. We develop a hierarchical approach for solving the problem that integrates ideas from decomposition, tabu search, randomization, and optimization. The approach decomposes the SND problem into two subproblems, Backbone design and Access design, and uses an iterative multi-stage method for solving the SND problem in a hierarchical fashion. Since both subproblems are NP-hard, we develop effective optimization-based tabu search strategies that balance intensification and diversification to identify near-optimal solutions. To initiate this method, we develop two heuristic procedures that can yield good starting points. We test the combined approach on large-scale SND instances, and empirically assess the quality of the solutions vis-à-vis optimal values or lower bounds. On average, our hierarchical solution approach generates solutions within 2.7% of optimality even for very large problems (that cannot be solved using exact methods), and our results demonstrate that the performance of the method is robust for a variety of problems with different size and connectivity characteristics.
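A small feasibility check for a candidate SND solution can be written directly from the problem definition: every node pair must be joined by at least as many edge-disjoint paths as its requirement, which by Menger's theorem equals the local edge connectivity. The sketch below is such a checker on a toy instance, not the hierarchical algorithm of the paper.

```python
# Minimal sketch: verify that a candidate edge set satisfies the edge-disjoint
# path requirements of an SND instance, and compute its cost. Costs and
# requirements below are toy values.
import networkx as nx

def is_feasible(edges, requirements):
    """edges: iterable of (u, v); requirements: dict {(u, v): r_uv}."""
    graph = nx.Graph()
    graph.add_edges_from(edges)
    for (u, v), r in requirements.items():
        if u not in graph or v not in graph:
            return False
        # Local edge connectivity = maximum number of edge-disjoint u-v paths.
        if nx.edge_connectivity(graph, u, v) < r:
            return False
    return True

def total_cost(edges, cost):
    return sum(cost[frozenset(e)] for e in edges)

# Toy instance: a 4-node ring plus one chord.
cost = {frozenset(e): c for e, c in [((1, 2), 3), ((2, 3), 2), ((3, 4), 3),
                                     ((4, 1), 2), ((1, 3), 4)]}
requirements = {(1, 3): 2, (2, 4): 2}
candidate = [(1, 2), (2, 3), (3, 4), (4, 1)]        # the ring alone
print(is_feasible(candidate, requirements), total_cost(candidate, cost))  # True 10
```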
Abstract:
Smoke spikes occurring during transient engine operation have detrimental health effects and increase fuel consumption by requiring more frequent regeneration of the diesel particulate filter. This paper proposes a decision tree approach to real-time detection of smoke spikes for control and on-board diagnostics purposes. A contemporary, electronically controlled heavy-duty diesel engine was used to investigate the deficiencies of smoke control based on the fuel-to-oxygen-ratio limit. With the aid of transient and steady state data analysis and empirical as well as dimensional modeling, it was shown that the fuel-to-oxygen ratio was not estimated correctly during the turbocharger lag period. This inaccuracy was attributed to the large manifold pressure ratios and low exhaust gas recirculation flows recorded during the turbocharger lag period, which meant that engine control module correlations for the exhaust gas recirculation flow and the volumetric efficiency had to be extrapolated. The engine control module correlations were based on steady state data and it was shown that, unless the turbocharger efficiency is artificially reduced, the large manifold pressure ratios observed during the turbocharger lag period cannot be achieved at steady state. Additionally, the cylinder-to-cylinder variation during this period was shown to be sufficiently significant to make the average fuel-to-oxygen ratio a poor predictor of the transient smoke emissions. The steady state data also showed higher smoke emissions with higher exhaust gas recirculation fractions at constant fuel-to-oxygen-ratio levels. This suggests that, even if the fuel-to-oxygen ratios were to be estimated accurately for each cylinder, they would still be ineffective as smoke limiters. A decision tree trained on snap throttle data and pruned with engineering knowledge was able to use the inaccurate engine control module estimates of the fuel-to-oxygen ratio together with information on the engine control module estimate of the exhaust gas recirculation fraction, the engine speed, and the manifold pressure ratio to predict 94% of all spikes occurring over the Federal Test Procedure cycle. The advantages of this non-parametric approach over other commonly used parametric empirical methods such as regression were described. An application of accurate smoke spike detection, in which the injection pressure is increased at points of high opacity to substantially reduce the cumulative particulate matter emissions with a minimal increase in the cumulative nitrogen oxide emissions, was illustrated with dimensional and empirical modeling.
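To illustrate the decision tree idea, the sketch below trains a shallow tree on synthetic features of the kind discussed (fuel-to-oxygen ratio estimate, EGR fraction estimate, engine speed, manifold pressure ratio); the data and the labelling rule are assumptions, not the engine measurements or the snap throttle data used in the paper.

```python
# Minimal sketch (illustrative only): a decision tree flagging smoke spikes
# from engine control module estimates. The synthetic data and the spike rule
# used to label it are assumptions, not the paper's data or labels.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 5000
X = np.column_stack([
    rng.uniform(0.2, 1.0, n),    # estimated fuel-to-oxygen equivalence ratio
    rng.uniform(0.0, 0.3, n),    # estimated EGR fraction
    rng.uniform(800, 2100, n),   # engine speed [rpm]
    rng.uniform(0.8, 2.5, n),    # manifold pressure ratio
])
# Toy labelling rule standing in for measured opacity spikes.
y = ((X[:, 0] > 0.7) & (X[:, 3] > 1.8) & (X[:, 1] > 0.1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50)  # shallow tree, akin to pruning
tree.fit(X_train, y_train)
spikes = y_test == 1
print(f"detected {tree.predict(X_test)[spikes].mean():.0%} of test-set spikes")
```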