930 results for Modellazione 3D, Blender, Leap Motion, Leap Aided Modelling, NURBS, Computer Grafica


Relevance:

30.00%

Publisher:

Abstract:

This work is motivated by the problem of the formation of perceptual units at the level of the primary visual cortex V1. The Citti-Sarti geometric model is studied in detail, with particular attention to the modelling of visual association phenomena, and a connectivity model is examined thoroughly. The original contribution lies in adapting the diffusion maps method, recently introduced by Coifman and Lafon, to the sub-Riemannian geometry of the visual cortex. Tools from potential theory, spectral theory and harmonic analysis on Lie groups are used to approximate the eigenfunctions of the heat operator on the group of rigid motions of the plane. These eigenfunctions are then used to extract perceptual units from the visual stimulus. Original experimental evidence of the effectiveness of the method is presented.
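As a hedged illustration of the diffusion maps construction referenced above, here is a minimal sketch in the standard Euclidean setting; the Gaussian kernel, the bandwidth `eps` and the function name are illustrative assumptions, not the thesis' sub-Riemannian implementation on the group of rigid motions of the plane:

```python
import numpy as np

def diffusion_maps(X, eps=1.0, n_components=4, t=1):
    """Standard Coifman-Lafon diffusion maps on a point cloud X (n_samples x n_features).
    Returns the leading non-trivial diffusion coordinates."""
    # Pairwise squared distances and Gaussian affinity kernel
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)

    # Normalize to a Markov transition matrix (random walk on the data graph)
    P = K / K.sum(axis=1, keepdims=True)

    # Spectral decomposition of the transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]

    # Skip the trivial constant eigenvector; scale by eigenvalue^t
    return vecs[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)

# Example: noisy points on a circle; the first two diffusion coordinates
# recover the circular structure, and clustering in diffusion space would
# group points into "perceptual units".
theta = np.random.default_rng(0).uniform(0, 2 * np.pi, 200)
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.default_rng(1).normal(size=(200, 2))
coords = diffusion_maps(pts, eps=0.2, n_components=2)
```

In the thesis the affinity kernel is adapted to the sub-Riemannian connectivity of the cortex rather than to Euclidean distances; the Euclidean Gaussian kernel above is only a placeholder.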

Relevance:

30.00%

Publisher:

Abstract:

Because of the potentially irreversible impact of groundwater quality deterioration in the Ferrara coastal aquifer, answers concerning the assessment of the extent of the salinization problem, the understanding of the mechanisms governing salinization processes, and the sustainability of the current water resources management are urgent. In this light, the present thesis aims to achieve the following objectives: (i) characterization of the lowland coastal aquifer of Ferrara: hydrology, hydrochemistry and evolution of the system; (ii) the importance of data acquisition techniques in saltwater intrusion monitoring; (iii) predicting salinization trends in the lowland coastal aquifer; (iv) ammonium occurrence in a salinized lowland coastal aquifer; and (v) trace element mobility in a saline coastal aquifer.

Relevance:

30.00%

Publisher:

Abstract:

In western industrialized countries, breast cancer is the most frequent malignant tumour in women, accounting for roughly 21 % of all cancers in women worldwide. By now, one in nine women is at risk of developing breast cancer during her lifetime, and the age-standardized mortality rate currently stands at just under 27 %.

Breast cancer has a relatively low growth rate. A diagnostic procedure with which all breast carcinomas below 10 mm in diameter could be detected and removed would practically eliminate death from breast cancer, since the 20-year survival rate for initial carcinomas of 5 to 10 mm is very high, at over 95 %.

Contrast-enhanced MRI is a relatively young examination method that is sensitive enough to detect carcinomas from a diameter of 3 mm. The diagnostic methodology, however, is complex and error-prone, and requires a long training period and therefore a great deal of experience on the part of the radiologist.

Computer-aided diagnosis software can improve the quality of such a complex diagnosis, or at least speed up the process. The aim of this work is the development of fully automatic diagnosis software that can be used as a second-opinion system; to my knowledge, no such complete software exists to date.

The software executes a chain of image processing steps that mimic the radiologist's workflow and produces an independent diagnosis for every detected lesion. First, a 3D image registration removes motion artefacts as a pre-processing step to improve the image quality for the subsequent stages. Every contrast-enhancing object is then detected by a rule-based segmentation with adaptive thresholds. Kinetic and morphological features are computed to describe the contrast-uptake behaviour as well as the shape, margin and texture properties of each object. Finally, based on the resulting feature vector, two trained neural networks classify each object either as an additional finding or as a benign or malignant lesion.

The performance of the software was tested on image data from 101 female patients containing 141 histologically confirmed lesions. Predicting whether these lesions were benign or malignant yielded a sensitivity of 88 % at a specificity of 72 %, values similar to the predictions of expert radiologists reported in the literature. On average, the predictions contained 2.5 additional malignant findings per patient, which turned out to be misclassified artefacts.
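A minimal, hypothetical sketch of the final classification stage described above (a per-lesion feature vector fed to a small neural network); the feature layout, network size and scikit-learn usage are illustrative assumptions, not the author's implementation:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-lesion feature vector: kinetic features (e.g. wash-in, wash-out)
# plus morphological features (e.g. volume, margin sharpness, texture contrast).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # placeholder features for 200 objects
y_benign_malignant = rng.integers(0, 2, 200)    # placeholder labels (0 = benign, 1 = malignant)

# One of the two networks described above: benign vs. malignant classification.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
clf.fit(X, y_benign_malignant)

# Sensitivity and specificity would then be evaluated on histologically confirmed lesions.
print(clf.predict(X[:5]))
```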

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a new Artificial Neural Network (ANN) able to predict at once the main parameters representative of the wave-structure interaction processes, i.e. the wave overtopping discharge, the wave transmission coefficient and the wave reflection coefficient. The new ANN has been specifically developed in order to provide managers and scientists with a tool that can be efficiently used for design purposes. The development of this ANN started with the preparation of a new, extended and homogeneous database that collects all the available tests reporting at least one of the three parameters, for a total of 16,165 data points. The variety of structure types and wave attack conditions in the database includes smooth, rock and armour unit slopes, berm breakwaters, vertical walls, low-crested structures and oblique wave attacks. Some of the existing ANNs were compared and improved, leading to the selection of a final ANN whose architecture was optimized through an in-depth sensitivity analysis of its training parameters. Each of the 15 selected input parameters represents a physical aspect of the wave-structure interaction process, describing the wave attack (wave steepness and obliquity, breaking and shoaling factors), the structure geometry (submergence, straight or non-straight slope, with or without berm or toe, presence or absence of a crown wall), or the structure type (smooth or covered by an armour layer, with permeable or impermeable core). The advanced ANN proposed here provides accurate predictions for all three parameters and proves able to overcome the limits imposed by the traditional formulae and by the approaches adopted so far by some of the existing ANNs. The possibility of adopting a single model to obtain a handy and accurate evaluation of the overall performance of a coastal or harbour structure is the most important and transferable result of this work.
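As an illustration of the kind of model described above (15 inputs, three simultaneous outputs), here is a minimal multi-output feed-forward network sketch; the layer size, synthetic data and scikit-learn API are assumptions, not the ANN developed in the thesis:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder dataset: 15 input parameters describing wave attack, geometry and
# structure type; 3 targets (overtopping discharge, transmission and reflection
# coefficients). Shapes mirror the description; values are synthetic.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 15))
Y = rng.normal(size=(1000, 3))

# A single network predicting the three wave-structure interaction parameters at once.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0))
model.fit(X, Y)
print(model.predict(X[:2]))   # -> array of shape (2, 3)
```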

Relevance:

30.00%

Publisher:

Abstract:

In many areas of industrial manufacturing, for example in the automotive industry, digital mock-ups are used so that the development of complex machines can be supported as well as possible by computer systems. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can be assembled without collisions. In recent decades, sampling-based methods have proven particularly successful: they generate a large number of random placements for the object to be inserted or removed and use a collision detection mechanism to check each placement for validity. Collision detection therefore plays a key role in the design of efficient motion planning algorithms. A difficulty for this class of planners are so-called "narrow passages", which occur wherever the freedom of movement of the objects to be planned is strongly restricted. In such regions it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may be needed to achieve good performance.

This thesis consists of two parts. In the first part we investigate parallel collision detection algorithms. Since we are aiming at an application within sampling-based motion planners, we consider a setting in which the same two objects are tested for collision in a large number of different relative placements. We implement and compare several methods that use bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All of the described methods were parallelized across multiple CPU cores. In addition, we compare different CUDA kernels for performing BVH-based collision tests on the GPU, studying both different ways of distributing the work among the parallel GPU threads and the impact of different memory access patterns on the performance of the resulting algorithms. We further present a series of approximate collision tests based on these methods; when a lower test accuracy is tolerable, this yields a further performance improvement.

In the second part of the thesis we describe a parallel sampling-based motion planner of our own design for handling highly complex problems with several "narrow passages". The method works in two phases. The basic idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in phase I is based on so-called Expansive Space Trees. In addition, we equipped the planner with a push-out operation that resolves small collisions and thus increases efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further reduces the accuracy of the first planning phase but also yields an additional performance gain.
The motion paths resulting from phase I may therefore not be completely collision-free. To repair these paths, we designed a novel planning algorithm that plans a new, collision-free path locally, restricted to a small neighbourhood around the existing path.
We tested the described algorithm on a class of new, difficult metal puzzles, some of which exhibit several "narrow passages". To the best of our knowledge, a collection of comparably complex benchmarks is not publicly available, nor did we find a description of comparably complex benchmarks in the motion planning literature.
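A minimal, sequential sketch of the BVH-vs-BVH collision query that sampling-based planners evaluate for every sampled placement; the node layout and descent heuristic are illustrative assumptions (the thesis parallelizes such tests on CPU cores and GPU threads and adds approximate variants):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BVHNode:
    # Axis-aligned bounding box (min/max corners) in world coordinates.
    lo: tuple
    hi: tuple
    left: Optional["BVHNode"] = None
    right: Optional["BVHNode"] = None   # leaves have left == right == None

def aabb_overlap(a: BVHNode, b: BVHNode) -> bool:
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

def bvh_collide(a: BVHNode, b: BVHNode) -> bool:
    """Recursive BVH-vs-BVH test: descend only into pairs whose boxes overlap."""
    if not aabb_overlap(a, b):
        return False
    if a.left is None and b.left is None:
        return True          # leaf vs. leaf: an exact primitive test would go here
    if a.left is not None:
        return bvh_collide(a.left, b) or bvh_collide(a.right, b)
    return bvh_collide(a, b.left) or bvh_collide(a, b.right)

# Two overlapping leaf boxes:
a = BVHNode(lo=(0, 0, 0), hi=(1, 1, 1))
b = BVHNode(lo=(0.5, 0.5, 0.5), hi=(2, 2, 2))
print(bvh_collide(a, b))   # True

# In a sampling-based planner, bvh_collide would be evaluated for a large batch of
# random placements of one object against the other.
```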

Relevance:

30.00%

Publisher:

Abstract:

Green roofs (GRs) are increasingly recognized as a suitable technology for mitigating the problems associated with urbanization; however, knowledge of the performance of extensive GRs in a sub-Mediterranean climate is still limited. This research is supported by 15 months of experimental measurements on two GRs located at the School of Engineering in Bologna. First, the hydrological and energy performances of the two GRs, one vegetated with Sedum (SR) and one with native perennial herbs (NR), are compared with each other and with a reference roof surface (RR). Both reduce runoff volumes and surface temperatures. The NR outperforms the SR both hydrologically and thermally: the physiology of the NR vegetation leads to daytime stomatal opening and hence to greater evapotranspiration (ET). The daily variations of substrate moisture in the SR were then studied, showing that their amplitude is influenced by temperature, initial moisture content and vegetative stage. These variations were simulated with a hydrological model based on the water balance equation and on two conventional models for the estimation of potential ET, combined with a soil moisture extraction function. Correction coefficients, obtained by calibration, were proposed to account for the differences between the reference crop and the GR vegetation during the growth stages. Finally, a model implemented in SWMM 5.1.007 using the Low Impact Development (LID) module was used in continuous simulations (12 months) to evaluate the retention performance of the SR and RR plots. The calibrated and validated model reproduces the runoff volumes from the two plots satisfactorily. After a detailed calibration, the model could support engineers and administrations in assessing the benefits of GR installations.
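A minimal sketch of the kind of daily water balance described above (storage updated with rainfall, actual ET limited by available moisture, excess becoming runoff); the variable names and the simple linear soil moisture extraction function are illustrative assumptions, not the calibrated model of the thesis:

```python
def daily_water_balance(rain, et_potential, s_max, s0=0.0):
    """Simple bucket model: storage is filled by rain, depleted by actual ET
    (potential ET scaled by relative storage), and water above capacity s_max
    leaves as runoff. Returns the daily runoff series."""
    storage, runoff = s0, []
    for p, et0 in zip(rain, et_potential):
        et_actual = et0 * (storage / s_max if s_max > 0 else 0.0)  # soil moisture extraction
        storage = max(storage + p - et_actual, 0.0)
        excess = max(storage - s_max, 0.0)        # water above capacity becomes runoff
        storage -= excess
        runoff.append(excess)
    return runoff

# Example: 5 days of rainfall [mm] and potential ET [mm] on a substrate holding 30 mm.
print(daily_water_balance([10, 0, 25, 0, 40], [3, 4, 3, 5, 4], s_max=30.0))
```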

Relevance:

30.00%

Publisher:

Abstract:

This work illustrates the process of creating the three-dimensional scenarios used in the training of aerodrome Air Traffic Controllers. The computer modelling workflow for scenarios compatible with the Adacel™ MaxSim® system currently used in the ENAV Academy simulators is described in detail, together with an analysis of the decisions that have to be taken before and during the various phases of scenario production. The processes and methodologies are illustrated with reference to the experience gained during the collaboration with the ENAV team in building three-dimensional models and scenarios, and in particular to the scenario of Ciampino Airport, developed for a training phase made necessary by the delicate transition from military to civilian management.

Relevance:

30.00%

Publisher:

Abstract:

Every year, thousands of surgical procedures are performed in order to repair or, where possible, completely replace organs or tissues affected by degenerative diseases. Patients with these kinds of illnesses wait a long time for a donor who could replace the damaged organ or tissue in a short time. The lack of biological alternatives to conventional surgical treatments such as autografts, allografts and xenografts has led researchers from different areas to collaborate on innovative solutions. This research gave rise to a new discipline able to merge knowledge from molecular biology, biomaterials, engineering, biomechanics and, recently, design and architecture. This discipline is named Tissue Engineering (TE) and represents a step towards substitutive or regenerative medicine. One of the major challenges of TE is to design and develop, using a biomimetic approach, an artificial 3D anatomical scaffold suitable for the adhesion of cells able to proliferate and differentiate in response to the biological and biophysical stimuli offered by the specific tissue to be replaced. Nowadays, powerful instruments allow increasingly accurate and well-defined analyses on patients who need more precise diagnoses and treatments. Starting from patient-specific information provided by CT (Computed Tomography), micro-CT and MRI (Magnetic Resonance Imaging), an image-based approach can be followed in order to reconstruct the site to be replaced. With the aid of recent Additive Manufacturing techniques, which allow three-dimensional objects to be printed with sub-millimetric precision, it is now possible to exercise almost complete control over the parametric characteristics of the scaffold: this is the way to achieve correct cellular regeneration. In this work we focus on a branch of TE known as Bone TE, whose main subject is bone. Bone TE combines the osteoconductive and morphological aspects of the scaffold, whose main properties are pore diameter, structure porosity and interconnectivity. Achieving the ideal values of these parameters is the main goal of this work: we create a simple and interactive biomimetic design process, based on 3D CAD modelling and generative algorithms, that provides a way to control the main properties and to create a structure morphologically similar to cancellous bone. Two different typologies of scaffold are compared: the first is based on Triply Periodic Minimal Surfaces (T.P.M.S.), whose basic crystalline geometries are nowadays used for Bone TE scaffolding; the second is based on Voronoi diagrams, which are more often used in the design of decorations and jewellery for their capacity to decompose and tessellate a volumetric space with a heterogeneous spatial distribution (frequent in nature). We show how to manipulate the main properties (pore diameter, structure porosity and interconnectivity) of the TE-oriented scaffold design through the implementation of generative algorithms: "bringing nature back to nature".
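As an illustration of the T.P.M.S. family mentioned above, here is a minimal sketch that samples the classical gyroid implicit surface on a voxel grid; the level-set threshold `t` (which controls porosity) and the grid resolution are illustrative assumptions, not the design process developed in the work:

```python
import numpy as np

def gyroid_scaffold(n=64, periods=2.0, t=0.0):
    """Boolean voxel grid of a gyroid-based scaffold.
    The gyroid T.P.M.S. is the zero level set of
        f(x, y, z) = sin x cos y + sin y cos z + sin z cos x;
    shifting the threshold t changes wall thickness and hence porosity."""
    u = np.linspace(0.0, 2.0 * np.pi * periods, n)
    x, y, z = np.meshgrid(u, u, u, indexing="ij")
    f = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)
    return f < t   # True where material is placed

solid = gyroid_scaffold(n=64, periods=2.0, t=0.2)
porosity = 1.0 - solid.mean()
print(f"estimated porosity: {porosity:.2f}")
```

A Voronoi-based scaffold would instead start from a random seed distribution and thicken the edges of the resulting diagram, so that porosity is controlled through seed density and strut thickness.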

Relevance:

30.00%

Publisher:

Abstract:

The work carried out in this thesis aims at creating a 3D model of the Rocca di Forlimpopoli using Maya and Unity 3D. The combination of these two technologies allows the user to employ the final product for a virtual visit of the fortress, moving freely within the model of the monument and also exploring the areas normally closed to the public or open only on special occasions.

Relevance:

30.00%

Publisher:

Abstract:

Nitrogen is one of the main products of the chemical industry, used primarily to ensure the safe storage of flammable compounds. Generators based on PSA (pressure swing adsorption) systems are often cheaper than traditional cryogenic distillation. PSA processes use a fixed-bed column, filled with adsorbent material, that selectively adsorbs one component from a gas mixture; oxygen diffuses much faster than nitrogen into the pores of carbon molecular sieves. Besides an excellent adsorbent material, the design is also fundamental for the performance of a PSA process. The adsorption step is followed by a desorption step, after which the adsorbent can be reused in the following cycle. The lack of a process simulator has so far made it necessary to rely on experimental data to develop new processes, an approach that is expensive and time-consuming. Mathematical modelling and simulation that take all transport phenomena into account are required both for a better understanding of the adsorbent and for process optimization. The column dynamics require the solution of sets of PDEs distributed in time and space. This work was carried out at the University of Applied Sciences in Münster, Germany. The subject of this thesis is the modelling and simulation of a PSA plant for nitrogen production with the process simulator Aspen Adsorption, with the aim of enabling, in the future, reliable, credible and inexpensive process optimizations based on numerical computation. The optimization of parameters and of kinetic, thermodynamic and equilibrium data is discussed. The model is reliable, rigorous and responds adequately to different boundary conditions; it is not yet fully satisfactory, however, because an adequate representation of the kinetics, i.e. of the mass transport phenomena, is still missing. Once the software is fine-tuned, it will be possible to explore new operating options quickly.
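A minimal method-of-lines sketch of the kind of column dynamics mentioned above (a single-component, isothermal breakthrough with a linear driving force uptake); all parameter values and the upwind discretization are illustrative assumptions, not the Aspen Adsorption model developed in the thesis:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Isothermal fixed-bed adsorption column, discretized in space:
#   gas phase   dc/dt = -v*dc/dz - (1-eps)/eps * dq/dt
#   solid phase dq/dt = k * (K*c - q)      (linear driving force)
n, L, v, eps, k, K = 50, 1.0, 0.1, 0.4, 0.05, 2.0   # illustrative parameters
dz = L / n

def rhs(t, y):
    c, q = y[:n], y[n:]
    dqdt = k * (K * c - q)                      # adsorbed-phase uptake
    c_up = np.concatenate(([1.0], c[:-1]))      # feed concentration at the inlet
    dcdz = (c - c_up) / dz                      # first-order upwind derivative
    dcdt = -v * dcdz - (1 - eps) / eps * dqdt
    return np.concatenate([dcdt, dqdt])

sol = solve_ivp(rhs, (0.0, 200.0), np.zeros(2 * n), method="BDF")
print("outlet concentration at final time:", sol.y[n - 1, -1])
```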

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a kernel density correlation based non-rigid point set matching method and shows its application in statistical model based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated X-ray radiograph. In this method, both the reference point set and the floating point set are first represented using kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field such that the floating point set is moved to the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for a robust point set matching. Incorporating this non-rigid point set matching method into a statistical model based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the X-ray radiograph by an edge detector. Our experiment, conducted on datasets of two patients and six cadavers, demonstrates a mean reconstruction error of 1.9 mm.
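A minimal sketch of a Gaussian kernel correlation measure between two point sets, the kind of quantity that such non-rigid matching methods maximize over a displacement field; the bandwidth `sigma` and the toy example are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def kernel_correlation(ref, flo, sigma=1.0):
    """Gaussian kernel correlation between a reference and a floating point set.
    Larger values mean the two kernel density estimates overlap more."""
    d2 = ((ref[:, None, :] - flo[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum() / (ref.shape[0] * flo.shape[0])

# Toy example: a rigid shift of the floating set lowers the correlation,
# so an optimizer would move the points back towards the reference
# (subject to deformation and smoothness regularization in the full method).
rng = np.random.default_rng(0)
ref = rng.normal(size=(100, 2))
print(kernel_correlation(ref, ref))            # perfectly aligned
print(kernel_correlation(ref, ref + 2.0))      # shifted -> lower correlation
```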

Relevance:

30.00%

Publisher:

Abstract:

Bite mark analysis offers the opportunity to identify the biter based on the individual characteristics of the dentition. Normally, the main focus is on analysing bite mark injuries on human bodies, but bite marks in food may also play an important role in the forensic investigation of a crime. This study presents a comparison of simulated bite marks in different kinds of food with the dentitions of the presumed biters. Bite marks were produced by six adults in slices of buttered bread, apples, different kinds of Swiss chocolate and Swiss cheese. The influence of time lapse on the bite marks in food, under room temperature conditions, was also examined. For the documentation of the bite marks and the dentitions of the biters, 3D optical surface scanning technology was used. The comparison was performed using two different software packages: the ATOS modelling and analysing software and the 3D Studio Max animation software. The ATOS software enables an automatic computation of the deviation between two meshes. In the present study, the bite marks and the dentitions were compared, as were the meshes of each bite mark recorded at the different stages of time lapse. In the 3D Studio Max software, the act of biting was animated to compare the dentitions with the bite mark. The examined food recorded the individual characteristics of the dentitions very well. In all cases, the biter could be identified and the dentitions of the other presumed biters could be excluded. The influence of the time lapse on the food depends on the kind of food and is shown in the diagrams. However, the identification of the biter could still be performed after a period of time, based on the recorded individual characteristics of the dentitions.

Relevance:

30.00%

Publisher:

Abstract:

Besnoitia besnoiti is an apicomplexan parasite responsible for bovine besnoitiosis, a disease with a high prevalence in tropical and subtropical regions that is re-emerging in Europe. Despite the considerable economic losses associated with besnoitiosis, this disease has been underestimated and poorly studied, and neither an effective therapy nor an efficacious vaccine is available. Protein disulfide isomerase (PDI) is an essential enzyme for the acquisition of the correct three-dimensional structure of proteins. Current evidence suggests that in Neospora caninum and Toxoplasma gondii, which are closely related to B. besnoiti, PDI plays an important role in host cell invasion, is a relevant target for the host immune response, and represents a promising drug target and/or vaccine candidate. In this work, we present the nucleotide sequence of the B. besnoiti PDI gene. BbPDI belongs to the thioredoxin-like superfamily (cluster 00388) and is included in the PDI_a family (cluster cd02961) and the PDI_a_PDI_a'_c subfamily (cd02995). A 3D theoretical model was built by comparative homology modelling on the Swiss-Model server, using as a template the crystallographically derived model of Tapasin-ERp57 (PDB code 3F8U, chain C). Analysis of the phylogenetic tree for PDI within the phylum Apicomplexa reinforces the close relationship among B. besnoiti, N. caninum and T. gondii. When subjected to a PDI assay based on the polymerisation of reduced insulin, recombinant BbPDI expressed in E. coli exhibited enzymatic activity, which was inhibited by bacitracin. Antiserum directed against recombinant BbPDI reacted with PDI in Western blots and, by immunofluorescence, with B. besnoiti tachyzoites and bradyzoites.

Relevance:

30.00%

Publisher:

Abstract:

Oncological liver surgery and interventions aim for removal of tumor tissue while preserving a sufficient amount of functional tissue to ensure organ regeneration. This requires detailed understanding of the patient-specific internal organ anatomy (blood vessel system, bile ducts, tumor location). The introduction of computer support into the surgical process enhances anatomical orientation through patient-specific 3D visualization and enables precise reproduction of planned surgical strategies through stereotactic navigation technology. This article provides clinical background information on indications and techniques for the treatment of liver tumors, reviews the technological contributions addressing the problem of organ motion during navigated surgery on a deforming organ, and finally presents an overview of the clinical experience in computer-assisted liver surgery and interventions. The review concludes that several clinically applicable solutions for computer-assisted liver surgery are available and that small-scale clinical trials have been performed. Further developments will be required for more accurate and faster handling of organ deformation, and large clinical studies will be needed to demonstrate the benefits of computer-assisted liver surgery.

Relevance:

30.00%

Publisher:

Abstract:

The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV energies to provide planned dose distributions. Respiratory motion data was obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between predicted and actual positions at each diaphragm position. Distributions of geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with distributions for the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose difference and distance-to-agreement analysis was employed to quantify results. Based on our data, the dosimetric impact of prediction (a) increased with response time, (b) was larger for 3D radiation therapy as compared with 4D radiation therapy, (c) was relatively insensitive to change in beam energy and beam direction, (d) was greater for IMRT distributions as compared with conformal distributions, (e) was smaller than the dosimetric impact of latency, and (f) was greatest for respiration motion with audio instructions, followed by visual feedback and free breathing. Geometric errors of prediction that occur during 4D radiation delivery introduce dosimetric errors that are dependent on several factors, such as response time, treatment-delivery type, and beam energy. Even for relatively small response times of 0.6 s into the future, dosimetric errors due to prediction could approach delivery errors when respiratory motion is not accounted for at all. To reduce the dosimetric impact, better predictive models and/or shorter response times are required.
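A minimal sketch of a linear adaptive filter of the kind used above to predict the tumour position one response time ahead, here in a normalized-LMS form; the filter order, step size and the synthetic breathing trace are illustrative assumptions, not the study's predictor:

```python
import numpy as np

def nlms_predict(signal, lookahead, order=10, mu=0.5, eps=1e-6):
    """Causal normalized-LMS predictor: at step k it forecasts signal[k + lookahead]
    from the last `order` samples; weights are updated with the error of the
    prediction whose target has just become available (no future data is used)."""
    w = np.zeros(order)
    preds = np.full(len(signal), np.nan)
    for k in range(order, len(signal)):
        if k - lookahead >= order:
            # the prediction made `lookahead` steps ago can now be scored against signal[k]
            x_old = signal[k - lookahead - order:k - lookahead][::-1]
            err = signal[k] - w @ x_old
            w += mu * err * x_old / (x_old @ x_old + eps)   # normalized LMS update
        x_now = signal[k - order:k][::-1]                    # most recent samples first
        if k + lookahead < len(signal):
            preds[k + lookahead] = w @ x_now                 # position used to steer the MLC
    valid = ~np.isnan(preds)
    return preds, np.abs(preds[valid] - signal[valid])       # geometric error of prediction

# Synthetic ~4 s-period breathing trace sampled at 30 Hz, predicted 0.6 s (18 samples) ahead.
t = np.arange(0.0, 120.0, 1.0 / 30.0)
trace = 10.0 * np.sin(2.0 * np.pi * t / 4.0) + 0.3 * np.random.default_rng(0).normal(size=t.size)
_, geom_err = nlms_predict(trace, lookahead=18)
print("mean absolute geometric error [mm]:", round(geom_err.mean(), 2))
```

The distribution of such geometric errors is what the study convolves with the planned dose distributions to estimate the dosimetric impact of prediction.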