Abstract:
Due to the multiple immune evasion mechanisms of cancer cells, novel therapeutic approaches are required to overcome the limitations of existing immunotherapies. Bispecific antibodies are potent anti-cancer drugs that redirect effector T cells for specific tumor cell lysis, thus enabling the patient's immune system to fight cancer cells. The antibody format used in this proof-of-concept study, bispecific ideal monoclonal antibodies termed BiMAB, is a tailor-made recombinant protein consisting of two fused scFv antibodies recognizing different antigens. Both are arranged in tandem on a single peptide chain, and the individual variable binding domains are separated by special non-immunogenic linkers. The format comprises a scFv targeting CLDN18.2, a gastric cancer tumor-associated antigen (TAA), while the second specificity binds the CD3 epsilon (CD3ε) subunit of the T cell receptor (TCR) on T cells. For the first time, we compared, in our IMAB362-based BiMAB setting, four different anti-CD3 scFvs, derived from the mAbs TR66 and CLB-T3 as well as the humanized and murine variants of UCHT1. In addition, we investigated the impact of an N- versus a C-terminal location of the IMAB362-derived scFv and the anti-CD3 scFvs. Thus, nine CLDN18.2-specific BiMAB proteins were generated, all of which showed remarkably high cytotoxicity towards CLDN18.2-positive tumor cells. Because of its promising effectiveness, 1BiMAB emerged as the BiMAB prototype. The selectivity of 1BiMAB for its TAA and CD3ε, with affinities in the nanomolar range, was confirmed by in vitro assays. Its dual binding depends on the design of an N-terminally positioned IMAB362 scFv and the consecutive C-terminally positioned TR66 scFv. 1BiMAB provoked concentration- and target-cell-dependent T cell activation, proliferation, and upregulation of the cytolytic protein granzyme B, as well as the consequent elimination of target cells. Our results demonstrate that 1BiMAB is able to activate T cells independently of elements usually involved in the T cell recognition program, such as antigen presentation, MHC restriction, and co-stimulatory effector molecules. In the first in vivo studies, using a subcutaneous xenogeneic tumor mouse model in immune-incompetent NSG mice, we could demonstrate a significant therapeutic effect of 1BiMAB, with partial or complete tumor elimination. The initial in vitro RIBOMAB experiments correspondingly showed encouraging results. The electroporation of 1BiMAB IVT-RNA into target or effector cells was feasible, and the functionality of the translated 1BiMAB was proven by induced T cell activation and target cell lysis. Accordingly, we could show that the in vitro RIBOMAB approach was applicable to all nine BiMABs, which proves the RIBOMAB concept. Thus, the CLDN18.2-BiMAB strategy offers great potential for the treatment of cancer. In the future, administered either as protein or as IVT-RNA, the BiMAB format will contribute towards finding solutions to raise and sustain tumor-specific cellular responses elicited by engaged and activated endogenous T cells. This will potentially enable us to overcome the immune evasion mechanisms of tumor cells, consequently supporting current solid gastric cancer therapies.
Abstract:
Rhogocytes, also termed 'pore cells', exist free in the hemolymph or embedded in the connective tissue of different body parts of molluscs, notably gastropods. These unique cells can be round, elongated or irregularly shaped, and up to 30 μm in diameter. Their hallmark is the so-called slit apparatus: pocket-like invaginations of the plasma membrane creating extracellular lacunae, bridged by cytoplasmic bars. These bars form distinctive slits of ca. 20 nm width. A slit diaphragm composed of proteins establishes a molecular sieve with holes of 20 × 20 nm. Different functions have been assigned to this special molluscan cell type, notably biosynthesis of the hemolymph respiratory protein hemocyanin. It has further been proposed, but not proven, that in red-blooded snail species rhogocytes might synthesize the hemoglobin. However, the secretion pathway of these hemolymph proteins and the functional role of the enigmatic slit apparatus remained unclear. Additionally proposed functions of rhogocytes, such as heavy metal detoxification or hemolymph protein degradation, are also not well studied. This work provides more detailed electron microscopical, histological and immunobiochemical information on the structure and function of rhogocytes of the freshwater snails Biomphalaria glabrata and Lymnaea stagnalis. By in situ hybridization on mantle tissues, it proves that B. glabrata rhogocytes synthesize hemoglobin and L. stagnalis rhogocytes synthesize hemocyanin. Hemocyanin is present in endoplasmic reticulum lacunae and in vesicles, as individual molecules or pseudo-crystalline arrays. The first 3D reconstructions of rhogocytes, obtained by electron tomography, are provided and show unprecedented details of the slit apparatus. A highly dense material in the cytoplasmic bars close to the diaphragmatic slits was shown, by immunogold labeling, to contain actin. By immunofluorescence microscopy, the protein nephrin was localized at the periphery of rhogocytes. The presence of both proteins in the slit apparatus supports the earlier hypothesis, hitherto based solely on ultrastructural similarities, that molluscan rhogocytes are phylogenetically related to mammalian podocytes and insect nephrocytes. A possible secretion pathway of respiratory proteins that includes a transfer mechanism of vesicles through the diaphragmatic slits is proposed and discussed. We also studied, by electron microscopy, the reaction of rhogocytes in situ to two forms of animal stress: deprivation of food and cadmium contamination of the tank water. Significant cellular reactions to both stressors were observed and documented. Notably, the slit apparatus surface and the number of electron-dense cytoplasmic vesicles increased in response to cadmium stress, whereas food deprivation led to an increase in hemocyanin production. These observations are also discussed in the framework of using such animals as potential environmental biomarkers.
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. For specific components, engineers determine whether, at rest as well as during a motion, they maintain a prescribed safety distance to the surrounding components. If components fall below the safety distance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g., triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call these the tolerance-violating primitives. We present a comprehensive solution that can be divided into the following three major topics.

In the first part of this work we study algorithms that check, for two triangles, whether they are tolerance-violating. We present various approaches for triangle-triangle tolerance tests and show that specialized tolerance tests perform considerably better than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach consistently proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is essential to take the required safety distance into account in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Furthermore, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions make it possible to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before. We call this data structure Shrubs. Previous approaches to the memory optimization of uniform grids mainly rely on hashing methods, but these do not reduce the memory consumption of the cell contents. In our use case, neighboring cells often have similar contents. Our approach is able to losslessly compress the memory footprint of the cell contents of a uniform grid, exploiting the redundant cell contents, to one fifth of its original size, and to decompress it at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we show applications to various path-planning problems.
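The broad-phase idea described in the second part, taking the safety distance into account in the grid queries so that only nearby primitive pairs are ever tested exactly, can be illustrated with a short sketch. This is a minimal illustration of the general technique, not the thesis's combined data structure; the cell size, the triangle representation as vertex triples, and all function names are assumptions.

```python
# Minimal sketch (not the thesis's implementation): a uniform-grid broad
# phase that collects candidate primitive pairs whose axis-aligned bounding
# boxes come closer than the safety distance d. Only these candidates would
# then be passed to an exact triangle-triangle tolerance test.
from collections import defaultdict
from itertools import product

def aabb(tri):
    xs, ys, zs = zip(*tri)  # tri: three (x, y, z) vertices
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def grid_cells(box, cell, d):
    # All integer cell coordinates overlapped by the box inflated by d.
    (x0, y0, z0), (x1, y1, z1) = box
    lo = [int((v - d) // cell) for v in (x0, y0, z0)]
    hi = [int((v + d) // cell) for v in (x1, y1, z1)]
    return product(*(range(l, h + 1) for l, h in zip(lo, hi)))

def candidate_pairs(tris_a, tris_b, cell, d):
    grid = defaultdict(list)
    for i, tri in enumerate(tris_a):          # rasterize object A
        for c in grid_cells(aabb(tri), cell, 0.0):
            grid[c].append(i)
    pairs = set()
    for j, tri in enumerate(tris_b):          # query object B, inflated by d
        for c in grid_cells(aabb(tri), cell, d):
            for i in grid[c]:
                pairs.add((i, j))
    return pairs
```

Only the pairs returned here would be handed to an exact tolerance test, such as the dual-space triangle-triangle test developed in the first part.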
Abstract:
Seals are amphibious marine mammals: they inhabit two different habitats, water and land, and their sensory systems must be tuned to both media. For vision in particular, adapting to two optically different media is a major challenge, which is why researchers have been so interested in the vision of marine mammals since the twentieth century.
To this day it is controversial whether marine mammals can see color, since a gene defect has left them with only one cone type, making them cone monochromats. Training experiments showed, however, that fur seals and sea lions are able to distinguish green and blue test fields from shades of gray (Busch & Dücker, 1987; Griebel & Schmid, 1992).
To rule out that the animals feign color vision by discriminating brightness, the present work first examined contrast detection and then carried out tests of color vision. The experimental animals were two harbor seals (Phoca vitulina) and two South African fur seals (Arctocephalus pusillus). All experiments were conducted under the open sky at Zoo Frankfurt. The animals were always offered a choice of three test fields: two were identical and showed a homogeneous background; the third showed a triangle on the same background. The animals were trained to choose the triangle. In the experiments on brightness contrast, gray triangles on a gray background were used. The triangle was not detected at luminance contrasts (K = (L_D - L_H)/(L_D + L_H), with L_D the luminance of the triangle and L_H that of the background) between 0.03 and -0.12.
In the tests of color vision, the colors blue, green, yellow and orange on a gray background were used. The test series showed that each animal achieved high choice frequencies for the colored triangle even in ranges of low brightness contrast and thus could clearly see the colors blue, green and yellow. Only for orange can no statement about color vision be made, since the colored triangle was always darker than the background.
In summary, this work showed that harbor seals and fur seals are able to see color, an ability that presumably rests on the interaction of rods and cones.
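A minimal sketch of the luminance-contrast measure used above (reconstructed here as a Michelson-type contrast, since the reported values can be negative); the example luminances are invented for illustration:

```python
# Luminance contrast as used in the seal experiments:
# K = (L_D - L_H) / (L_D + L_H), where L_D is the triangle's luminance
# and L_H the background's. Negative K means the triangle is darker than
# the background. Inputs are assumed to be linear luminances (cd/m^2).
def luminance_contrast(l_triangle: float, l_background: float) -> float:
    return (l_triangle - l_background) / (l_triangle + l_background)

# Example: a triangle slightly darker than its background falls inside
# the undetected band reported in the study (-0.12 <= K <= 0.03).
print(luminance_contrast(44.0, 56.0))  # -> -0.12
```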
Abstract:
Within the ELSA project of the Institute of Geosciences of Johannes Gutenberg University Mainz, the cores drilled in the Oberwinkler Maar (OW1) and the Jungferweiher Maar (JW3) were examined for the occurrence of the periglacial cover-bed system and its anthropogenic overprinting. Until now there had been no attempt to look for, let alone study, these forms of soil formation and development in a dry maar.
The results show that cover beds can also form in the volcanic relief of a dry maar, and that the overprinting appears differently in the soil profile depending on crater shape and anthropogenic land use (thickness of colluvial/alluvial layers or number of charcoal finds).
To study the cover beds and their anthropogenic overprinting, soil-science analyses as well as literature and map evaluations were carried out. As a new method for identifying the different soil horizons, microscopic analysis was introduced. Its main purpose was to detect the minerals of the Laacher See Tephra (LST), so that soil formation and development could not only be placed in time, but the various material inputs at the profile site (including cover-bed material) could also be distinguished.
As its fundamental result, the present work provides evidence that periglacial cover beds and their anthropogenic overprinting form not only in the typical zones of the German Central Uplands, but also in the volcanic relief of a dry maar. In addition, instead of the many outcrops typically required for a site (a catena), a single drill core sufficed in each case to reach these precise insights, owing largely to the microscopic analysis.
Abstract:
Cabaret is deeply rooted in Austrian culture, particularly in Vienna, where the genre is nowadays enjoying a new heyday thanks to the efforts of many cabaret artists such as Michael Niavarani, Roland Düringer, Alfred Dorfer and Andreas Vitásek. The starting point for this work is the show “Sekundenschlaf” by the Viennese cabaret artist Andreas Vitásek. The core of the show is time, a dimension that is not fixed: it can fly by just as it can stretch almost endlessly. Vitásek also addresses many current issues, such as politics and the economic crisis, but the focus of the show is always the author's personal experience. With this work I wanted to identify the difficulties of a potential translation of the show, in order to find out whether such a translation might be possible and effective. I chose the examples that were most significant from a thematic and linguistic point of view, transcribed them directly from the DVD and analyzed them in detail. The translation of cabaret proves to be particularly difficult, as it is essential to convey the humorous elements to the target audience. Although humor belongs to all human beings, it is extremely specific to each culture and language. It is therefore the translator's job to build a bridge between the source and the target culture. This work is divided into two major parts, one dedicated to cabaret as an artistic genre, and the other dedicated specifically to the show “Sekundenschlaf”. Through the analysis of the transcribed examples I identified first the linguistic and then the thematic difficulties, pointing out which cultural elements are specific to Austrian culture and which can be understood (almost) everywhere.
Abstract:
This study deals with the determination of the retentive force between primary and secondary telescopic crowns under clinical conditions. Forty-three combined fixed-removable prostheses with a total of 140 double crowns were used for retention force measurement of the telescopic crowns prior to cementation. The crowns had a preparation taper of 1-2°. A specifically designed measuring device was used. The retentive forces were measured with and without lubrication by a saliva substitute. The measured values were analyzed according to the type of tooth (incisors, canines, premolars, and molars). Additionally, a comparison between lubricated and unlubricated telescopic crowns was performed. The maximum retention force, 29.98 N, was recorded with a telescopic crown on a molar, while the minimum, 0.08 N, was found with a specimen on a canine. The median retention force of all telescopic crowns was 1.93 N, with an interquartile range of 4.35 N. No statistically significant difference between lubricated and unlubricated specimens was found. The results indicate that retention force values of telescopic crowns, measured in clinical practice, are often much lower than those cited in the literature. The measurements also show a wide range. Whether this poses a problem for the patient's quality of life can, however, only be established by comparing the presented results with a follow-up study involving measurement of intraoral retention and assessment by, e.g., the Oral Health Impact Profile.
Abstract:
This paper examines the accuracy of software-based on-line energy estimation techniques. It evaluates today's most widespread energy estimation model in order to investigate whether the current methodology of purely software-based energy estimation running on a sensor node itself can indeed reliably and accurately determine the node's energy consumption, independent of the particular node instance, the traffic load the node is exposed to, or the MAC protocol the node is running. The paper enhances today's widely used energy estimation model by integrating radio transceiver switches into the model, and proposes a methodology for finding the optimal estimation model parameters. It proves by statistical validation with experimental data that the proposed model enhancement and parameter calibration methodology significantly increase the estimation accuracy.
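The model class discussed here can be sketched as follows: a minimal, hypothetical linear energy model with per-state currents plus a per-event cost for radio transceiver switches, calibrated against measured energies by least squares. The state names, supply voltage, and data layout below are illustrative assumptions, not the paper's actual model or values.

```python
# Hedged sketch of a software-based on-line energy model: accumulated time
# in each node state times a per-state current, extended by a per-event
# cost for radio transceiver switches.
import numpy as np

STATES = ["cpu_active", "cpu_sleep", "radio_rx", "radio_tx"]  # assumed

def estimate_energy(t_in_state, n_switches, currents, e_switch, v=3.0):
    """Energy in joules: V * sum_i(I_i * t_i) + n_switches * E_switch."""
    charge = sum(currents[s] * t_in_state[s] for s in STATES)
    return v * charge + n_switches * e_switch

def calibrate(runs, measured_energy, v=3.0):
    """Least-squares fit of per-state currents and the per-switch energy.

    runs: list of dicts {"t": {state: seconds}, "switches": int};
    measured_energy: ground-truth energy per run in joules.
    """
    A = np.array([[v * r["t"][s] for s in STATES] + [r["switches"]]
                  for r in runs])
    x, *_ = np.linalg.lstsq(A, np.array(measured_energy), rcond=None)
    currents = dict(zip(STATES, x[:-1]))
    return currents, x[-1]  # per-state currents [A], energy per switch [J]
```

The fitted parameters would then be burned into the node so that the on-line estimator only needs to accumulate state times and switch counts.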
Abstract:
The early detection of subjects with probable Alzheimer's disease (AD) is crucial for the effective application of treatment strategies. Here we explored the ability of a multitude of linear and non-linear classification algorithms to discriminate between the electroencephalograms (EEGs) of patients with varying degrees of AD and their age-matched control subjects. Absolute and relative spectral power, distribution of spectral power, and measures of spatial synchronization were calculated from recordings of resting eyes-closed continuous EEGs of 45 healthy controls, 116 patients with mild AD and 81 patients with moderate AD, recruited in two different centers (Stockholm, New York). The applied classification algorithms were: principal component linear discriminant analysis (PC LDA), partial least squares LDA (PLS LDA), principal component logistic regression (PC LR), partial least squares logistic regression (PLS LR), bagging, random forest, support vector machines (SVM) and feed-forward neural network. Based on 10-fold cross-validation runs it could be demonstrated that even though modern computer-intensive classification algorithms such as random forests, SVM and neural networks show a slight superiority, more classical classification algorithms performed nearly equally well. Using random forest classification, a considerable sensitivity of up to 85% and a specificity of 78% were reached even for the test of only mild AD patients, whereas for the comparison of moderate AD vs. controls, using SVM and neural networks, values of 89% and 88% for sensitivity and specificity were achieved. Such a remarkable performance proves the value of these classification algorithms for clinical diagnostics.
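As an illustration of the evaluation scheme described above, here is a minimal sketch, not the study's code, of 10-fold cross-validated random forest classification with sensitivity and specificity computed from pooled out-of-fold predictions; the feature matrix X (one row of EEG features per subject) and labels y (1 = AD, 0 = control) are assumed inputs.

```python
# Hedged sketch: 10-fold CV of a random forest on tabular EEG features,
# reporting sensitivity and specificity as in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict

def sens_spec(X: np.ndarray, y: np.ndarray, seed: int = 0):
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    clf = RandomForestClassifier(n_estimators=500, random_state=seed)
    y_hat = cross_val_predict(clf, X, y, cv=cv)  # out-of-fold predictions
    sensitivity = np.sum((y_hat == 1) & (y == 1)) / np.sum(y == 1)
    specificity = np.sum((y_hat == 0) & (y == 0)) / np.sum(y == 0)
    return sensitivity, specificity
```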
Abstract:
Uncontrollable intracranial pressure elevation in hyperacute liver failure often proves fatal if no suitable liver for transplantation is found in due time. Both ABO-incompatible and auxiliary partial orthotopic liver transplantation have been described to control such a scenario. However, each method is associated with downsides in terms of immunobiology, organ availability and effects on the overall waiting list.
Abstract:
In 1969, Lovász asked whether every connected, vertex-transitive graph has a Hamilton path. This question has generated a considerable amount of interest, yet remains wide open. To date, no connected, vertex-transitive graph is known that does not possess a Hamilton path. For Cayley graphs, a subclass of vertex-transitive graphs, the following conjecture was made: Weak Lovász Conjecture: Every nontrivial, finite, connected Cayley graph is hamiltonian. The Chen-Quimpo Theorem proves that Cayley graphs on abelian groups flourish with Hamilton cycles, thus prompting Alspach to make the following conjecture: Alspach Conjecture: Every 2k-regular, connected Cayley graph on a finite abelian group has a Hamilton decomposition. Alspach's conjecture is true for k = 1 and 2, but even the case k = 3 is still open. It is this case that this thesis addresses. Chapters 1 and 2 give introductory material and past work on the conjecture. Chapter 3 investigates the relationship between 6-regular Cayley graphs and associated quotient graphs, and gives a proof of Alspach's conjecture for the odd order case when k = 3. Chapter 4 provides a proof of the conjecture for even order graphs with 3-element connection sets that have an element generating a subgroup of index 2, and having a linear dependency among the other generators. Chapter 5 shows that if Γ = Cay(A, {s1, s2, s3}) is a connected, 6-regular, abelian Cayley graph of even order, and for some 1 ≤ i ≤ 3, Δi = Cay(A/(si), {sj1, sj2}) is 4-regular, and Δi ≇ Cay(ℤ3, {1, 1}), then Γ has a Hamilton decomposition. Alternatively stated, if Γ = Cay(A, S) is a connected, 6-regular, abelian Cayley graph of even order, then Γ has a Hamilton decomposition if S has no involutions, and for some s ∈ S, Cay(A/(s), S) is 4-regular and of order at least 4. Finally, the Appendices give computational data resulting from C and MAGMA programs used to generate Hamilton decompositions of certain non-isomorphic Cayley graphs on low-order abelian groups.
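For readers unfamiliar with the objects involved, here is a small, purely illustrative sketch of constructing the undirected Cayley graph Cay(A, S) of a finite abelian group and checking the 6-regularity that is the thesis's setting; the example group and connection set are arbitrary choices.

```python
# Illustrative sketch: Cay(A, S) for A = Z_{n1} x ... x Z_{nk}.
# With a connection set of three non-involutions closed under inverses,
# the resulting graph is 6-regular.
from itertools import product

def cayley_graph(moduli, gens):
    elems = list(product(*(range(n) for n in moduli)))
    def step(a, s, sign):
        return tuple((x + sign * y) % n for x, y, n in zip(a, s, moduli))
    adj = {a: set() for a in elems}
    for a in elems:
        for s in gens:
            adj[a].add(step(a, s, +1))  # edge a -- a + s
            adj[a].add(step(a, s, -1))  # edge a -- a - s (inverse generator)
    return adj

# Example: Cay(Z_4 x Z_5, {(1,0), (0,1), (1,1)}) is connected and 6-regular.
adj = cayley_graph((4, 5), [(1, 0), (0, 1), (1, 1)])
print({len(neighbors) for neighbors in adj.values()})  # -> {6}
```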
Abstract:
Four papers, written in collaboration with the author’s graduate school advisor, are presented. In the first paper, uniform and non-uniform Berry-Esseen (BE) bounds on the convergence to normality of a general class of nonlinear statistics are provided; novel applications to specific statistics, including the non-central Student’s, Pearson’s, and the non-central Hotelling’s, are also stated. In the second paper, a BE bound on the rate of convergence of the F-statistic used in testing hypotheses from a general linear model is given. The third paper considers the asymptotic relative efficiency (ARE) between the Pearson, Spearman, and Kendall correlation statistics; conditions sufficient to ensure that the Spearman and Kendall statistics are equally (asymptotically) efficient are provided, and several models are considered which illustrate the use of such conditions. Lastly, the fourth paper proves that, in the bivariate normal model, the ARE between any of these correlation statistics possesses certain monotonicity properties; quadratic lower and upper bounds on the ARE are stated as direct applications of such monotonicity patterns.
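The bivariate-normal ARE results of the fourth paper can be illustrated numerically. The following Monte Carlo sketch, an illustration rather than the paper's method, compares the Pearson estimator of ρ with the Spearman-based estimator 2 sin(πr_S/6), which is consistent for ρ under normality; the sample size and replication count are arbitrary choices.

```python
# Hedged Monte Carlo sketch: ratio of sampling variances of the Pearson
# and Spearman-based estimators of rho in a bivariate normal model,
# approximating their asymptotic relative efficiency (ARE).
import numpy as np
from scipy.stats import spearmanr

def are_spearman_vs_pearson(rho, n=500, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    pearson, spearman = [], []
    for _ in range(reps):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        pearson.append(np.corrcoef(x, y)[0, 1])
        r_s, _ = spearmanr(x, y)
        spearman.append(2 * np.sin(np.pi * r_s / 6))  # back-transform to rho
    return np.var(pearson) / np.var(spearman)

# At rho = 0 the classical value 9/pi^2 ~ 0.912 should be approximated.
print(are_spearman_vs_pearson(0.0))
```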
Abstract:
Nanoparticles are fascinating because their physical and optical properties depend on size. Highly controllable synthesis methods and nanoparticle assembly are essential [6] for highly innovative technological applications. Among nanoparticles, nonhomogeneous core-shell nanoparticles (CSnp) exhibit new properties that arise when the relative dimensions of the core and the shell are varied. This CSnp structure enables various optical resonances and engineered energy barriers, in addition to a high charge-to-surface ratio. Assembly of homogeneous nanoparticles into functional structures has become ubiquitous in biosensors (i.e., optical labeling) [7, 8], nanocoatings [9-13], and electrical circuits [14, 15]. Nonhomogeneous nanoparticle assembly, by contrast, has been explored only to a limited extent. Many conventional nanoparticle assembly methods exist, but this work explores dielectrophoresis (DEP) as a new method. DEP is the polarization of particles suspended in conductive fluids by non-uniform electric fields. Most prior DEP efforts involve microscale particles. Prior work on core-shell nanoparticle assemblies and, separately, on nanoparticle characterization with dielectrophoresis and electrorotation [2-5], did not systematically explore particle size, dielectric properties (permittivity and electrical conductivity), shell thickness, particle concentration, medium conductivity, and frequency. This work is the first, to the best of our knowledge, to systematically examine these dielectrophoretic properties for core-shell nanoparticles. Further, we conduct a parametric fitting to traditional core-shell models. These biocompatible core-shell nanoparticles were studied to fill a knowledge gap in the DEP field. Experimental results (chapter 5) first examine the medium conductivity, size and shell material dependencies of the dielectrophoretic assembly of spherical CSnp into 2D and 3D particle assemblies. Chitosan (amino sugar) and poly-L-lysine (amino acid, PLL) CSnp shell materials were custom synthesized around a hollow (gas) core by utilizing a phospholipid micelle around a volatile fluid as a template for the shell material; this approach proves to be novel and distinct from conventional core-shell models wherein a conductive core is coated with an insulative shell. Experiments were conducted within a 100 nl chamber housing 100 μm wide Ti/Au quadrupole electrodes spaced 25 μm apart. Frequencies from 100 kHz to 80 MHz at a fixed local field of 5 Vpp were tested with 10⁻⁵ and 10⁻³ S/m medium conductivities for 25 seconds. Dielectrophoretic responses of ~220 and ~340 (or ~400) nm chitosan or PLL CSnp were compiled as a function of medium conductivity, size and shell material.
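The "traditional core-shell models" referred to above are typically variants of the single-shell Clausius-Mossotti model, in which the sign of Re[K(ω)] predicts positive versus negative DEP. A hedged sketch of that model follows; the material parameters and radii are illustrative assumptions (a gas core modeled as low-permittivity and non-conducting), not fitted values from this work.

```python
# Hedged sketch of the classical single-shell core-shell model for DEP:
# effective particle permittivity from the core/shell interface, then the
# Clausius-Mossotti factor against the suspending medium.
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def eps_complex(eps_r, sigma, w):
    # Complex permittivity eps* = eps - j*sigma/omega.
    return eps_r * EPS0 - 1j * sigma / w

def clausius_mossotti(w, r_out, r_in, core, shell, medium):
    """core/shell/medium: (relative permittivity, conductivity [S/m])."""
    e_c, e_s, e_m = (eps_complex(*p, w) for p in (core, shell, medium))
    gamma3 = (r_out / r_in) ** 3
    k_cs = (e_c - e_s) / (e_c + 2 * e_s)                  # inner interface
    e_eff = e_s * (gamma3 + 2 * k_cs) / (gamma3 - k_cs)   # shelled sphere
    return (e_eff - e_m) / (e_eff + 2 * e_m)              # particle vs. medium

# Example sweep: ~220 nm particle with a 20 nm polymer shell around a gas
# core, in a 10^-5 S/m medium, from 100 kHz to 80 MHz as in the experiments.
freqs = np.logspace(5, np.log10(8e7), 200)
re_k = [clausius_mossotti(2 * np.pi * f, 110e-9, 90e-9,
                          core=(1.0, 0.0), shell=(10.0, 1e-3),
                          medium=(78.0, 1e-5)).real for f in freqs]
```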
Abstract:
PURPOSE: The aim of this study was to analyze prosthetic maintenance in partially edentulous patients with removable prostheses supported by teeth and strategic implants. MATERIALS AND METHODS: Sixty patients with removable partial prostheses and combined tooth-implant support were identified within the period from 1998 to 2006. One group consisted of 42 patients (planned group) with a reduced residual dentition who were in need of removable partial dentures (RPDs) or overdentures in the maxilla and/or mandible. They were admitted consecutively for treatment. Due to missing teeth in strategically important positions, one or two implants were placed to improve symmetrical denture support and retention. The majority of residual teeth exhibited impaired structural integrity and were therefore provided with root copings for denture retention. A few vital teeth were used for telescopic crowns. The anchorage system for the strategic implants was selected accordingly. A second group of 18 patients (repair group) wearing RPDs who had lost one abutment tooth due to biologic or mechanical failure was identified. These abutment teeth were replaced by 21 implants, and the patients continued to wear their original prostheses. The observation time for the planned and repair groups was 12 months to 8 years. All patients followed a regular maintenance schedule. Technical or biologic complications with supporting teeth or implants, as well as prosthetic service, were registered regularly. RESULTS: Three maxillary implants were lost after loading, and three roots with copings had to be removed. Biologic problems included caries and periodontal/peri-implant infection, with a significantly higher incidence in the repair group (P < .05). Technical complications with the dentures were rather frequent in both groups, mostly related to the anchorage system (matrices) of root copings and implants. Maintenance and complications were observed more frequently in the first year after delivery of the denture than in the following 3 years (P < .05). No denture had to be remade. CONCLUSIONS: The placement of a few implants allows a compromised residual dentition to be maintained for the support of RPDs. The combination of root and implant support facilitates treatment planning and enhances the design of the removable denture. It also proves to be a practical rescue method. Technical problems with the anchorage system were frequent, particularly in the first year after delivery of the dentures.
Abstract:
PURPOSE: Resonance frequency analysis (RFA) offers the opportunity to monitor the osseointegration of an implant in a simple, noninvasive way. A better comprehension of the relationship between RFA and parameters related to bone quality would therefore help clinicians improve diagnoses. In this study, a bone analog made from polyurethane foam was used to isolate the influences of bone density and cortical thickness in RFA. MATERIALS AND METHODS: Straumann standard implants were inserted in polyurethane foam blocks, and primary implant stability was measured with RFA. The blocks were composed of two superimposed layers with different densities. The top layer was dense to mimic cortical bone, whereas the bottom layer had a lower density to represent trabecular bone. Different densities for both layers and different thicknesses for the simulated cortical layer were tested, resulting in eight different block combinations. RFA was compared with two other mechanical evaluations of primary stability: removal torque and axial loading response. RESULTS: The primary stability measured with RFA did not correlate with the two other methods, but there was a significant correlation between removal torque and the axial loading response (P < .005). Statistical analysis revealed that each method was sensitive to different aspects of bone quality. RFA was the only method able to detect changes in both bone density and cortical thickness. However, changes in trabecular bone density were easier to distinguish with removal torque and axial loading than with RFA. CONCLUSIONS: This study shows that RFA, removal torque, and axial loading are sensitive to different aspects of the bone-implant interface. This explains the absence of correlation among the methods and proves that no standard procedure exists for the evaluation of primary stability.