891 results for Site-specific application
Abstract:
Intense competition in the citrus industry demands new management technologies from growers for greater efficiency and sustainability. In this context, precision agriculture (PA) has developed techniques based on yield mapping and management systems that recognize field spatial variability, which contribute to increasing the profitability of commercial crops. Because spatial variability often goes unnoticed, orange orchards are still managed as uniform, and adoption of PA technology on citrus farms is low. The objective of the present study was therefore to characterize the spatial variability of three factors in a commercial Valencia orchard in Brotas, São Paulo State, Brazil: fruit yield, soil fertility, and the occurrence of plant gaps caused by either citrus blight or huanglongbing (HLB). The volume, geographic coordinates, and representative area of the bags used at harvest were recorded to generate yield points, which were then interpolated to produce the yield map. Soil chemical characteristics were studied by analyzing samples collected along planting rows and inter-rows at 24 points distributed across the field. A map of tree-gap density was produced by georeferencing individual gaps and then counting the number of gaps within 500 m² cells. Data were subjected to statistical and geostatistical analyses, and a t-test was used to compare means of soil chemical characteristics between sampling regions. The maps revealed high variation in yield and in tree-gap density, and regions of high plant absence were shown to overlap with regions of low fruit yield. Soil fertility varied depending on the sampling region in the orchard. The spatial variability found in yield, soil fertility, and disease occurrence demonstrates the importance of adopting site-specific nutrient management and disease control as tools to guarantee efficient fruit production.
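The interpolation of georeferenced harvest points into a yield map lends itself to a simple illustration. The sketch below uses inverse-distance weighting, a common baseline interpolator; the coordinates and yield values are hypothetical, and the study itself used geostatistical interpolation, so this is only illustrative:

```python
def idw(points, x, y, power=2.0):
    """Inverse-distance-weighted yield estimate at (x, y).

    points: list of (px, py, value) harvest records (hypothetical units,
    e.g. t/ha).  A simple stand-in for the geostatistical interpolation
    used to build the yield map in the study.
    """
    num = den = 0.0
    for px, py, v in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0:
            return v  # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# Four hypothetical yield points (x, y, t/ha); estimate at the centre.
samples = [(0, 0, 30.0), (10, 0, 34.0), (0, 10, 28.0), (10, 10, 32.0)]
print(round(idw(samples, 5, 5), 2))  # symmetric layout -> mean of values
```

At the centre of a symmetric layout all weights are equal, so the estimate collapses to the arithmetic mean, which is a useful sanity check for any interpolator.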
Abstract:
The object of the present study is the process of gas transport in nano-sized materials, i.e. systems having structural elements on the order of nanometers. The aim of this work is to advance the understanding of the gas transport mechanism in such materials, for which traditional models are often unsuitable, by providing a correct interpretation of the relationship between diffusive phenomena and structural features. This would enable the development of new materials with permeation properties tailored to the specific application, especially in packaging systems. The methods used to achieve this goal were a detailed experimental characterization and several simulation approaches. The experimental campaign involved determining oxygen permeability and diffusivity in different sets of organic-inorganic hybrid coatings prepared via the sol-gel technique. The polymeric samples coated with these hybrid layers showed a remarkable enhancement of barrier properties, which was explained by the strong interconnection at the nano-scale between the organic moiety and the silica domains. An analogous characterization was performed on microfibrillated cellulose films, which presented a remarkable barrier effect toward oxygen when dry, while in the presence of water the performance drops significantly. The very low value of water diffusivity at low activities is another interesting characteristic related to their structural properties. Two different simulation approaches were then considered: the diffusion of oxygen through polymer-layered silicates was modeled on a continuum scale with CFD software, while the properties of n-alkanethiolate self-assembled monolayers on gold were analyzed from a molecular point of view by means of a molecular dynamics algorithm.
Modeling transport properties in layered nanocomposites, resulting from the ordered dispersion of impermeable flakes in a 2-D matrix, allowed the enhancement of the barrier effect to be calculated as a function of the platelets' structural parameters, leading to the derivation of a new expression. On this basis, randomly distributed systems were simulated and the results analyzed to evaluate the different contributions to the overall effect. The study of more realistic three-dimensional geometries revealed a perfect correspondence with the 2-D approximation. A completely different approach was applied to simulate the effect of temperature on oxygen transport through self-assembled monolayers: the structural information obtained from equilibrium MD simulations showed that raising the temperature makes the monolayer less ordered and consequently less crystalline. This disorder decreases the free-energy barrier and lowers the overall resistance to oxygen diffusion, making the monolayer more permeable to small molecules.
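For context, the classical Nielsen tortuosity model relates the barrier improvement of aligned impermeable platelets to their volume fraction and aspect ratio; the thesis derives its own expression, which is not reproduced here. A minimal sketch of the classical model, shown only to illustrate how the two structural parameters couple:

```python
def nielsen_relative_permeability(phi, alpha):
    """Classical Nielsen estimate of barrier improvement by aligned flakes.

    phi: volume fraction of impermeable platelets; alpha: aspect ratio
    (width / thickness).  P/P0 = (1 - phi) / (1 + alpha * phi / 2).
    This is NOT the new expression derived in the thesis, only the
    well-known baseline it improves upon.
    """
    return (1.0 - phi) / (1.0 + alpha * phi / 2.0)

# 5 vol% of flakes with aspect ratio 100 cut permeability by roughly 3.7x.
print(round(1.0 / nielsen_relative_permeability(0.05, 100), 2))
```

The model captures the key qualitative point of the abstract: high-aspect-ratio platelets lengthen the diffusion path, so even small filler fractions produce large barrier enhancements.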
Abstract:
Summary: Fluorescent dyes can be used to visualize structures that are difficult or impossible to display by conventional means. Especially in combination with confocal laser scanning microscopy, new avenues open up for the specific detection of a wide variety of components of biological samples and, where applicable, their three-dimensional rendering. The protein fraction of dental hard tissue can be visualized using chemically couplable fluorochromes. To show that this labeling is not due to unspecific adsorption of the dye, the protein component of the tooth samples was removed by enzymatic digestion as a control. Specimens treated in this way showed very little stainability. Building on this, the enzymatic method served as a negative control for the detection of odontoblast processes in dentin and at the enamel-dentin junction. This made it possible to differentiate between pure reflection images of the dentinal tubules and the cell processes whose membranes were specifically labeled with lipophilic fluorescent dyes. In a further approach it was shown that reduced, and therefore non-fluorescent, fluorescein derivatives are suitable for detecting the penetration of oxidizing agents (here H2O2) into the tooth. Oxidation of these compounds generates fluorescent products, which provided evidence that the agents used for tooth bleaching can rapidly pass through enamel and dentin into the pulp chamber. The dependence of the fluorescence of certain fluorochromes on their chemical environment, in this case the pH value, was to be exploited to visualize the acidity inside the tooth by fluorescence microscopy. An attempt was made to develop a ratiometric method in which the pH is determined using one pH-dependent and one pH-independent fluorochrome.
This method could not be verified for this particular application, because neutralization effects of the mineral tooth substance (hydroxyapatite) influence the pH distribution within the sample. Fluorescence techniques were also used, in a complementary fashion, to characterize covalently modified implant surfaces. The free amino groups introduced by silanization of titanium test specimens with triethoxyaminopropylsilane could be identified qualitatively using an amine-specific dye. This type of functionalization serves to make implant surfaces more conducive to the healing-in of implants into bone through the chemical coupling of adhesion-mediating proteins or peptides, stimulating bone-forming cells to improved attachment behavior. Cell counting in the adhesion assay was also performed with fluorescent dyes and yielded results demonstrating that the modification has a favorable influence on cell adhesion.
Abstract:
The aim of this thesis was to describe the development of motion analysis protocols for the upper and lower limbs using inertial-sensor-based systems. Such systems are relatively recent, so knowledge of methods and algorithms for their clinical use is limited compared with stereophotogrammetry. However, their advantages in terms of low cost, portability, and small size are a valid reason to pursue this direction. When developing motion analysis protocols based on inertial sensors, attention must be given to several aspects, such as the accuracy and reliability of the systems. Developing the specific algorithms, methods, and software needed to use these systems for a given application is as important as developing the motion analysis protocols themselves. For this reason, the goal of the three-year research project described in this thesis was pursued first of all by carefully designing the protocols, exploring and establishing which features were suitable for their specific applications. Optoelectronic systems were necessary because they provide a gold-standard, accurate measurement, which served as the reference for validating the inertial-sensor-based protocols. The protocols described in this thesis can be particularly helpful for rehabilitation centers in which the high cost of instrumentation or limited working areas do not allow the use of stereophotogrammetry. Moreover, many applications requiring upper and lower limb motion analysis outside the laboratory will benefit from these protocols, for example gait analysis along corridors. Outdoors, steady-state walking and the behavior of prosthetic devices when encountering slopes or obstacles during walking can also be assessed.
The application of inertial sensors to lower limb amputees presents conditions that are challenging for magnetometer-based systems, owing to the ferromagnetic materials commonly used in hydraulic components and motors. The INAIL Prostheses Centre stimulated and, together with Xsens Technologies B.V., supported the development of additional methods for improving the accuracy of the MTx in measuring the 3D kinematics of lower limb prostheses, with the results provided in this thesis. In the author's opinion, this thesis and the motion analysis protocols based on inertial sensors described here demonstrate how close collaboration between industry, clinical centers, and research laboratories can improve knowledge and exchange know-how, with the common goal of developing new application-oriented systems.
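The kind of sensor fusion that inertial systems rely on, and that magnetic disturbances complicate, can be illustrated with a 1-D complementary filter blending gyroscope and accelerometer estimates of a segment angle. This is a generic textbook scheme with made-up numbers, not the MTx's actual algorithm:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, k=0.98):
    """One step of a 1-D complementary filter for segment tilt.

    Integrates the gyroscope rate (accurate short-term, but drifts) and
    corrects it with the accelerometer-derived angle (noisy but drift-free).
    k weights the gyro path.  A simplified stand-in for the fusion
    algorithms inside inertial motion-capture units.
    """
    return k * (angle + gyro_rate * dt) + (1.0 - k) * accel_angle

# Static segment: gyro reads a small constant bias (0.5 deg/s),
# accelerometer reads the true 10-degree tilt.  The estimate converges
# near 10 degrees instead of drifting without bound.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.5,
                                 accel_angle=10.0, dt=0.01)
print(round(angle, 1))
```

The residual offset above 10 degrees shows how an uncorrected rate bias leaks into the estimate, which is why commercial filters also estimate and subtract sensor biases.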
Abstract:
The aim of this doctoral thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the aeronautical piston engine field, which had been dormant for several decades as interest shifted toward turbine aircraft, new materials with improved performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables can be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extremes, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, can offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a single-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass that satisfies the geometrical, structural, and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The design-variable bounds, which describe the solution domain from the geometrical point of view, are introduced as well.
In the Matlab® optimization environment, the objective function to be minimized is defined as the sum of the masses of the engine's propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods, and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection, and crossover. In the reproduction step, the elitist method is applied in order to save the fittest individuals from disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file in order to build an automatic 3D CAD model of each component of the propulsive system, giving a direct pre-visualization of the final product while still in the preliminary design phase. To show the performance of the algorithm and validate the optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a four-cylinder, four-stroke diesel. Many verifications are made on the mechanical components of the engine in order to test their feasibility and decide their survival through the generations. A system of inequalities describes the non-linear relations between the design variables and is used to check the components under static and dynamic load configurations. The geometrical bounds of the design variables are taken from actual engine data and similar design cases. Among the many simulations run to test the algorithm, twelve were chosen as representative of the distribution of the individuals. Then, as an example, the corresponding 3D models of the crankshaft and connecting rod were automatically built for each simulation.
In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) compared to the original configuration, and an acceptable robustness of the method has been demonstrated. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfill the feasibility specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up production and joining modern computational power and technological awareness to long-standing design experience.
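The loop described above (fitness evaluation, elitist reproduction, selection, crossover, mutation, and constraint checking via penalties) can be sketched on a toy mass-minimization problem. The objective, the single geometric constraint, and all operator settings below are hypothetical stand-ins, not the thesis's engine model:

```python
import random

random.seed(1)

def mass(x):
    """Toy 'mass' of a design vector x, plus a penalty for violating a
    hypothetical geometric constraint x[0] + x[1] >= 1."""
    m = sum(xi ** 2 for xi in x) + 1.0
    penalty = max(0.0, 1.0 - (x[0] + x[1])) * 100.0
    return m + penalty

def evolve(pop_size=40, genes=2, gens=60, elite=2):
    pop = [[random.uniform(0, 2) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mass)
        nxt = pop[:elite]                      # elitism: fittest survive intact
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)  # select among the fittest
            child = [(ai + bi) / 2 + random.gauss(0, 0.05)
                     for ai, bi in zip(a, b)]  # blend crossover + mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=mass)

best = evolve()
print(round(mass(best), 2))  # analytic optimum is 1.5 at x = (0.5, 0.5)
```

The penalty term plays the role of the system of inequalities in the thesis: infeasible individuals are not discarded outright but are heavily handicapped, so they rarely survive selection.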
Abstract:
This work presents exact, hybrid algorithms for mixed resource allocation and scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability but are mainly motivated by applications in the field of embedded system design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance, and low energy consumption, but the programmer must be able to effectively exploit the platform's parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is explained in part by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. Hybrid CP/OR approaches present the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an allocation and scheduling problem for the Cell BE processor by Sony, IBM, and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation, and heuristic-guided search.
Next, we address allocation and scheduling of so-called conditional task graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to deal effectively with the introduced stochastic elements. Finally, we address allocation and scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict-detection method. The proposed approaches achieve good results on practical-size problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
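For contrast with the exact methods proposed, the problem class itself (precedence-connected tasks on finite-capacity resources) can be illustrated with a greedy list-scheduling heuristic. The task set, durations, and identical-machine model below are hypothetical, and this baseline gives no optimality guarantee, unlike the hybrid CP/OR approaches:

```python
def list_schedule(durations, precedences, n_machines):
    """Greedy list scheduling of precedence-constrained tasks on
    identical machines.  durations: task -> duration; precedences:
    task -> set of predecessor tasks.  Returns the makespan."""
    finish = {}
    machines = [0.0] * n_machines  # time at which each machine frees up
    done = set()
    while len(done) < len(durations):
        # Pick any ready task (all predecessors finished).
        task = next(t for t in durations
                    if t not in done and precedences.get(t, set()) <= done)
        ready_at = max((finish[p] for p in precedences.get(task, set())),
                       default=0.0)
        m = min(range(n_machines), key=lambda i: machines[i])
        start = max(machines[m], ready_at)
        machines[m] = finish[task] = start + durations[task]
        done.add(task)
    return max(finish.values())  # makespan

# Hypothetical graph: A -> C and B -> C, on two machines.
tasks = {"A": 2.0, "B": 3.0, "C": 1.0}
prec = {"C": {"A", "B"}}
print(list_schedule(tasks, prec, 2))  # A and B in parallel, then C: 4.0
```

An exact approach explores (or prunes) the full space of allocations and orderings instead of committing greedily, which is what makes integrated allocation and scheduling hard for pure combinatorial methods.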
Abstract:
The general objective of this research is to explore theories and methodologies from the disciplines of sustainability indicators, environmental management, and decision making, with the operational purpose of producing scientific, robust, and relevant information to support system understanding and decision making in real case studies. Several tools have been applied in order to increase understanding of socio-ecological systems and to provide relevant information for choosing between alternatives. These tools have always been applied with the complexity of the issues, and the uncertainty tied to partial knowledge of the systems under study, in mind. Two case studies on performance measurement (environmental performance in the case of the K8 approach, and sustainable development performance in the case of the EU Sustainable Development Strategy) and a case study on the selection of sustainable development indicators among municipalities in Scotland are discussed in the first part of the work. In the second part, the common denominator among the subjects is the application of spatial indices and indicators to operational problems in land use management within the province of Ravenna (Italy). The main conclusion of the thesis is that a 'perfect' methodological approach that always produces the best results in assessing sustainability performance does not exist. Rather, there is a pool of correct approaches answering different evaluation questions, to be used when the methodology fits the purpose of the analysis. For this reason, methodological limits and conceptual assumptions, as well as the consistency and transparency of the assessment, become the key factors for judging the quality of the analysis.
Abstract:
The present work deals with legal questions surrounding rating portals on the Internet. Its central topics are the permissibility of publishing user-submitted ratings in light of potentially conflicting data-protection provisions and the personality rights of those affected. It further discusses the legal remedies available to those affected and, in this context, examines the liability risks of forum operators. The work covers both online marketplaces such as eBay, where both the rater and the rated party are registered and have in principle consented to the rating procedure (closed portals), and portals on which ratings can be submitted freely, often under a pseudonym and without prior registration, covering anything from product characteristics and services to personal attributes of the rated party (open portals). Introduction and Part One: After an introduction to the problem, the first part briefly presents the various types of rating portals. The work distinguishes between so-called closed portals (transaction-accompanying portals such as eBay or Amazon) on the one hand and open portals (product, hotel, and service rating portals) on the other. Part Two: The second part examines whether publishing user-submitted ratings on open portals is permissible at all, or whether the personality rights of those affected, in particular the right to informational self-determination as embodied in data-protection law, render free rating submission impermissible. In this context, the claims of affected parties to deletion or removal under § 35(2) sentence 2 no. 1 BDSG and §§ 1004 in conjunction with 823(1) BGB (general personality right) are examined in detail.
With regard to data-protection law, the work concludes that ratings constitute personal data subject to data-protection provisions, and that under the principle in German law that processing is prohibited unless expressly permitted, publication of the ratings is generally ruled out. Against the background of this legal situation, which no longer does justice to the actual circumstances and interests at play on the Internet, the author then discusses whether the data-protection provisions must in these cases be limited by the constitutionally guaranteed freedoms of information. After a detailed discussion of the legal situation, addressing the particularities of the individual portals, the work concludes that the permissibility of publishing ratings depends on a balancing of interests in each individual case. As a rule of thumb, however: if the rated activity or person is, with respect to the rated attribute, already accessible to a broad public, publication of the data appears unobjectionable. By contrast, a claim to deletion or removal must be affirmed for ratings concerning activities or attributes of the rated party that bear no relation to him or her as a public figure. The work then turns to the personality rights of those affected and the resulting claims to removal and injunctive relief under §§ 1004(1), 823(1) BGB, but ultimately denies the applicability of these bases of claim because the special statutory provisions of the Federal Data Protection Act take precedence. Finally, this part briefly addresses the permissibility of rating legal persons, which is affirmed in principle.
Part Three: Insofar as the second part concludes that publication of the ratings is permissible, the third part asks what options the law offers the rated party for taking action against negative ratings. Claims under data-protection, tort, contract, and unfair-competition law are examined. A focus of this part is the presentation of current case law on when a rating constitutes an assertion of fact versus a value judgment, and the differing consequences for the affected party's injunction claim. Ratings that constitute expressions of opinion enjoy the strong protection of freedom of expression; the limits of permissibility here are essentially only abusive criticism and insult. Stricter standards, by contrast, apply to assertions of fact. In this context, the work examines whether contractual relationships between the parties involved (raters, rated parties, and portal operators) restrict freedom of expression, which is affirmed at least for closed portals. Part Four: The fourth part of the work deals with "too-good" ratings, i.e. unfair-competition claims in cases of covert self-rating. Such self-ratings, disseminated on rating portals under the cover of pseudonymity as an advertising tool for image improvement without revealing the true author, are in principle impermissible under unfair-competition law. Part Five: The final part addresses the responsibility of portal operators for unlawful ratings. It first establishes that user-submitted ratings constitute third-party content, so that the liability privileges of § 11(1) TDG and § 9(1) MDStV apply, under which forum operators are not responsible for unlawful ratings at least as long as they have no knowledge of them. Since, according to the case law of the Federal Court of Justice, this liability privilege does not extend to Störerhaftung (interferer liability), the scope of the monitoring duties imposed on forum operators under that doctrine is discussed. The work concludes that where the identity of the rating's author is known to its addressee, the forum operator's obligations are limited to removing or blocking the unlawful rating. Where the rater's identity is unknown, forum operators are liable as co-interferers, and the affected party is entitled to injunctive relief against them as well.
Abstract:
In the present work, a novel route to a multitude of polymer structures based on the clinically approved polymer poly(N-(2-hydroxypropyl)methacrylamide) (PHPMA) was developed. The synthetic approach relies on the use of reactive-ester polymers on the one hand and on the reversible addition-fragmentation chain transfer (RAFT) polymerization method on the other. This form of controlled radical polymerization made it possible to prepare not only better-defined homopolymers but also statistical and block copolymers. The reactive-ester polymers can be converted into HPMA-based systems by simple aminolysis; they can therefore be regarded as a promising basis for the synthesis of extensive polymer libraries. The polymers prepared combine different functionalities at a constant degree of polymerization, allowing optimization toward a specific application without changing the chain-length parameter. Furthermore, RAFT polymerization made it possible to prepare partially biodegradable block copolymers based on polylactides and HPMA by coupling a chain-transfer agent (CTA) to a well-defined polylactide homopolymer. These structures were varied in composition, furnished with recognition moieties (folates) and labels (fluorescent dyes and positron (β+)-emitting radionuclides), and subsequently evaluated in vitro and in vivo. Building on these achievements, the influence of the polymer microstructure on aggregation behavior could be investigated by light scattering and fluorescence correlation spectroscopy. It was shown that only this information on superstructure formation can explain the kinetics of cellular uptake.
This demonstrated the important role of structure-property relationships. Thus, in addition to the synthesis, characterization, and first biological evaluations, a contribution was made to a better understanding of the interaction of polymeric particles with biological systems.
Abstract:
Plasmodium cysteine proteases are essential for host-cell invasion and egress, hemoglobin degradation, and intracellular development of the parasite. The temporal, site-specific regulation of cysteine-protease activity is a prerequisite for survival and propagation of Plasmodium. Recently, a new family of inhibitors of cysteine proteases (ICPs) with homologs in at least eight Plasmodium species was identified. Here, we report the 2.6 Å X-ray crystal structure of the C-terminal, inhibitory domain of ICP from P. berghei (PbICP-C) in a 1:1 complex with falcipain-2, an important hemoglobinase of Plasmodium. The structure establishes Plasmodium ICP as a member of the I42 class of chagasin-like protease inhibitors, but with large insertions and differences in binding mode relative to other family members. Furthermore, the PbICP-C structure explains why host-cell cathepsin B-like proteases and, most likely, the protease-like domain of Plasmodium SERA5 (serine-repeat antigen 5) are not targets of ICP.
Abstract:
Gill disease in salmonids is characterized by a multifactorial aetiology. Epitheliocystis of the gill lamellae caused by obligate intracellular bacteria of the order Chlamydiales is one known factor; however, their diversity has greatly complicated analyses to establish a causal relationship. In addition, tracing infections to a potential environmental source is currently impossible. In this study, we address these questions by investigating a wild brown trout (Salmo trutta) population from seven different sites within a Swiss river system. One age class of fish was followed over 18 months. Epitheliocystis occurred in a site-specific pattern, associated with peak water temperatures during summer months. No evidence of a persistent infection was found within the brown trout population, implying an as yet unknown environmental source. For the first time, we detected 'Candidatus Piscichlamydia salmonis' and 'Candidatus Clavochlamydia salmonicola' infections in the same salmonid population, including dual infections within the same fish. These organisms are strongly implicated in gill disease of caged Atlantic salmon in Norway and Ireland. The absence of aquaculture production within this river system and the distance from the sea suggest a freshwater origin for both of these bacteria and offer new possibilities to explore their ecology free from aquaculture influences.
Abstract:
Click chemistry is a powerful technology for the functionalization of therapeutic proteins with effector moieties, because of its potential for bio-orthogonal, regio-selective, and high-yielding conjugation under mild conditions. Designed Ankyrin Repeat Proteins (DARPins), a novel class of highly stable binding proteins, are particularly well suited for the introduction of clickable methionine surrogates such as azidohomoalanine (Aha) or homopropargylglycine (Hpg), since the DARPin scaffold can be made methionine-free by an M34L mutation in the N-cap which fully maintains the biophysical properties of the protein. A single N-terminal azidohomoalanine, replacing the initiator Met, is incorporated in high yield, and allows preparation of "clickable" DARPins at about 30 mg per liter of E. coli culture, fully retaining stability, specificity, and affinity. For a second modification, we introduced a cysteine at the C-terminus. Such DARPins could be conveniently and site-specifically linked to two moieties: polyethylene glycol (PEG) at the N-terminus and the fluorophore Alexa488 at the C-terminus. As an example, we present a DARPin selected against the epithelial cell adhesion molecule (EpCAM) with excellent properties for tumor targeting. We used these doubly modified molecules to measure binding kinetics on tumor cells and found that PEGylation has no effect on the dissociation rate but slightly decreases the association rate and the maximal number of cell-bound DARPins, fully consistent with our previous model of PEG action obtained in vitro. Our data demonstrate the benefit of click chemistry for site-specific modification of binding proteins like DARPins to conveniently add several functional moieties simultaneously for various biomedical applications.
Abstract:
During the resolution of inflammatory responses, neutrophils rapidly undergo apoptosis. A direct and fast activation of caspase-8 by cathepsin D was shown to be crucial in the initial steps of neutrophil apoptosis. Nevertheless, the activation mechanism of caspase-8 remains unclear. Here, by using site-specific mutants of caspase-8, we show that both cathepsin D-mediated proteolysis and homodimerization of caspase-8 are necessary to generate an active caspase-8. At acidic pH, cathepsin D specifically cleaved caspase-8 but not the initiator caspase-9 or -10 and significantly increased caspase-8 activity in dimerizing conditions. These events were completely abolished by pepstatin A, a pharmacological inhibitor of cathepsin D. The cathepsin D intra-chain proteolysis greatly stabilized the active site of caspase-8. Moreover, the main caspase-8 fragment generated by cathepsin D cleavage could be affinity-labeled with the active site probe biotin-VAD-fluoromethyl ketone, suggesting that this fragment is enzymatically active. Importantly, in an in vitro cell-free assay, the addition of recombinant human caspase-8 protein, pre-cleaved by cathepsin D, was followed by caspase-3 activation. Our data therefore indicate that cathepsin D is able to initiate the caspase cascade by direct activation of caspase-8. As cathepsin D is ubiquitously expressed, this may represent a general mechanism to induce apoptosis in a variety of immune and nonimmune cells.
Abstract:
The interaction of immunoglobulin E (IgE) antibodies with the high-affinity receptor, FcεRI, plays a central role in initiating most allergic reactions. The IgE-receptor interaction has been targeted for treatment of allergic diseases, and many high-affinity macromolecular inhibitors have been identified. Small-molecule inhibitors would offer significant advantages over current anti-IgE treatment, but no candidate compounds have been identified and fully validated. Here, we report the development of a time-resolved fluorescence resonance energy transfer (TR-FRET) assay for monitoring the IgE-receptor interaction. The TR-FRET assay measures an increase in fluorescence intensity as a donor lanthanide fluorophore is recruited into complexes of site-specifically Alexa Fluor 488-labeled IgE-Fc and His-tagged FcεRIα proteins. The assay can readily monitor classic competitive inhibitors that bind either IgE-Fc or FcεRIα in equilibrium competition binding experiments. Furthermore, the TR-FRET assay can also be used to follow the kinetics of IgE-Fc-FcεRIα dissociation and identify inhibitory ligands that accelerate the dissociation of preformed complexes, as demonstrated for an engineered DARPin (designed ankyrin repeat protein) inhibitor. The TR-FRET assay is suitable for high-throughput screening (HTS), as shown by performing a pilot screen of the National Institutes of Health (NIH) Clinical Collection Library in a 384-well plate format.
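The distinction drawn above, between classic competitive inhibitors and ligands that actively accelerate dissociation of preformed complexes, can be sketched with simple first-order decay of the complex. All numbers below (basal off-rate, acceleration factor, time point) are hypothetical illustrative values, not data from the study.

```python
import math

def complex_remaining(t, koff):
    """Fraction of preformed IgE-Fc/FcεRIα complex remaining at time t,
    assuming simple first-order dissociation with rebinding blocked
    (e.g. by excess competitor), as in a TR-FRET dissociation time course."""
    return math.exp(-koff * t)

def accelerated_koff(koff_basal, factor):
    """A disruptive inhibitor that actively accelerates dissociation is
    modeled here as multiplying the apparent off-rate (hypothetical
    facilitated-dissociation model)."""
    return koff_basal * factor

koff = 1e-4                                # s^-1, hypothetical basal off-rate
koff_fast = accelerated_koff(koff, 50.0)   # e.g. a DARPin-type disruptive ligand

t = 3600.0                                 # one hour
print(round(complex_remaining(t, koff), 3),
      round(complex_remaining(t, koff_fast), 3))
```

A pure competitive inhibitor would leave the basal off-rate unchanged and merely prevent rebinding, so the slow decay (first value) is the reference against which accelerated dissociation is detected.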
Abstract:
Classical antibody-based serotyping of Escherichia coli is an important method in diagnostic microbiology for epidemiological purposes, as well as for a rough virulence assessment. However, serotyping is so tedious that its use is restricted to a few reference laboratories. To improve this situation, we developed and validated a genetic approach to serotyping based on microarray technology. The genes encoding the O-antigen flippase (wzx) and the O-antigen polymerase (wzy) were selected as target sequences for the O antigen, whereas fliC and related genes, which code for the flagellar monomer, were chosen as representatives for the H phenotype. Starting with a detailed bioinformatic analysis and oligonucleotide design, an ArrayTube-based assay was established: a fast and robust DNA extraction method was coupled with a site-specific, linear multiplex labeling procedure and hybridization analysis of the biotinylated amplicons. The microarray contained oligonucleotide DNA probes, each in duplicate, representing 24 of the most epidemiologically relevant of the more than 180 known O antigens (O antigens 4, 6 to 9, 15, 26, 52, 53, 55, 79, 86, 91, 101, 103, 104, 111, 113, 114, 121, 128, 145, 157, and 172) as well as 47 of the 53 different H antigens (H antigens 1 to 12, 14 to 16, 18 to 21, 23 to 34, 37 to 43, 45, 46, 48, 49, 51 to 54, and 56). Evaluation of the microarray with a set of defined strains representing all O and H serotypes covered revealed high sensitivity and high specificity. All 24 of the conventionally typed O groups and all 47 H serotypes were correctly identified. Moreover, strains that were nonmotile or nontypeable by previous serotyping assays yielded unequivocal results with the novel ArrayTube assay, which proved to be a valuable alternative to classical serotyping, allowing single colonies to be processed within a single working day.
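The duplicate-spot probe layout described above suggests a straightforward calling rule: score a probe positive only when both replicate spots exceed a signal cutoff, then read the serotype from the set of positive wzx/wzy/fliC probes. The sketch below illustrates that idea; the probe names, signal values, and threshold are hypothetical and do not come from the published assay.

```python
# Hypothetical sketch of serotype calling from duplicate-probe signals.
# A probe counts as positive only if BOTH spots clear the cutoff,
# mimicking the duplicate-spot layout described for the ArrayTube microarray.

THRESHOLD = 0.3  # hypothetical normalized signal cutoff

def call_serotype(signals):
    """signals: {probe_name: (spot1, spot2)} -> sorted list of positive probes."""
    return sorted(name for name, (a, b) in signals.items()
                  if a >= THRESHOLD and b >= THRESHOLD)

# Hypothetical hybridization result consistent with serotype O157:H7
signals = {
    "wzx_O157": (0.85, 0.91),   # O-antigen flippase probe
    "wzy_O157": (0.78, 0.82),   # O-antigen polymerase probe
    "fliC_H7":  (0.66, 0.70),   # flagellin probe
    "wzx_O26":  (0.05, 0.41),   # only one spot above cutoff -> rejected
}
print(call_serotype(signals))   # ['fliC_H7', 'wzx_O157', 'wzy_O157']
```

Requiring agreement between both replicate spots is one simple way such an array can suppress spurious single-spot signals, which is one motivation for spotting each probe in duplicate.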