827 results for efficient vulcanisation (EV)
Abstract:
Parallel kinematic structures are considered very suitable architectures for positioning and orienting the tools of robotic mechanisms. However, developing dynamic models for this kind of system is sometimes a difficult task: the direct application of traditional robotics methods for modelling and analysing such systems usually does not lead to efficient and systematic algorithms. This work addresses that issue by presenting a modular approach to generating the dynamic model and showing how, through some convenient modifications, these methods can be made applicable to parallel structures as well. Kane's formulation for obtaining the dynamic equations is shown to be one of the easiest ways to deal with redundant coordinates and kinematic constraints, so that a suitable choice of a set of coordinates allows the remainder of the modelling procedure to be computer aided. The advantages of this approach are discussed in the modelling of a 3-dof asymmetric parallel mechanism.
Abstract:
In this work we present a methodology that applies the many-body expansion to decrease the computational cost of ab initio molecular dynamics while keeping acceptable accuracy. We implemented this methodology in a program called ManBo. In the many-body expansion approach, the total energy E of the system is partitioned into contributions of one body, two bodies, three bodies, and so on, up to the contribution of the Nth body [1-3]: E = E1 + E2 + E3 + … + EN. The term E1 is the sum of the internal energies of the molecules; E2 is the energy due to the interaction between all pairs of molecules; E3 is the energy due to the interaction between all trios of molecules; and so on. In ManBo we chose to truncate the expansion at the two- or three-body contribution, both for the calculation of the energy and for the calculation of the atomic forces. To partially recover the many-body interactions neglected by truncating the expansion, an electrostatic embedding can be included in the electronic structure calculations, instead of treating the monomers, pairs and trios as isolated molecules in space. In our simulations we chose water molecules and used Gaussian 09 as the external program to calculate the atomic forces and energy of the system, as well as the reference program for analyzing the accuracy of the results obtained with ManBo. The results show that the many-body expansion is an interesting approach for reducing the still prohibitive computational cost of ab initio molecular dynamics. The errors introduced in the atomic forces by this methodology are very small, and including an electrostatic embedding proves a good way to improve the results with only a small increase in simulation time.
As the level of calculation increases, the simulation time of ManBo tends to decrease substantially relative to a conventional BOMD simulation in Gaussian, owing to the better scalability of the presented methodology. References: [1] E. E. Dahlke and D. G. Truhlar, J. Chem. Theory Comput. 3, 46 (2007). [2] E. E. Dahlke and D. G. Truhlar, J. Chem. Theory Comput. 4, 1 (2008). [3] R. Rivelino, P. Chaudhuri and S. Canuto, J. Chem. Phys. 118, 10593 (2003).
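The truncated expansion described in the abstract can be sketched in a few lines. This is a minimal illustration only: the fragment energies, returned here by a caller-supplied `energy_fn`, would in ManBo come from an external ab initio code such as Gaussian 09, and the function name and interface are assumptions for the sketch.

```python
from itertools import combinations

def mbe_energy(monomer_ids, energy_fn, order=2):
    """Truncated many-body expansion of the total energy.

    E ~ sum_i E(i)                                   (one-body)
      + sum_{i<j} [E(ij) - E(i) - E(j)]              (two-body corrections)
      + sum_{i<j<k} [E(ijk) - pair terms - monomers] (three-body corrections)

    `energy_fn(frozenset_of_ids)` returns the energy of that fragment.
    """
    e1 = {i: energy_fn(frozenset([i])) for i in monomer_ids}
    total = sum(e1.values())
    e2 = {}
    for i, j in combinations(monomer_ids, 2):
        pair = frozenset((i, j))
        # two-body correction: pair energy minus the isolated monomers
        e2[pair] = energy_fn(pair) - e1[i] - e1[j]
        total += e2[pair]
    if order >= 3:
        for i, j, k in combinations(monomer_ids, 3):
            trio = energy_fn(frozenset((i, j, k)))
            # subtract the lower-order pieces already counted
            total += (trio
                      - e2[frozenset((i, j))]
                      - e2[frozenset((i, k))]
                      - e2[frozenset((j, k))]
                      - e1[i] - e1[j] - e1[k])
    return total
```

For a system whose true energy is exactly pairwise additive, the order-2 truncation is already exact and the three-body corrections vanish, which is a convenient sanity check.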
Abstract:
The ubiquity of time series data across almost all human endeavors has produced great interest in time series data mining over the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure. In this work we make a somewhat surprising claim: there is an invariance that the community seems to have missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, in clustering this effect can introduce errors by “suggesting” to the clustering algorithm that subjectively similar but complex objects belong in a sparser, larger-diameter cluster than is truly warranted. We introduce the first complexity-invariant distance measure for time series and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms.
We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
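A sketch of the kind of measure described above: in its published form the complexity-invariant distance multiplies the Euclidean distance by a correction factor, the ratio of the two series' complexity estimates, where complexity is estimated as the length of the line the series traces. Treat the exact constants and guard value below as illustrative assumptions.

```python
import numpy as np

def complexity(ts):
    # complexity estimate: length of the line the series traces,
    # sqrt of the sum of squared consecutive differences
    return np.sqrt(np.sum(np.diff(ts) ** 2))

def cid(q, c):
    """Complexity-invariant distance: Euclidean distance scaled by the
    ratio max(CE)/min(CE) of the two complexity estimates (>= 1)."""
    q = np.asarray(q, float)
    c = np.asarray(c, float)
    ed = np.linalg.norm(q - c)
    cq, cc = complexity(q), complexity(c)
    cf = max(cq, cc) / max(min(cq, cc), 1e-12)  # guard against flat series
    return ed * cf
```

Two equally complex series are compared by plain Euclidean distance (correction factor 1), while a complex/simple pair has its distance inflated, which is exactly the effect the abstract describes.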
Abstract:
A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10^18 eV at the Pierre Auger Observatory is reported. For the first time, these large-scale anisotropy searches are performed as a function of both right ascension and declination and are expressed in terms of dipole and quadrupole moments. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Upper limits on dipole and quadrupole amplitudes are derived under the hypothesis that any cosmic ray anisotropy is dominated by such moments in this energy range. These upper limits provide constraints on the production of cosmic rays above 10^18 eV, since they allow us to challenge an origin from stationary galactic sources densely distributed in the galactic disk and emitting predominantly light particles in all directions.
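The abstract does not spell out the estimator; a common textbook starting point for the equatorial dipole component is a first-harmonic (Rayleigh) analysis in right ascension, sketched here as an assumption-laden illustration rather than the Auger analysis itself.

```python
import numpy as np

def first_harmonic_amplitude(ra_deg):
    """First-harmonic (Rayleigh) analysis in right ascension.

    Returns the amplitude r and phase (deg) of the first harmonic of
    the arrival-direction distribution; for a pure dipole, r traces the
    dipole's equatorial component.
    """
    a = np.radians(np.asarray(ra_deg, float))
    A = 2.0 * np.mean(np.cos(a))
    B = 2.0 * np.mean(np.sin(a))
    r = np.hypot(A, B)
    phase = np.degrees(np.arctan2(B, A)) % 360.0
    return r, phase
```

As a check, sampling arrival directions from a distribution proportional to 1 + d·cos(alpha) recovers amplitude ≈ d and phase ≈ 0.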
Abstract:
This work reports on the construction and spectroscopic analysis of optical micro-cavities (OMCs) that emit efficiently at ~1535 nm. The emission wavelength matches the third transmission window of commercial optical fibers, and the OMCs were entirely silicon-based. The sputtering deposition method was adopted in the preparation of the OMCs, which comprised two Bragg reflectors and one spacer layer made of either Er- or ErYb-doped amorphous silicon nitride. The luminescence signal extracted from the OMCs originated from the 4I13/2→4I15/2 transition (due to Er3+ ions), and its intensity proved to be highly dependent on the presence of Yb3+ ions. According to the results, the Er3+-related light emission was improved by a factor of 48 when combined with Yb3+ ions and inserted in the spacer layer of the OMC. The results also showed the effectiveness of the present experimental approach in producing Si-based light-emitting structures whose main characteristics are: (a) compatibility with the present-day microelectronics industry, (b) the deposition of optical-quality layers with accurate composition control, and (c) no need for uncommon elements or compounds, nor for extensive thermal treatments. Along with the fundamental characteristics of the OMCs, this work also discusses the impact of the Er3+-Yb3+ ion interaction on the emission intensity as well as the potential of the present findings.
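For illustration, the normal-incidence reflectance of a Bragg reflector like the two used in such OMCs can be computed with the standard transfer-matrix method. The refractive indices, substrate, and layer counts below are placeholder assumptions, not the thesis' actual stack.

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=3.5):
    """Normal-incidence reflectance of a dielectric multilayer via the
    characteristic (transfer) matrix of each layer."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength   # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Quarter-wave stack at the design wavelength of ~1535 nm
# (indices are illustrative, loosely silicon-nitride-like)
lam0, nH, nL = 1535.0, 3.0, 1.9

def quarter_wave_stack(pairs):
    ns, ds = [], []
    for _ in range(pairs):
        ns += [nH, nL]
        ds += [lam0 / (4 * nH), lam0 / (4 * nL)]
    return ns, ds
```

As expected for a quarter-wave mirror, the reflectance at the design wavelength grows rapidly with the number of layer pairs.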
Abstract:
Objective: the objective of this work was to study the morphological and ultrastructural aspects of the genesis of blood capillaries in the skeletal muscle of the caudal limb of rats subjected to ischemia under the action of Prostaglandin E1 (PGE1), administered intramuscularly or intravenously. Methods: 60 rats (Rattus norvegicus albinus), Wistar-UEM strain, were used, randomly distributed into three groups of 20, each equally redistributed into two subgroups observed on the 7th and 14th days: a control group in which only limb ischemia was induced, one with ischemia plus intramuscular (IM) injection of PGE1, and one with ischemia plus intravenous (IV) injection of PGE1. For the analysis of the results, hematoxylin & eosin (HE) staining, immunohistochemistry and transmission electron microscopy (TEM) were performed. Results: a statistically significant increase in the number of capillaries was found in the subgroups treated with IM and IV PGE1, by counting in the HE-stained sections. Capillaries and larger-caliber vessels were labeled in these same subgroups; however, this reaction was not effective for quantifying the capillaries. TEM provided evidence of the formation of new capillaries. Conclusions: PGE1, administered IM or IV, promoted, after 14 days of observation, an increase in the number of capillaries in the skeletal muscle of rats subjected to ischemia, identifiable histologically with HE staining. The ultrastructural analysis found changes suggesting that, in the animals under the action of PGE1, vascular neoformation may have occurred by angiogenesis and vasculogenesis. Immunostaining, despite labeling capillaries and larger vessels, did not allow a correlation to be established with the increase in vessels found with HE staining.
Abstract:
The Eduardo Mondlane irrigation scheme, situated in Chókwè District - in the southern part of Gaza province, within the Limpopo River Basin - is the largest in the country, covering approximately 30,000 hectares of land. Built by the Portuguese colonial administration in the 1950s to exploit the agricultural potential of the area through cash-cropping, after Independence it became one of Frelimo's flagship projects aiming at the "socialization of the countryside" and at agricultural economic development through the creation of a state farm and of several cooperatives. The failure of Frelimo's economic reforms, several infrastructural constraints and local farmers' resistance to collective forms of production led the scheme to a state of severe degradation, aggravated by the floods of the year 2000. A project of technical rehabilitation initiated after the floods is currently accompanied by a strong "efficiency" discourse from the managing institution that strongly opposes the use of irrigated land for subsistence agriculture, historically a major livelihood strategy for small farmers, particularly women. In fact, since the end of the 19th century the area has been characterized by a stable pattern of male migration towards South African mines, which has resulted in a steady increase of women-headed households (both de jure and de facto). The relationship between land reform, agricultural development, poverty alleviation and gender equality in Southern Africa has long been debated in the academic literature. Within this debate, the role of agricultural activities in irrigation schemes is particularly interesting considering that, in a drought-prone area, having access to water for irrigation means increased possibilities of improving food and livelihood security, and income levels.
In the case of Chókwè, local government institutions are endorsing the development of commercial agriculture through initiatives such as partnerships with international cooperation agencies or joint ventures with private investors. While these business models can sometimes lead to positive outcomes in terms of poverty alleviation, it is important to recognize that decentralization and neoliberal reforms occur in the context of a financial and political crisis of the State, which lacks the resources to efficiently manage infrastructures such as irrigation systems. Such institutional and economic reforms risk accelerating processes of social and economic marginalisation, including landlessness, in particular for poor rural women, who mainly use irrigated land for subsistence production. The study combines an analysis of the historical and geographical context with a review of the relevant literature and original fieldwork. Fieldwork was conducted between February and June 2007 (when I mainly collected secondary data, maps and statistics and made a preliminary visit to Chókwè) and from October 2007 to March 2008. The fieldwork methodology was qualitative and used semi-structured interviews with central and local government officials, technical experts of the irrigation scheme, civil society organisations, international NGOs, rural extensionists, and water users from the irrigation scheme, in particular women small farmers who are members of local farmers' associations. Thanks to the collaboration with the Union of Farmers' Associations of Chókwè, I was able to participate in members' meetings and in education and training activities addressed to women farmer members of the Union, and to organize a group discussion. In the Chókwè irrigation scheme, women account for 32% of the water users of the family sector (comprising plot-holders with less than 5 hectares of land) and for just 5% of the private sector.
If one considers the farmers' associations of the family sector (a legacy of Frelimo's cooperatives), women are 84% of total members. However, the security given to them by the land title they have acquired through occupation is severely endangered by the use they make of the land, which is considered "non-efficient" by the irrigation scheme authority. Due to reduced access to marketing possibilities and to inputs, training, information and credit, women in actual fact risk seeing their right to access land and water revoked, because they are not able to sustain the increasing cost of the water fee. The myth of the "efficient producer" does not take into consideration the inequality and gender discrimination that characterize the neoliberal market. Expecting small farmers, and in particular women, to be able to compete in the globalized agricultural market seems unrealistic, and can perpetuate unequal gendered access to resources such as land and water.
Abstract:
Resonance ionization mass spectrometry (RIMS) combines high elemental selectivity with good detection efficiency. Owing to these properties, the method is well suited for ultra-trace analysis and for investigations of rare or difficult-to-handle elements. In RIMS, neutral atoms are resonantly excited, in one or several steps, to energetically high-lying levels with monochromatic laser light and are then ionized by a further laser beam or by an electric field. The photoions are registered mass-selectively in a mass spectrometer. One application of RIMS is the precise determination of the ionization energy as a fundamental physico-chemical property of a given element; knowledge of the ionization energy is of particular interest for the actinides, since until the advent of laser mass-spectrometric methods only few experimental data existed for them. The ionization energy is determined by the method of photoionization in an electric field, following the classical saddle-point model. In the experiment, neutral atoms in an atomic beam are first resonantly excited by laser light. The excited atoms sit in an external static electric field and are ionized by a further laser beam whose wavelength is scanned. Crossing the ionization threshold shows up as a steep rise in the ion signal. This measurement is carried out at several electric field strengths; plotting the ionization thresholds against the square root of the electric field strength and extrapolating to zero field yields the ionization energy. Within this work, the ionization energy of actinium was experimentally determined for the first time as 43398(3) cm-1, corresponding to 5.3807(4) eV.
For this, actinium atoms were first resonantly excited in a single step with a laser at a wavelength of 388.67 nm to a state at 25729.03 cm-1 and subsequently ionized with laser light at a wavelength of about 568 nm. With this result, the ionization energies of all actinides up to and including einsteinium, with the exception of protactinium, are known. As the atomic beam source, a special 'sandwich filament' is used, in which the actinide is deposited as a hydroxide on a tantalum foil and covered with a reducing cover layer. On heating this assembly, the actinide evaporates atomically. For the heavier actinides, titanium was used as the cover layer. To produce an actinium atomic beam, zirconium was used for the first time instead of titanium, because of the high evaporation temperatures. For protactinium, thorium, which has even stronger reducing properties, was used as the cover material. Nevertheless, it was not possible to produce a protactinium atomic beam with the 'sandwich technique'; in the time-of-flight apparatus only a protactinium monoxide ion signal was detected. To explore a solid-state laser system that had only recently become available, the known ionization energies of gadolinium and plutonium were additionally redetermined; the measured values agree well with the literature data. Furthermore, an existing separation procedure for plutonium from environmental samples was adapted to the matrices seawater and house dust, and was used for the determination of plutonium and its isotopic composition in various sample series by means of RIMS. The modified separation procedure allows the rapid processing of large sample quantities for screening surveys of plutonium contamination. The determined 239Pu contents lay between 8.2*10^7 atoms per 10 l seawater sample and 1.7*10^9 atoms per gram of dust sample.
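The zero-field extrapolation described above can be illustrated numerically. In the classical saddle-point model the field lowers the ionization threshold by roughly 6.12·sqrt(F [V/cm]) cm-1, so a linear fit of threshold versus sqrt(F) recovers the ionization energy as the intercept. The field strengths below are synthetic illustration values, not the thesis' data; only the 43398 cm-1 result is taken from the abstract.

```python
import numpy as np

# Synthetic thresholds following the classical saddle-point model:
# E_thr(F) = E_IP - 6.12 * sqrt(F [V/cm])  (in cm^-1)
E_IP = 43398.0                                   # cm^-1, actinium (this work)
F = np.array([100.0, 400.0, 900.0, 1600.0])      # V/cm, illustrative values
thresholds = E_IP - 6.12 * np.sqrt(F)

# Linear fit of threshold vs sqrt(F); the intercept at F = 0 is E_IP
slope, intercept = np.polyfit(np.sqrt(F), thresholds, 1)
```

With real data the measured thresholds scatter, and the fit's intercept uncertainty propagates into the quoted error of the ionization energy.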
Abstract:
Recent developments in the theory of plasma-based collisionally excited x-ray lasers (XRL) have shown an optimization potential based on the dependence of the absorption region of the pumping laser on its angle of incidence on the plasma. For the experimental proof of this idea, a number of diagnostic schemes were developed, tested, qualified and applied. A high-resolution imaging system, yielding the keV emission profile perpendicular to the target surface, provided the positions of the hottest plasma regions, of interest for the benchmarking of plasma simulation codes. The implementation of a highly efficient spectrometer for the plasma emission made it possible to gain information about the abundance of the ionization states necessary for laser action in the plasma. The intensity distribution and deflection angle of the pump laser beam could be imaged for single XRL shots, giving access to its refraction process within the plasma. During a European collaboration campaign at the Lund Laser Center, Sweden, the optimization of the pumping laser incidence angle resulted in a reduction of the required pumping energy for a Ni-like Mo XRL, which enabled operation at a repetition rate of 10 Hz. Using the experience gained there, the XRL performance at the PHELIX facility, GSI Darmstadt, was significantly improved with respect to achievable repetition rate and operation at wavelengths below 20 nm, and important information for the development towards multi-100 eV plasma XRLs was acquired. Due to the setup improvements achieved during the work for this thesis, the PHELIX XRL system has now reached a degree of reproducibility and versatility sufficient for demanding applications like the XRL spectroscopy of heavy ions. In addition, a European research campaign aiming at plasma XRLs approaching the water window (wavelengths below 5 nm) was initiated.
Abstract:
The only nuclear-model-independent method for determining the nuclear charge radii of short-lived radioactive isotopes is the measurement of the isotope shift. For light elements (Z < 10) extremely high accuracy in experiment and theory is required, and so far this has been reached only for He and Li. The nuclear charge radii of the lightest elements are of great interest because these elements have isotopes which exhibit so-called halo nuclei. Such nuclei are characterized by a very exotic nuclear structure: they have a compact core and an area of less dense nuclear matter that extends far from this core. Examples of halo nuclei are 6^He, 8^He, 11^Li and 11^Be, the last of which is investigated in this thesis. Furthermore, these isotopes are of interest because up to now the nuclear structure can be calculated ab initio only for such few-nucleon systems. At the Institut für Kernchemie at the Johannes Gutenberg-Universität Mainz, two approaches with different accuracy were developed, whose goal was the measurement of the isotope shifts between (7,10,11)^Be^+ and 9^Be^+ in the D1 line. The first approach is laser spectroscopy on laser-cooled Be^+ ions trapped in a linear Paul trap; the accessible accuracy should be on the order of some 100 kHz. In this thesis two types of linear Paul traps were developed for this purpose. Moreover, the peripheral experimental setup was simulated and constructed. It allows the efficient deceleration of fast ions with an initial energy of 60 keV down to some eV and their efficient transport into the ion trap. For one of the Paul traps, ion trapping could already be demonstrated, while the optical detection of captured 9^Be^+ ions could not be completed, because the development work was delayed by the second approach. The second approach uses the technique of collinear laser spectroscopy, which has been applied over the last 30 years to measure the isotope shifts of plenty of heavier isotopes.
For light elements (Z < 10), however, it was so far not possible to reach the accuracy required to extract information about nuclear charge radii. The combination of collinear laser spectroscopy with the most modern methods of frequency metrology finally permitted the first determination of the nuclear charge radii of (7,10)^Be and of the one-neutron halo nucleus 11^Be at the COLLAPS experiment at ISOLDE/CERN. In the course of the work reported in this thesis it was possible to measure the absolute transition frequencies and the isotope shifts in the D1 line for the Be isotopes mentioned above with an accuracy of better than 2 MHz. Combination with the most recent calculations of the mass effect allowed the extraction of the nuclear charge radii of (7,10,11)^Be with a relative accuracy better than 1%. The nuclear charge radius decreases continuously from 7^Be to 10^Be and increases again for 11^Be. This result is compared with predictions of ab initio nuclear models, which reproduce the observed trend. In particular, the "Green's Function Monte Carlo" and the "Fermionic Molecular Dynamics" models show very good agreement.
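The extraction step mentioned above (isotope shift plus calculated mass shift gives the charge radius) can be sketched as follows. All numerical inputs in the example call are hypothetical placeholders; the real analysis uses the measured shifts, the computed mass shift, and the atomic field-shift factor for the Be^+ D1 line.

```python
import math

def radius_from_isotope_shift(r_ref_fm, iso_shift, mass_shift, field_factor):
    """Field-shift extraction of a nuclear charge radius.

    The part of the measured isotope shift (IS) not explained by the
    calculated mass shift (MS) is attributed to the field shift:
        delta<r^2> = (IS - MS) / F
    with F the field-shift factor. The radius then follows from the
    reference isotope's radius:  r = sqrt(r_ref^2 + delta<r^2>).
    IS, MS in MHz; F in MHz/fm^2; radii in fm.
    """
    dr2 = (iso_shift - mass_shift) / field_factor
    return math.sqrt(r_ref_fm ** 2 + dr2)
```

Because the mass shift dominates the total isotope shift in such light systems, both the measurement and the mass-shift calculation must be extremely accurate for the small field-shift residual to be meaningful, which is the point made in the abstract.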
Abstract:
Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures to automatically measure organ dimensions and compute clinically relevant functional indices. Research on UTC produced a CAD tool for prostate cancer detection to improve the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPU for real-time performance; (iii) the introduction of both an innovative Semi-Supervised Learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training that improve system performance while reducing data collection effort and avoiding waste of collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS the research developed a heart disease diagnostic tool based on Real-Time 3D Echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation.
Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
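For readers unfamiliar with level-set segmentation: the method evolves a function phi whose zero level traces the contour being segmented. A single explicit update step on a 2D grid might look like the sketch below (plain NumPy with central differences; the thesis' GPU framework is of course far more elaborate, with upwind schemes, image-driven speed terms and 3D data):

```python
import numpy as np

def level_set_step(phi, speed, dt=0.5, h=1.0):
    """One explicit Euler update of the level-set equation
        phi_t + F * |grad phi| = 0
    on a grid with spacing h. `speed` (F) may be a scalar or an array;
    in segmentation it is derived from the image so the front stops at
    object boundaries. Central differences keep the sketch short; a
    production scheme would use upwinding and a CFL-limited dt.
    """
    gy, gx = np.gradient(phi, h)
    return phi - dt * speed * np.sqrt(gx ** 2 + gy ** 2)
```

With phi a signed distance to a circle and a constant positive speed, the interior (phi < 0) region grows by about dt per step, i.e. the front advances along its normal, which is the behaviour the equation encodes.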
Abstract:
The conventional way of calculating hard scattering processes in perturbation theory using Feynman diagrams is not efficient enough to calculate all necessary processes - for example for the Large Hadron Collider - to a sufficient precision. Two alternatives to order-by-order calculations are studied in this thesis. In the first part we compare the numerical implementations of four different recursive methods for the efficient computation of Born gluon amplitudes: Berends-Giele recurrence relations and recursive calculations with scalar diagrams, with maximal helicity violating vertices, and with shifted momenta. Of the four methods considered, the Berends-Giele method performs best if the number of external partons is eight or larger; for fewer than eight external partons, the recursion relation with shifted momenta offers the best performance. When investigating the numerical stability and accuracy, we found that all methods give satisfactory results. In the second part of this thesis we present an implementation of a parton shower algorithm based on the dipole formalism. The formalism treats initial- and final-state partons on the same footing. The shower algorithm can be used for hadron colliders and electron-positron colliders. Massive partons in the final state were also included in the shower algorithm. Finally, we studied numerical results for an electron-positron collider, the Tevatron and the Large Hadron Collider.
Abstract:
Spin-polarization measurements on free electrons have remained challenging since their first realization by Mott. The relevant quantity of a spin polarimeter is its figure of merit, FoM = S^2*I/I_0, with the asymmetry function S and the ratio of scattered to primary intensity I/I_0. All previous devices are based on single-channel scattering (spin-orbit or exchange interaction), characterized by FoM = 10^(-4). Modern hemispherical analyzers, by contrast, allow efficient multichannel detection of the spin-integrated intensity with more than 10^4 simultaneously recorded data points. Comparing spin-resolved and spin-integrated electron spectroscopy, one thus finds an efficiency difference of 8 orders of magnitude. The present work deals with the development and investigation of a novel method for increasing the efficiency of spin-resolved electron spectroscopy by exploiting multichannel detection. The spin detector was integrated into a mu-metal-shielded UHV chamber and mounted behind a conventional hemispherical analyzer. The geometry of the electrostatic lens system was determined by electron-optical simulations. The basic concept relies on the k_perp-conserving elastic scattering of the (0,0) specular beam from a W(100) scattering crystal at an angle of incidence of 45°. It could be shown that about 960 data points (15 energy and 64 angle points) in an energy range of about 3 eV can be imaged simultaneously onto a delay-line detector. This leads to a two-dimensional figure of merit of FoM_2D = 1.7.
Compared with conventional spin detectors, the novel approach is thus characterized by an efficiency gain of 4 orders of magnitude. Measurements on Fe/MgO(100) and O p(1x1)/Fe(100) samples demonstrated the functionality of the new spin polarimeter by reproducing, with strongly reduced measurement time, the typical UPS results known from the literature. The high efficiency makes it possible to measure particularly reactive surfaces in a short time. This advantage has already been exploited in a first fundamental application: as a test of the validity of band-structure calculations for the Heusler compound Co_2MnGa, where good agreement between theory and experiment was found. With the multichannel spin filter, the foundation has been laid for a measurement method for electron-spin polarization improved by orders of magnitude, opening access to experiments that are not possible with previous single-channel detectors.
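The quoted efficiency gain follows directly from the two figures of merit given in the abstract:

```python
import math

fom_single = 1e-4   # conventional single-channel (Mott/exchange) detector
fom_2d = 1.7        # two-dimensional figure of merit reported here

gain = fom_2d / fom_single           # factor between the two detectors
orders = math.log10(gain)            # ~4.2, i.e. about 4 orders of magnitude
```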
Abstract:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow, and can therefore deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management, due to the potentially catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some sense the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless generally limited to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver.
Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in running time, especially on the large instances.
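One rigorous building block behind such feasibility certification is checking a candidate point against the constraints in exact rational arithmetic: a floating-point candidate is promoted to rationals so the "feasible" verdict does not depend on rounding. The sketch below is an illustration of that idea, not the thesis' actual algorithm (which also certifies points in the relative interior and handles infeasibility).

```python
from fractions import Fraction

def certify_point(A, b, x):
    """Exact check that A x <= b holds row by row.

    Promoting the (possibly floating-point) candidate x to Fractions makes
    every comparison exact, so a True result is a rigorous feasibility
    certificate for x; False only means this particular point fails.
    """
    xf = [Fraction(v) for v in x]
    for row, bi in zip(A, b):
        lhs = sum(Fraction(a) * v for a, v in zip(row, xf))
        if lhs > Fraction(bi):
            return False
    return True
```

Used this way, the expensive exact arithmetic is confined to a single verification pass, which matches the abstract's point that exactness is limited to critical places to stay computationally efficient.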
Abstract:
Dendritic cells (DCs) are the most potent cell type for the capture, processing, and presentation of antigens. They are able to activate naïve T cells as well as to initiate memory T-cell immune responses. T lymphocytes are key elements in eliciting cellular immunity against bacteria and viruses as well as in the generation of anti-tumor and anti-leukemia immune responses. Because of their central position in the immunological network, specific manipulations of these cell types provide promising possibilities for novel immunotherapies. Nanoparticles (NP), which have only recently been investigated for use as carriers of drugs or imaging agents, are well suited for therapeutic applications in vitro and also in vivo, since upon surface functionalization they can be addressed to cells with high target specificity. As a first prerequisite, an efficient in vitro labeling of cells with NP has to be established. In this work we developed protocols allowing an effective loading of human monocyte-derived DCs and primary antigen-specific T cells with newly designed NP without affecting biological cell functions. Polystyrene NP synthesized by the miniemulsion technique contained perylenemonoimide (PMI) as a fluorochrome, allowing the rapid determination of intracellular uptake by flow cytometry. To confirm intracellular localization, NP-loaded cells were analyzed by confocal laser scanning microscopy (cLSM) and transmission electron microscopy (TEM). Functional analyses of NP-loaded cells were performed by IFN-γ ELISPOT, 51Cr-release, and 3H-thymidine proliferation assays. In the first part of this study, we observed strong labeling of DCs with amino-functionalized NP. Even after 8 days, 95% of DCs had retained nanoparticles, with a median fluorescence intensity of 67% compared to day 1. NP loading influenced neither the expression of cell surface molecules specific for mature DCs (mDCs) nor the immunostimulatory capacity of mDCs.
This procedure also did not impair the capability of DCs for uptake, processing and presentation of viral antigens, which had not previously been shown for NP in DCs. In the second part of this work, the protocol was adapted to the very different conditions of T lymphocytes. We used leukemia-, tumor-, and allo-human leukocyte antigen (HLA) reactive CD8+ or CD4+ T cells as model systems. Our data showed that amino-functionalized NP were taken up very efficiently also by T lymphocytes, which usually have a lower capacity for NP incorporation than other cell types. In contrast to DCs, T cells released 70-90% of the incorporated NP during the first 24 h, which points to the need to escape from intracellular uptake pathways before export to the outside can occur. Preliminary data with biodegradable nanocapsules (NC) revealed that encapsulated cargo molecules could, in principle, escape from the endolysosomal compartment after loading into T lymphocytes. T cell function was not influenced by NP load at low to intermediate concentrations of 25 to 150 μg/mL. Overall, our data suggest that NP and NC are promising tools for the delivery of drugs, antigens, and other molecules into DCs and T lymphocytes.