876 results for Recall-Precision Curves
Abstract:
A rapid, sensitive and specific method for quantifying hydroxocobalamin in human plasma, using paracetamol as the internal standard (IS), is described. The analyte and the IS were extracted from plasma by liquid-liquid extraction with an organic solvent (ethanol 100%; -20°C). The extracts were analyzed by high performance liquid chromatography coupled with electrospray tandem mass spectrometry (HPLC-MS-MS). Chromatography was performed on a Prevail C8 3 μm analytical column (2.1×100 mm i.d.). The method had a chromatographic run time of 3.4 min and a linear calibration curve over the range 5-400 ng/mL (r>0.9983). The limit of quantification was 5 ng/mL. The method was also validated without the use of the internal standard. Intra-batch precision with the IS was 9.6%, 8.9%, 1.0% and 2.8%, versus 9.2%, 8.2%, 1.8% and 1.5% without the IS, for 5, 15, 80 and 320 ng/mL, respectively. Intra-batch accuracy with the IS was 108.9%, 99.9%, 98.9% and 99.0%, versus 101.1%, 99.3%, 97.5% and 92.5% without the IS, for the same concentrations. Inter-batch precision with the IS was 9.4%, 6.9%, 4.6% and 5.5%, versus 10.9%, 6.4%, 5.0% and 6.2% without the IS. Inter-batch accuracy with the IS was 101.9%, 104.1%, 103.2% and 99.7%, versus 94.4%, 101.2%, 101.6% and 96.0% without the IS. This HPLC-MS-MS procedure was used to assess the pharmacokinetics of hydroxocobalamin following an intramuscular injection of 5000 μg in healthy volunteers of both sexes (10 males and 10 females). The volunteers had the following clinical characteristics (by gender, expressed as mean ± SD [range]): males: age 32.40 ± 8.00 y [23.00-46.00], height 1.73 ± 0.07 m [1.62-1.85], body weight 72.48 ± 10.22 kg [60.20-88.00]; females: age 28.60 ± 9.54 y [18.00-44.00], height 1.60 ± 0.05 m [1.54-1.70], body weight 58.64 ± 6.09 kg [51.70-66.70]. The following pharmacokinetic parameters were obtained from the hydroxocobalamin plasma concentration vs. time curves: AUClast, T1/2, Tmax, Vd, Cl, Cmax and Clast. The values were 120 (± 25) ng/mL for Cmax, 2044 (± 641) ng.h/mL for AUClast, 8 (± 3.2) ng/mL for Clast, 38 (± 15.8) h for T1/2 and 2.5 (range 1-6) h for Tmax. Female volunteers presented a significantly (p=0.0136) lower AUClast (1706 ± 704 ng.h/mL) and a larger (p=0.0205) clearance (2.91 ± 1.41 L/h) than males (2383 ± 343 ng.h/mL and 1.76 ± 0.23 L/h, respectively). These pharmacokinetic differences could help explain the higher prevalence of vitamin B12 deficiency in female patients. The method validated well without the use of the internal standard, and this approach should be investigated for other HPLC-MS-MS methods.
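The parameters above are the standard outputs of non-compartmental analysis of a concentration-time profile. A minimal sketch of how such values can be computed (trapezoidal AUClast, log-linear terminal half-life), assuming an illustrative profile rather than the study's actual data:

```python
import numpy as np

def nca_parameters(t, c):
    """Basic non-compartmental PK quantities from one concentration-time
    profile: Cmax/Tmax, Clast, AUClast (linear trapezoidal rule) and the
    terminal half-life from a log-linear fit of the last sampling points."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    i = int(np.argmax(c))
    cmax, tmax, clast = c[i], t[i], c[-1]
    auc_last = float(np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0))
    slope = np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]  # ln C vs t, terminal phase
    t_half = np.log(2.0) / -slope                     # 4-point window is an illustrative choice
    return {"Cmax": cmax, "Tmax": tmax, "Clast": clast,
            "AUClast": auc_last, "T1/2": t_half}

# Illustrative profile only, not the study data (times in h, conc in ng/mL):
print(nca_parameters([0, 0.5, 1, 2.5, 4, 8, 24, 48, 72],
                     [0, 40, 75, 120, 110, 80, 45, 20, 9]))
```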
Abstract:
This work investigates the ductile tearing properties of a girth weld made of an API 5L X80 pipeline steel using experimentally measured crack growth resistance curves. Interest in these materials is motivated by the growing demand for high strength pipes in the oil and gas industry, including marine applications and steel catenary risers. Testing of the pipeline girth welds employed side-grooved, clamped SE(T) specimens and shallow-crack bend SE(B) specimens with a weld centerline notch to determine the crack growth resistance curves, based upon the unloading compliance (UC) method with the single-specimen technique. Recently developed compliance functions and η-factors applicable to SE(T) and SE(B) fracture specimens with homogeneous material and overmatched welds are introduced to determine crack growth resistance data from laboratory load-displacement records.
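In an η-factor formulation of the unloading compliance method, each partial unloading yields one point of the J-R curve: an elastic term from the stress intensity factor and a plastic term from the area under the load-displacement record. A minimal sketch of that evaluation in a generic ASTM E1820-style form (the specimen-specific compliance functions and η-factors developed in the work are not reproduced here; all numbers are illustrative):

```python
def j_at_unloading(K, E_prime, eta_pl, A_pl, B_N, b):
    """One J-R curve point: elastic part from the stress-intensity factor K
    (E' = E/(1 - nu^2) in plane strain), plastic part from the plastic area
    A_pl under the load-displacement record via the eta-factor, with net
    specimen thickness B_N and remaining ligament b = W - a."""
    J_el = K ** 2 / E_prime
    J_pl = eta_pl * A_pl / (B_N * b)
    return J_el + J_pl

# Illustrative values only (N and mm units, so J is in N/mm):
print(j_at_unloading(K=1500.0, E_prime=226000.0, eta_pl=2.2,
                     A_pl=9000.0, B_N=18.0, b=10.0))
```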
Abstract:
Hierarchical multi-label classification is a complex classification task in which the classes are hierarchically structured and each example may simultaneously belong to more than one class at each level of the hierarchy. In this paper, we extend our previous work, in which we investigated a new local-based classification method that incrementally trains a multi-layer perceptron for each level of the classification hierarchy. Predictions made by the neural network at a given level are used as inputs to the neural network responsible for prediction at the next level. We compare the proposed method with a state-of-the-art global decision-tree induction method and two local decision-tree induction methods on several hierarchical multi-label classification datasets. A thorough experimental analysis shows that our method obtains results competitive with a robust global method with respect to both precision and recall.
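A minimal sketch of this local, per-level scheme, assuming scikit-learn's MLPClassifier with binary indicator targets; the hidden layer size, the 0.5 threshold, and the use of predicted probabilities as the inter-level signal are illustrative choices, not the exact architecture of the paper:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

class LocalHMCNetwork:
    """One MLP per hierarchy level; the predictions of level i are
    appended to the input features of level i+1."""

    def __init__(self, n_levels):
        self.nets = [MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
                     for _ in range(n_levels)]

    def fit(self, X, Y_levels):
        # Y_levels[i]: (n_samples, n_classes_at_level_i) binary matrix
        inp = X
        for net, Y in zip(self.nets, Y_levels):
            net.fit(inp, Y)
            inp = np.hstack([inp, net.predict_proba(inp)])
        return self

    def predict(self, X, threshold=0.5):
        inp, out = X, []
        for net in self.nets:
            proba = net.predict_proba(inp)
            out.append(proba >= threshold)   # multi-label decision per level
            inp = np.hstack([inp, proba])
        return out
```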
Abstract:
We present a new approach to performing calculations with certain standard classes in the cohomology of the moduli spaces of curves. It is based on an important lemma of Ionel relating the intersection theory of the moduli space of curves to that of the space of admissible coverings. As particular results, we obtain expressions of Hurwitz numbers in terms of intersections in the tautological ring, expressions of the simplest intersection numbers in terms of Hurwitz numbers, an algorithm for calculating certain correlators that are the subject of the Witten conjecture, an improved algorithm for intersections related to the Boussinesq hierarchy, expressions for the Hodge integrals over two-pointed ramification cycles, cut-and-join type equations for a large class of intersection numbers, etc.
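For orientation, the best-known expression of Hurwitz numbers through tautological intersections is the ELSV formula, quoted here in a common normalization ($\mu$ has $n$ parts, $|\mu|=\sum_i\mu_i$, and $\lambda_j$, $\psi_i$ are the standard Hodge and cotangent-line classes):

```latex
h_{g;\mu} \;=\; \frac{(2g-2+n+|\mu|)!}{\#\mathrm{Aut}(\mu)}\,
\prod_{i=1}^{n}\frac{\mu_i^{\mu_i}}{\mu_i!}\,
\int_{\overline{\mathcal{M}}_{g,n}}
\frac{1-\lambda_1+\lambda_2-\cdots+(-1)^{g}\lambda_g}
     {\prod_{i=1}^{n}\bigl(1-\mu_i\psi_i\bigr)}
```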
Abstract:
A regional envelope curve (REC) of flood flows summarises the current bound on our experience of extreme floods in a region. RECs are available for most regions of the world. Recent scientific papers introduced a probabilistic interpretation of these curves and formulated an empirical estimator of the recurrence interval T associated with a REC, which, in principle, enables us to use RECs for design purposes in ungauged basins. The aim of this work is twofold. First, it extends the REC concept to extreme rainstorm events by introducing Depth-Duration Envelope Curves (DDECs), defined as the regional upper bound on all record rainfall depths observed to date for various rainfall durations. Second, it adapts the probabilistic interpretation proposed for RECs to DDECs and assesses the suitability of these curves for estimating the T-year rainfall event associated with a given duration and large T values. Probabilistic DDECs are complementary to regional frequency analysis of rainstorms, and their use in combination with a suitable rainfall-runoff model can provide useful indications of the magnitude of extreme floods for gauged and ungauged basins. The study focuses on two national datasets: the peak-over-threshold (POT) series of rainfall depths with durations of 30 min and 1, 3, 9 and 24 h obtained for 700 Austrian raingauges, and the annual maximum series (AMS) of rainfall depths with durations spanning from 5 min to 24 h collected at 220 raingauges located in northern-central Italy. The estimation of the recurrence interval of a DDEC requires the quantification of the equivalent number of independent data, which, in turn, is a function of the cross-correlation among sequences. While the quantification and modelling of intersite dependence is a straightforward task for AMS series, it may be cumbersome for POT series. This paper proposes a possible approach to address this problem.
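To illustrate the role of the equivalent number of independent data, here is a purely schematic sketch: the nominal sample size (sites × years) is shrunk by the average intersite cross-correlation via a textbook equicorrelation "design effect" rule, and a plotting-position-style recurrence interval is attached to the single record value that defines the envelope. This is an illustration of the idea only, not the estimator proposed in the paper:

```python
def effective_sample_size(n_sites, n_years, mean_rho):
    """Reduce the nominal sample size n_sites * n_years by the average
    intersite cross-correlation mean_rho (equicorrelation design-effect
    rule, used here only to illustrate the concept)."""
    n_nominal = n_sites * n_years
    return n_nominal / (1.0 + mean_rho * (n_sites - 1))

def envelope_recurrence_interval(n_eff):
    """Plotting-position-style recurrence interval attached to the largest
    standardized depth, i.e. the point pinning the envelope curve."""
    return n_eff + 1.0

# Illustrative numbers: 220 gauges, 40 years each, mean cross-correlation 0.3
n_eff = effective_sample_size(220, 40, 0.3)
print(n_eff, envelope_recurrence_interval(n_eff))
```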
Abstract:
Traceability is often perceived by food industry executives as an additional cost of doing business, one to be avoided if possible. However, a traceability system can in fact help a firm comply with regulatory requirements, increase food safety and recall performance, and improve marketing performance and supply chain management. Traceability thus affects the business performance of firms through the costs and benefits determined by traceability practices. These costs and benefits are in turn shaped by factors such as the firm's characteristics, the level of traceability and the costs and benefits perceived prior to traceability implementation. This thesis was undertaken to understand how these factors are linked and how they affect the resulting costs and benefits. Analysis of the results of a plant-level survey of the Italian ichthyic processing industry revealed that processors generally adopt various levels of traceability, while government support appears to increase the level of traceability as well as the expected and actual costs and benefits. None of the firms' characteristics, with the exception of government support, influences costs or the level of traceability. Only firm size and the level of QMS certification are linked with benefits, while the precision of traceability increases benefits without affecting costs. Finally, traceability practices appear to be driven by requests from "external" stakeholders such as government, authorities and customers, rather than by "internal" factors (e.g. improving firm management), and the traceability system does not provide any added value in the market in terms of price premium or market share increase.
Abstract:
Ren and colleagues (2006) found that saccades to visual targets became less accurate when somatosensory information about hand location was added, suggesting that saccades rely mainly on vision. We conducted two kinematic experiments to examine whether or not reaching movements would also show such strong reliance on vision. In Experiment 1, subjects used their dominant right hand to perform reaches, with or without a delay, to an external visual target or to their own left fingertip positioned either by the experimenter or by the participant. Unlike saccades, reaches became more accurate and precise when proprioceptive information was available. In Experiment 2, subjects reached toward external or bodily targets with differing amounts of visual information. Proprioception improved performance only when vision was limited. Our results indicate that reaching movements, unlike saccades, are improved rather than impaired by the addition of somatosensory information.
Abstract:
Precision horticulture and spatial analysis applied to orchards are a growing and evolving part of precision agriculture technology. The aim of this discipline is to reduce production costs by monitoring and analysing orchard-derived information to improve crop performance in an environmentally sound manner. Georeferencing and geostatistical analysis, coupled with point-specific data mining, make it possible to devise and implement management decisions tailored to the single orchard. Potential applications include verifying in real time, throughout the season, the effectiveness of cultural practices in achieving production targets in terms of fruit size, number, yield and, in the near future, fruit quality traits. These data will impact not only the pre-harvest stage but also the post-harvest sector of the fruit chain. Chapter 1 provides an updated overview of precision horticulture, while Chapter 2 presents a preliminary spatial statistical analysis of the variability in apple orchards before and after manual thinning, together with an interpretation of this variability and how it can be managed to maximize orchard performance. Chapter 3 then stratifies spatial data into management classes to interpret and manage spatial variation in the orchard; an inverse model approach is also applied to verify whether crop production explains environmental variation. Chapter 4 presents an integration of the techniques adopted earlier and offers a new key for reading the information gathered in the field. The overall goal of this Dissertation was to probe the feasibility, desirability and effectiveness of a precision approach to fruit growing, following the lines of other areas of agriculture that have already adopted this management tool. As existing applications of precision horticulture have shown, crop specificity is an important factor to account for. This work focused on apple because of its importance in the area where the work was carried out, and worldwide.
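A basic building block of such geostatistical analysis is the empirical semivariogram of a georeferenced variable, for example fruit size per tree. A minimal sketch, with lag bins, tolerance and the simulated data chosen purely for illustration:

```python
import numpy as np

def empirical_semivariogram(xy, z, lags, tol):
    """Empirical semivariogram: gamma(h) = 0.5 * mean((z_i - z_j)^2) over
    pairs of points whose separation distance falls in each lag bin."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-pairs
    dz2 = (z[:, None] - z[None, :]) ** 2
    gamma = np.full(len(lags), np.nan)
    for k, h in enumerate(lags):
        mask = (d > h - tol) & (d <= h + tol)
        if mask.any():
            gamma[k] = 0.5 * dz2[mask].mean()
    return gamma

# Illustrative use: tree positions (m) with a weak spatial trend in fruit size (mm)
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(60, 2))
z = 70 + 0.05 * xy[:, 0] + rng.normal(0, 2, size=60)
print(empirical_semivariogram(xy, z, lags=[5, 10, 20, 40], tol=2.5))
```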
Abstract:
«Fiction of the frontier». Phenomenology of an open form/voice. Francesco Giustini's PhD dissertation belongs to a line of research usually neglected by literary criticism but arousing much interest in recent years: the relationship between Literature and Space. In this context, the specific subject of his work is the category of the Frontier and its several implications for twentieth-century fiction. The preliminary step, at the beginning of the first section of the dissertation, is a semantic analysis: Giustini precisely describes the meaning of the word "frontier" as it is declined in a multiplicity of cultural, political and geographical contexts, from the American frontier of the pioneers who headed West, to the exotic frontiers of the world with which imperialist colonization came into contact; from semi-uninhabited areas like deserts, highlands and virgin forests, to the ethnic frontiers between Indian and white people in South America, and to the internal frontiers of countries, such as those between district and capital city, centre and outskirts. In the next step, Giustini focuses on a true "myth of the frontier", capable of nourishing the cultural and literary imagination. Indeed, literature has told the frontier and chosen it as the scenery for many stories; especially in the twentieth century, it made the frontier a problematic space in the light of events and changes that transformed the perception of space and our relationship with it. The dissertation therefore proposes a critical category, tracing the hallmarks of a specific literary phenomenon defined as the "Fiction of the frontier" and present in many literary traditions during the twentieth century. The term "Fiction" (rather than "Literature" or "Poetics") defines not a genre but a "procedure", centred on a constant issue emerging from the texts examined in this work: the strong call to the act of narration and to its oral traditions. The "Fiction of the Frontier" is understood as an approach to the world, a way of watching and feeling things, an emotion that is lived and told through the story, in which the narrator, through his body and his voice, takes on the role of witness. The following parts, analytic in style, are built on this theoretical and methodological reflection. The second section gives a wide range of examples in which the figure and myth of the frontier can be found, through textual analyses ranging over several literary traditions, from monographic chapters (García Márquez, Callado, McCarthy) to comparative readings of pairs of texts (Calvino and Vargas Llosa, Buzzati and Coetzee, Arguedas and Rulfo). The selection of texts is presented so as to underline, in each reading, a particular aspect or form of the frontier. This section is articulated into thematic entries recalling actions that can be undertaken in the ambiguous and liminal space of the frontier (to communicate, to wait, to "trans-culturate", to imagine, to inhabit, to not-inhabit). In this phenomenology, the frontier emerges as a physical and concrete element or as a cultural, imaginary, linguistic, ethnic and existential category. Finally, the third section offers a more detailed analysis of two authors considered fundamental for the comprehension of the "Fiction of the frontier": Joseph Conrad and João Guimarães Rosa.
Though very different and belonging to unlike literary traditions, these two authors show many connections, which the comparative analysis points out. Conrad is perhaps the first author to grasp the feeling of the frontier, freeing himself from the adventure romance and from the exotic nineteenth-century tradition. João Guimarães Rosa, in his turn, is the great narrator of the Brazilian and South American frontier, the man of the sertão and of the endless spaces of central Brazil. His work is strongly linked to that of the author of Heart of Darkness.
Abstract:
In this work, spatially and energy-resolved investigations of ferroelectric electron emission (FEE) were carried out for the first time. Triglycine sulfate (TGS) served as the model system, and emission electron microscopy was used as the spectromicroscopic method. Typical switching fields were 2 kV/mm, applied as a sinusoidal AC voltage at 300 Hz. The temperature at which the FEE vanishes (32°C) lies below the Curie temperature of TGS (TC=49°C). This difference can be attributed to the influence of the extraction field of the emission electron microscope (1 kV/mm). Above the Curie temperature, no emission could be observed. The electrodes were identical before and after the measurement, i.e. not destroyed, as one would expect had a surface plasma been ignited. At about 150 V/mm the intensity of the observed emission begins to fluctuate; this could be caused by the onset of first ignitions of a microplasma with destructive effect. The spatially integrated energy distribution shows two maxima at voltage amplitudes up to 300 V, pointing to two emission mechanisms: a secondary effect (about 10 eV) and a primary effect (about 13 to 45 eV). Up to 200 V, the high-energy cut-off edges correlate with the applied voltage amplitude to within a few eV. The measurement of the spatially resolved energy distribution shows that the primary emission originates from the regions without an electrode; it is attributed to the FEE. Owing to the local fields, these electrons can be accelerated onto the electrodes and trigger secondary processes there (the low-energy part of the spectrum). This is confirmed by the local spectra of these regions.
Abstract:
The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data-flow coming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD Thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of the different observing sites involved and the huge number of frames expected (≃100000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment; particular care must be taken to test the capabilities of each telescope/instrument combination (through the "instrument familiarization plan") and to devise methods to keep under control, and if necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of imaging SPSS data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light curve production and analysis.
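The core measurement behind such catalogues is circular-aperture photometry with a local sky estimate from a surrounding annulus. A schematic sketch of that single operation (not the actual pipeline; aperture and annulus radii are illustrative):

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Sum the counts in a circular aperture of radius r_ap centred on
    (x0, y0) and subtract the median sky estimated in the annulus
    r_in <= r < r_out, returning the sky-subtracted flux in counts."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    sky = np.median(image[(r >= r_in) & (r < r_out)])
    ap = r < r_ap
    return image[ap].sum() - sky * ap.sum()

# Illustrative use on a synthetic frame with one star at (50, 40):
rng = np.random.default_rng(1)
frame = rng.normal(100.0, 5.0, size=(96, 96))          # flat sky + noise
frame[38:43, 48:53] += 400.0                           # crude stellar blob
print(aperture_photometry(frame, x0=50, y0=40, r_ap=6, r_in=10, r_out=16))
```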
Abstract:
This thesis deals with the development and construction of an experiment for the high-precision determination of the g-factor of electrons bound in highly charged ions. The g-factor of a particle is a dimensionless constant that describes the strength of its interaction with a magnetic field. In the case of an electron bound to a highly charged ion, it serves as one of the most precise tests of bound-state quantum electrodynamics (BS-QED). The measurement is carried out in a triple Penning-trap system and is based on the continuous Stern-Gerlach effect. The first part of this thesis reviews the current state of knowledge on magnetic moments and motivates the chosen experimental setup. The experimental requirements and the measurement techniques used are then explained. Charge breeding of the ions - one of the most important tasks of this work - is presented; its realization is based on a field-emission point array, which also allows the measurement of the cross section for electron-impact ionization. The last part of the thesis is devoted to the design and construction of the Penning-trap system and to the implementation of the detection process. At present, the setup for the production of highly charged ions and the associated g-factor measurement is complete, including the control program for the first data taking. Ion production and charge breeding will be the next steps.
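For orientation, in Penning-trap measurements of this kind the g-factor is commonly extracted from the ratio of the Larmor precession frequency $\nu_L$ of the bound electron to the cyclotron frequency $\nu_c$ of the ion, which cancels the magnetic field $B$ (standard relation, stated here as background; $q$ and $M$ denote the ion's charge and mass):

```latex
\nu_L = \frac{g}{2}\,\frac{e\,B}{2\pi m_e},
\qquad
\nu_c = \frac{q\,B}{2\pi M}
\quad\Longrightarrow\quad
g = 2\,\frac{\nu_L}{\nu_c}\,\frac{q\,m_e}{e\,M}
```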
Abstract:
Synthetic biology has recently undergone great development: many papers have been published and many applications presented, spanning from the production of biopharmaceuticals to the synthesis of bioenergetic substrates or industrial catalysts. Despite these advances, however, most applications are quite simple and do not fully exploit the potential of the discipline. This limitation in complexity has many causes, such as the incomplete characterization of some components or the intrinsic variability of biological systems, but one of the most important is the inability of the cell to sustain the additional metabolic burden introduced by a complex circuit. The objective of the project of which this work is part is to address this problem by engineering a multicellular behaviour in prokaryotic cells. Such a system introduces a cooperative behaviour that makes it possible to implement complex functionalities that cannot be obtained with a single cell. In particular, the goal is to implement Leader Election, a procedure first devised in the field of distributed computing to identify a single process as organizer and coordinator of a series of tasks assigned to the whole population. The election of a Leader greatly simplifies the computation by providing centralized control. Furthermore, this system may even be useful for evolutionary studies that aim to explain how complex organisms evolved from unicellular systems. The work presented here describes, in particular, the design and the experimental characterization of a component of the circuit that solves the Leader Election problem. This module, composed of a hybrid promoter and a gene, is activated in the non-leader cells upon receiving the signal that a leader is present in the colony. The most important element, in this case, is the hybrid promoter; it has been realized in different versions, applying the heuristic rules stated in [22], and their activity has been experimentally tested. The objective of the experimental characterization was to test the response of the genetic circuit to the introduction, in the cellular environment, of particular molecules, inducers, which can be considered the inputs of the system. The desired behaviour is similar to that of a logic AND gate, in which the output, represented by the fluorescence signal produced by a fluorescent protein, is high only in the presence of both inducers. The robustness and stability of this behaviour were tested by changing the concentrations of the input signals and building dose-response curves. From these data it is possible to conclude that the analysed constructs have an AND-like behaviour over a wide range of inducer concentrations, even though many differences can be identified in the expression profiles of the different constructs. This variability reflects the fact that the input and output signals are continuous, so that a binary representation cannot capture the full complexity of the behaviour. The module of the circuit considered in this analysis has a fundamental role in the realization of the intercellular communication system necessary for the cooperative behaviour to take place. For this reason, the second phase of the characterization focused on the analysis of signal transmission; in particular, the interaction between this element and the one responsible for emitting the chemical signal was tested.
The desired behaviour is still similar to a logic AND gate, since, even in this case, the output signal is determined by the hybrid promoter activity. The experimental results demonstrated that the systems behave correctly, even if there is still substantial variability between them. The dose-response curves highlighted that stricter constraints on the inducer concentrations need to be imposed in order to obtain a clear separation between the two levels of expression. In the concluding chapter, the DNA sequences of the hybrid promoters are analysed in an attempt to identify the regulatory elements most important for determining gene expression; given the available data, it was not possible to draw definitive conclusions. Finally, a few considerations on promoter engineering and the realization of complex circuits are presented; this section briefly recalls some of the problems outlined in the introduction and provides a few possible solutions.
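A common way to rationalize such dose-response data is to model the hybrid-promoter output as a product of Hill terms, one per inducer, so that the output is high only when both inputs are high. A minimal sketch of that model (all parameter values are illustrative, not fitted to the constructs characterized in the thesis):

```python
import numpy as np

def hill(x, K, n):
    """Hill activation term: 0 at x = 0, 0.5 at x = K, -> 1 for large x."""
    return x ** n / (K ** n + x ** n)

def and_gate_output(ind1, ind2, K1=10.0, K2=5.0, n1=2.0, n2=2.0, basal=0.02):
    """Fluorescent output of an AND-like hybrid promoter modelled as the
    product of two Hill terms, plus a small basal (leaky) expression."""
    return basal + (1.0 - basal) * hill(ind1, K1, n1) * hill(ind2, K2, n2)

# Dose-response surface over a log-spaced grid of inducer concentrations:
c1 = np.logspace(-2, 3, 50)
c2 = np.logspace(-2, 3, 50)
surface = and_gate_output(c1[:, None], c2[None, :])
print(surface.min(), surface.max())   # low corner vs. both-inducers corner
```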
Abstract:
This thesis deals with algebraic cycles on complex abelian varieties of dimension 4. Its goal is to construct a non-trivial element in $\mathrm{Griff}^{3,2}(A^4)$, where $A^4$ denotes the \emph{generic} abelian variety of dimension 4 with a polarization of type $(1,2,2,2)$. The first three chapters review elementary definitions and notions and thereby fix the notation. There we recall elementary properties of the filtrations $F_S$ and $Z$ on the Chow groups defined by Saito (cf. \cite{Sa0} and \cite{Sa}). We also recall a relation, originating in \cite{Mu}, between the $F_S$-filtration and Beauville's decomposition of the Chow groups (cf. \cite{Be2} and \cite{DeMu}). The most important notions in this part are the \emph{higher Griffiths groups} and the \emph{infinitesimal invariants of higher order}. We then turn to \emph{generalized Prym varieties} associated with $(2:1)$ coverings of curves: we give their construction and their important geometric properties, and we compute the type of their polarization. Chapter \ref{p-moduli} contains a result from \cite{BCV} on the dominance of the map $p(3,2)\colon \mathcal{R}(3,2)\longrightarrow \mathcal{A}_4(1,2,2,2)$. This result is relevant for us because it says that the generic abelian variety of dimension 4 with a polarization of type $(1,2,2,2)$ is a generalized Prym variety associated with a $(2:1)$ covering of a curve of genus $7$ over a curve of genus $3$. The second part of the dissertation is the actual original work and is structured as follows. Chapter \ref{Deg} contains the construction of the degeneration of $A^4$: in this chapter we construct a family $X\longrightarrow S$ of generalized Prym varieties such that the classifying map $S\longrightarrow \mathcal{A}_4(1,2,2,2)$ is dominant. Furthermore, a relative cycle $Y/S$ on $X/S$ is constructed, together with a subvariety $T\subset S$ such that we can give an explicit description of the embedding $Y\vert_T\hookrightarrow X\vert_T$. The last and most important chapter contains the following: we prove that the \emph{second-order infinitesimal invariant} $\delta_2(\alpha)$ of $\alpha$ is non-trivial, where $\alpha$ denotes the component of $Y$ in $Ch^3_{(2)}(X/S)$ under the Beauville decomposition. With this, and with the help of the results of Chapter \ref{Cohm}, we can show that
\[ 0\neq [\alpha] \in \mathrm{Griff}^{3,2}(X/S). \]
We can refine this statement and show (cf. Theorem \ref{a4}):
\begin{theorem}\label{maintheorem}
For generic $s\in S$,
\[ 0\neq [\alpha_s] \in \mathrm{Griff}^{3,2}(A^4), \]
where $A^4$ is the generic abelian variety of dimension $4$ with a polarization of type $(1,2,2,2)$.
\end{theorem}
Abstract:
This thesis provides efficient and robust algorithms for computing the intersection curve between a torus and a simple surface (e.g. a plane, a natural quadric or another torus), based on algebraic and numeric methods. The algebraic part includes the classification of the topological type of the intersection curve and the detection of degenerate situations such as embedded conic sections and singularities. Moreover, reference points are determined for each connected component of the intersection curve. The required computations are realised efficiently, by solving polynomials of degree at most four, and exactly, by using exact arithmetic. The numeric part includes algorithms for tracing each intersection curve component, starting from the previously computed reference points. Using interval arithmetic, accidental incorrectness such as jumping between branches or skipping parts of the curve is prevented, and the neighbourhoods of singularities are treated correctly. Our algorithms are complete in the sense that any kind of input can be handled, including degenerate and singular configurations. They are verified, since the results are topologically correct and approximate the real intersection curve up to any given error bound. The algorithms are robust, since no human intervention is required, and they are efficient in that the treatment of algebraic equations of high degree is avoided.
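As an illustration of the numeric tracing stage, here is a minimal predictor-corrector step along the intersection curve of two implicit surfaces f = 0 and g = 0, written in plain floating point; the interval-arithmetic safeguards and the exact algebraic preprocessing of the thesis are deliberately omitted from this sketch:

```python
import numpy as np

def trace_step(p, grad_f, grad_g, f, g, h=1e-2, newton_iters=5):
    """One predictor-corrector step: walk a distance h along the curve
    tangent (the cross product of the two surface gradients), then project
    back onto the curve with a few Gauss-Newton corrections."""
    t = np.cross(grad_f(p), grad_g(p))
    p = p + h * t / np.linalg.norm(t)        # predictor along the tangent
    for _ in range(newton_iters):            # corrector: least-squares solve J d = -F
        F = np.array([f(p), g(p)])
        J = np.vstack([grad_f(p), grad_g(p)])
        d, *_ = np.linalg.lstsq(J, -F, rcond=None)
        p = p + d
    return p

# Illustrative use: intersect the unit sphere with the plane z = 0.5
f = lambda p: p[0]**2 + p[1]**2 + p[2]**2 - 1.0
g = lambda p: p[2] - 0.5
grad_f = lambda p: 2.0 * p
grad_g = lambda p: np.array([0.0, 0.0, 1.0])
p0 = np.array([np.sqrt(0.75), 0.0, 0.5])    # a point on the intersection circle
print(trace_step(p0, grad_f, grad_g, f, g))
```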