880 results for Analytical geometry
Abstract:
Owing to its composition, ultra-high-performance concrete (UHPC) has a very high compressive strength of 150 to more than 200 N/mm² and an exceptionally dense, impermeable structure. This enables applications in highly loaded areas and where high demands are placed on the durability of the material. At the same time, UHPC behaves in a very brittle manner once its strength is reached. To prevent explosive failure, fibers are added to a UHPC mix or confinement with steel tubes is provided. Adding fibers to the concrete matrix influences not only the deformation capacity but also the load-bearing capacity of the UHPC. The failure of the fibers depends on fiber geometry, fiber content, bond behavior and the tensile strength of the fiber, and is characterized by fiber pull-out or fiber rupture. To ensure the load-bearing capacity, conventional reinforcement therefore cannot be dispensed with except in very thin members. Within priority program SPP 1182 of the German Research Foundation (DFG), the research project underlying this thesis investigated how the shear behavior of UHPC members with combined shear reinforcement can be described and whether existing shear models can be transferred to UHPC. In addition to a comprehensive survey of existing shear models for reinforced concrete members without shear reinforcement and with various types of shear reinforcement, experimental investigations of the shear behavior of UHPC beams with different shear reinforcement form the starting point of the present work. The experimental program comprised ten shear tests on UHPC beams. These beams were identical in dimensions and flexural tensile reinforcement; they differed only in the type of shear reinforcement.
The types of shear reinforcement comprised steel fibers alone, vertical stirrup bars alone, a combination of steel fibers and vertical bars, and one beam without any shear reinforcement. Although the fiber contents chosen for the beams investigated in this project led to strain-softening post-cracking behavior of the fiber-reinforced concrete, the beam tests showed that adding steel fibers increased the shear capacity. Because the shear reinforcement configuration was varied while the beams were otherwise identical, a quantitative estimate of the individual load-bearing contributions could also be derived from the tests. The profiled cross-section revealed a strong influence on the shear behavior in the post-peak range: a relatively stable load level after reaching the peak load could be attributed to Vierendeel action. Based on these test results and analytical considerations of existing shear models, an additive modeling approach was formulated to describe the shear behavior of UHPC beams with combined shear reinforcement of steel fibers and vertical bars. Established approaches were used to formulate the contributions of the concrete cross-section and of the conventional shear reinforcement. The fiber contribution was determined from the fiber efficiency, and the post-peak load level due to Vierendeel action follows from geometric considerations.
Abstract:
The report addresses the problem of visual recognition under two sources of variability: geometric and photometric. The geometric deals with the relation between 3D objects and their views under orthographic and perspective projection. The photometric deals with the relation between 3D matte objects and their images under changing illumination conditions. Taken together, an alignment-based method is presented for recognizing objects viewed from arbitrary viewing positions and illuminated by arbitrary settings of light sources.
Abstract:
Biological systems exhibit rich and complex behavior through the orchestrated interplay of a large array of components. It is hypothesized that separable subsystems with some degree of functional autonomy exist; deciphering their independent behavior and functionality would greatly facilitate understanding the system as a whole. Discovering and analyzing such subsystems are hence pivotal problems in the quest to gain a quantitative understanding of complex biological systems. In this work, using approaches from machine learning, physics and graph theory, methods for the identification and analysis of such subsystems were developed. A novel methodology, based on a recent machine learning algorithm known as non-negative matrix factorization (NMF), was developed to discover such subsystems in a set of large-scale gene expression data. This set of subsystems was then used to predict functional relationships between genes, and this approach was shown to score significantly higher than conventional methods when benchmarking them against existing databases. Moreover, a mathematical treatment was developed to treat simple network subsystems based only on their topology (independent of particular parameter values). Application to a problem of experimental interest demonstrated the need for extensions to the conventional model to fully explain the experimental data. Finally, the notion of a subsystem was evaluated from a topological perspective. A number of different protein networks were examined to analyze their topological properties with respect to separability, seeking to find separable subsystems. These networks were shown to exhibit separability in a nonintuitive fashion, while the separable subsystems were of strong biological significance. It was demonstrated that the separability property found was not due to incomplete or biased data, but is likely to reflect biological structure.
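The NMF-based subsystem discovery described above can be sketched in a few lines (a minimal illustration using scikit-learn on random data; the component count, initialization and toy matrix are assumptions, not the thesis's actual pipeline):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Toy "gene expression" matrix: 30 genes x 10 conditions, non-negative.
X = rng.random((30, 10))

# Factor X ~ W @ H; columns of W group genes into k putative subsystems.
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # gene-by-subsystem loadings
H = model.components_        # subsystem-by-condition activity profiles

# Assign each gene to its dominant subsystem.
membership = W.argmax(axis=1)
```

Genes sharing a dominant component would then be candidates for a functional relationship, which is the kind of prediction benchmarked against existing databases above.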
Abstract:
Caches are known to consume up to half of all system power in embedded processors. Co-optimizing performance and power of the cache subsystems is therefore an important step in the design of embedded systems, especially those employing application-specific instruction processors. In this project, we propose an analytical cache model that succinctly captures the miss performance of an application over the entire cache parameter space. Unlike exhaustive trace-driven simulation, our model requires that the program be simulated only once so that a few key characteristics can be obtained. Using these application-dependent characteristics, the model can span the entire cache parameter space consisting of cache sizes, associativity and cache block sizes. In our unified model, we are able to cater for direct-mapped, set-associative and fully associative instruction, data and unified caches. Validation against full trace-driven simulations shows that our model has a high degree of fidelity. Finally, we show how the model can be coupled with a power model for caches so that one can very quickly determine Pareto-optimal performance-power design points for rapid design space exploration.
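As a hypothetical illustration of the "simulate once, then span the parameter space" idea, the sketch below computes LRU stack (reuse) distances in one pass over a toy address trace; miss counts for any fully associative LRU capacity then follow without re-simulation. This is a standard stack-distance argument, not the unified model proposed in the project:

```python
from collections import OrderedDict

def reuse_distances(trace):
    """LRU stack distance of each access (None marks a first touch)."""
    stack = OrderedDict()          # most recently used key is last
    dists = []
    for addr in trace:
        if addr in stack:
            keys = list(stack)
            # Distance = number of distinct addresses touched since
            # the previous access to addr.
            dists.append(len(keys) - 1 - keys.index(addr))
            stack.move_to_end(addr)
        else:
            dists.append(None)     # compulsory miss
            stack[addr] = True
    return dists

def misses(trace, cache_lines):
    """An access misses iff its reuse distance is >= the LRU capacity."""
    return sum(1 for d in reuse_distances(trace)
               if d is None or d >= cache_lines)

trace = ["a", "b", "c", "a", "b", "d", "a"]
```

Because `reuse_distances` is computed once, `misses(trace, c)` can be evaluated cheaply for every candidate capacity `c`, which is the flavor of single-pass characterization the abstract describes.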
Abstract:
The present success in the manufacture of multi-layer interconnects in ultra-large-scale integration is largely due to the acceptable planarization capabilities of the chemical-mechanical polishing (CMP) process. In the past decade, copper has emerged as the preferred interconnect material. The greatest challenge in Cu CMP at present is the control of wafer surface non-uniformity at various scales. As the size of a wafer has increased to 300 mm, the wafer-level non-uniformity has assumed critical importance. Moreover, the pattern geometry in each die has become quite complex due to a wide range of feature sizes and multi-level structures. Therefore, it is important to develop a non-uniformity model that integrates wafer-, die- and feature-level variations into a unified, multi-scale dielectric erosion and Cu dishing model. In this paper, a systematic way of characterizing and modeling dishing in the single-step Cu CMP process is presented. The possible causes of dishing at each scale are identified in terms of several geometric and process parameters. The feature-scale pressure calculation based on the step-height at each polishing stage is introduced. The dishing model is based on pad elastic deformation and the evolving pattern geometry, and is integrated with the wafer- and die-level variations. Experimental and analytical means of determining the model parameters are outlined and the model is validated by polishing experiments on patterned wafers. Finally, practical approaches for minimizing Cu dishing are suggested.
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition can be refined, so that the probability density would represent a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), along with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, very elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combining statistical information, as in Bayesian updating or in combining likelihood and robust M-estimation functions, amounts to simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turns out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
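The centered log-ratio (clr) transform and the Aitchison distance mentioned above can be illustrated directly (a minimal numpy sketch; the example compositions are arbitrary):

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of a composition with positive parts:
    log of each part divided by the geometric mean of all parts."""
    x = np.asarray(x, float)
    g = np.exp(np.mean(np.log(x)))   # geometric mean
    return np.log(x / g)

def aitchison_distance(x, y):
    """Aitchison distance = Euclidean distance between clr images."""
    return np.linalg.norm(clr(x) - clr(y))

x = [0.2, 0.3, 0.5]
y = [0.1, 0.4, 0.5]
d = aitchison_distance(x, y)
```

Note that clr images always sum to zero, and the distance is invariant under rescaling of a composition, which is exactly the scale-free behavior compositional data analysis requires.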
Abstract:
A novel metric comparison of the appendicular skeleton (fore and hind limb) of different vertebrates using the Compositional Data Analysis (CDA) methodological approach is presented. 355 specimens belonging to various taxa of Dinosauria (Sauropodomorpha, Theropoda, Ornithischia and Aves) and Mammalia (Prototheria, Metatheria and Eutheria) were analyzed with CDA. A special focus has been put on Sauropodomorpha dinosaurs, and the Aitchison distance has been used as a measure of disparity in limb element proportions to infer some aspects of functional morphology.
Abstract:
Projective homography sits at the heart of many problems in image registration. In addition to many methods for estimating the homography parameters (R.I. Hartley and A. Zisserman, 2000), analytical expressions to assess the accuracy of the transformation parameters have been proposed (A. Criminisi et al., 1999). We show that these expressions provide less accurate bounds than those based on the earlier results of Weng et al. (1989). The discrepancy becomes more critical in applications involving the integration of frame-to-frame homographies and their uncertainties, as in the reconstruction of terrain mosaics and the camera trajectory from flyover imagery. We demonstrate these issues through selected examples.
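As background, homography estimation from point correspondences via the standard Direct Linear Transform (DLT) can be sketched as follows (a minimal numpy illustration with synthetic, noise-free correspondences; it does not reproduce the uncertainty analyses being compared):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: stack two linear constraints per correspondence and take the
    SVD null vector of the resulting 8x9 (or larger) system A h = 0."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]            # fix the projective scale

# Synthetic test: project four points with a known homography.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
H_true = np.array([[1.2, 0.1, 0.3],
                   [0.0, 0.9, -0.2],
                   [0.01, 0.0, 1.0]])
pts = np.c_[src, np.ones(4)] @ H_true.T
dst = pts[:, :2] / pts[:, 2:]
H = estimate_homography(src, dst)
```

With exact data, four correspondences determine the eight degrees of freedom and the recovered `H` matches `H_true`; with noisy data, the covariance of `H` is what the competing analytical expressions above attempt to bound.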
Abstract:
Exercises, exam questions and solutions for a fourth-year hyperbolic geometry course. Diagrams for the questions are collected in the support.zip file as .eps files.
Abstract:
Based on the experiences of Colombia, Brazil and Bolivia, the paper proposes a general analytical framework for participatory mechanisms. The analysis is oriented to detecting the incentives in each system and the ethics and behavior sustaining them. It investigates the sustainability of participatory democracy in the face of tensions with representative democracy. The article presents a theoretical framework built from these experiences of institutional design and political practice, and confronts it with the theoretical conceptualizations of participatory democracy in Bobbio, Sartori, Elster and Nino, among others. In this context, different ways in which those schemes can be inserted into political systems become apparent, along with the variables that result from combining elements of direct, representative and participatory democracy.
Abstract:
A resource for the assessment of geometry teaching and learning in secondary education, from the perspective of both new and more experienced teachers. It is designed to broaden and deepen subject knowledge and to offer practical advice and classroom ideas in the context of current practice and research. It places particular emphasis on: understanding the fundamental ideas of the geometry curriculum; learning geometry effectively; current research and practice; misconceptions and errors; geometric reasoning; problem solving; and the role of technology in learning geometry.
Abstract:
Special issue titled 'Identidad y educación'. Abstract based on that of the publication.
Abstract:
Abstract based on that of the publication.