994 results for Dresden. Kreuzschule. Bibliothek.
Abstract:
Childhood and education in Munich; assimilated bourgeois Jewish family; father was a lawyer and titular professor; the writer Ludwig Thoma was an assistant of his father; vacations in Marienbad; military service; university studies in Munich with Lujo Brentano; apprenticeship as a lawyer; political interest and joining of the SPD; contacts with the later Bavarian president Kurt Eisner; soldier in World War I; diplomatic mission in Tirol during the last days of World War I; refused to take part in the Bavarian revolution of November 1918, but maintained close contacts with the Eisner government; exact account of the two Bavarian soviet republics of 1919 and their protagonists (Gustav Landauer, Erich Muehsam, Eugen Levine); Bavarian politics and justice 1919-1933; description of Paul Nikolaus Cossmann and his reactionary journal "Sueddeutsche Monatshefte"; defense counsel for Eisner's secretary Felix Fechenbach in the political trial arising from Cossmann's accusations; expulsion of East European Jews by the Bavarian government in 1923; Hitler's coup attempt of 1923; election campaign of March 1933; Nazi takeover of power in Bavaria; dismissal as a lawyer; decision to emigrate.
Abstract:
A chapter from Adolph's book, "Die Freiherrin Kaskel in Dresden," about the Baroness von Kaskel, née Oppenheim, who entertained Saxony's performing artists in her villa.
Abstract:
The famous philosopher Leibniz (1646-1716) was also active in the (cultural) politics of his time. His interest in forming scientific societies never waned, and his efforts led to the founding of the Berlin Academy of Sciences. He also played a part in the founding of the Dresden Academy of Science and the St. Petersburg Academy of Science. Though Leibniz's models for the scientific society were the Royal Society and the French Royal Academy of Sciences, his pansophistic vision of scientific cooperation sometimes took on utopian dimensions. In this paper, I present Leibniz's ideas of scientific cooperation as a kind of religious activity and discuss his various schemes for the founding of such scientific societies.
Abstract:
We study a sensor node with an energy harvesting source. In any slot, the sensor node is in one of two modes: Wake or Sleep. The generated energy is stored in a buffer. The sensor node senses a random field and generates a packet when it is awake. These packets are stored in a queue and transmitted in the Wake mode using the energy available in the energy buffer. We obtain energy management policies which minimize a linear combination of the mean queue length and the mean data loss rate. Then, we obtain two easily implementable suboptimal policies and compare their performance to that of the optimal policy. Next, we extend the Throughput Optimal policy developed in our previous work to sensors with two modes. With this policy, we can increase the throughput substantially and stabilize the data queue by allowing the node to sleep in some slots and to drop some generated packets. This policy requires minimal statistical knowledge of the system. We also modify this policy to decrease the switching costs.
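A minimal simulation sketch of this setting is given below, using a simple energy-threshold rule to choose between Wake and Sleep. This is an illustrative stand-in, not the optimal or Throughput Optimal policy of the paper, and all numeric parameters (harvesting rate, sensing and transmission costs, threshold) are assumptions.

```python
# Minimal sketch (NOT the paper's optimal policy): a slotted simulation with an
# energy buffer and a data queue, where an energy-threshold rule picks Wake/Sleep.
import random

def simulate(slots=50_000, wake_threshold=1.5, sense_cost=0.2, tx_cost=1.0):
    energy, queue = 0.0, 0
    queue_sum = lost = arrivals = 0
    for _ in range(slots):
        energy += random.expovariate(1.0)      # energy harvested this slot (assumption)
        arrivals += 1                          # one field reading arrives per slot (assumption)
        if energy >= wake_threshold:           # Wake: sense, enqueue, transmit if possible
            energy -= sense_cost
            queue += 1
            if queue > 0 and energy >= tx_cost:
                energy -= tx_cost
                queue -= 1
        else:                                  # Sleep: the reading is lost
            lost += 1
        queue_sum += queue
    return queue_sum / slots, lost / arrivals

mean_queue, loss_rate = simulate()
print(f"mean queue length = {mean_queue:.2f}, data loss rate = {loss_rate:.2%}")
# The paper's objective is a weighted sum of these two quantities,
# e.g. cost = mean_queue + beta * loss_rate for some tradeoff weight beta.
```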
A Low ML-Decoding Complexity, High Coding Gain, Full-Rate, Full-Diversity STBC for 4 x 2 MIMO System
Abstract:
This paper proposes a full-rate, full-diversity space-time block code (STBC) with low maximum likelihood (ML) decoding complexity and high coding gain for the 4 transmit antenna, 2 receive antenna (4 x 2) multiple-input multiple-output (MIMO) system that employs 4-/16-QAM. For such a system, the best known code is the DjABBA code; recently, Biglieri, Hong and Viterbo proposed another STBC (the BHV code) for 4-QAM, which has lower ML-decoding complexity than the DjABBA code but, unlike the DjABBA code, does not offer full diversity. The code proposed in this paper has the same ML-decoding complexity as the BHV code for any square M-QAM constellation but has full diversity for 4- and 16-QAM. Compared with the DjABBA code, the proposed code has lower ML-decoding complexity for square M-QAM constellations, higher coding gain for 4- and 16-QAM, and hence a better codeword error rate (CER) performance. Simulation results confirming this are presented.
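To make the notion of ML-decoding complexity concrete, the sketch below runs a brute-force ML detector for the classical 2 x 2 Alamouti STBC over 4-QAM; the proposed 4 x 2 code is not reproduced here, and Alamouti is used only because its structure is widely known. Exhaustive search over all symbol pairs scales as M^2 per codeword, which is the kind of cost that low-ML-complexity designs reduce (Alamouti itself actually decouples into per-symbol decisions; the search is shown only to illustrate the metric).

```python
# Brute-force ML detection of a 2x2 Alamouti STBC over 4-QAM (illustration only).
import itertools
import numpy as np

rng = np.random.default_rng(0)
qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def alamouti(s1, s2):
    # rows: transmit antennas, columns: time slots
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
s = rng.choice(qam4, 2)
noise = 0.1 * (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
Y = H @ alamouti(*s) + noise

# ML decision: minimize the Frobenius-norm metric over all M^2 = 16 symbol pairs
best = min(itertools.product(qam4, repeat=2),
           key=lambda c: np.linalg.norm(Y - H @ alamouti(*c)))
print("ML decision matches the transmitted pair:", np.allclose(best, s))
```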
Abstract:
Receive antenna selection (AS) reduces the hardware complexity of multi-antenna receivers by dynamically connecting the instantaneously best antenna element to the available radio frequency (RF) chain. Due to hardware constraints, the channels at the various antenna elements have to be sounded sequentially to obtain the estimates that are required for selecting the "best" antenna and for coherently demodulating data. Consequently, the channel state information at different antennas is outdated by different amounts. We show that, for this reason, simply selecting the antenna with the highest estimated channel gain is not optimum. Rather, the channel estimates of different antennas should be weighted differently, depending on the training scheme. We derive closed-form expressions for the symbol error probability (SEP) of AS for MPSK and MQAM in time-varying Rayleigh fading channels for arbitrary selection weights, and validate them with simulations. We then derive an explicit formula for the optimal selection weights that minimize the SEP. We find that when selection weights are not used, the SEP need not improve as the number of antenna elements increases, which is in contrast to the ideal channel estimation case. However, the optimal selection weights remedy this situation and significantly improve performance.
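A hedged Monte Carlo sketch of weighted receive-antenna selection follows. The correlation values modelling the outdated estimates and the heuristic choice of weights (set equal to those correlations) are assumptions for illustration, not the closed-form optimal weights derived in the paper.

```python
# Weighted antenna selection with outdated estimates (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
N, snr_db, trials = 4, 10, 20_000
snr = 10 ** (snr_db / 10)
rho = np.array([0.99, 0.97, 0.94, 0.90])   # older estimates -> lower correlation (assumption)
weights = rho                              # heuristic weights, not the paper's optimum

errors = 0
for _ in range(trials):
    h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    e = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    h_hat = rho * h + np.sqrt(1 - rho**2) * e          # outdated channel estimates
    k = np.argmax(np.abs(weights * h_hat) ** 2)        # weighted selection rule
    s = rng.choice([-1.0, 1.0])                        # BPSK symbol
    n = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2 * snr)
    y = h[k] * s + n
    s_hat = 1.0 if (y * np.conj(h_hat[k])).real >= 0 else -1.0   # coherent demodulation
    errors += s_hat != s
print("estimated SEP:", errors / trials)
```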
Abstract:
Modern wireline and wireless communication devices are multimode and multifunctional. In order to support multiple standards on a single platform, it is necessary to develop a reconfigurable architecture that can provide the required flexibility and performance. The channel decoder is one of the most compute-intensive and essential elements of any communication system. Most standards require a reconfigurable channel decoder that is capable of performing Viterbi decoding and Turbo decoding, and the channel decoder also needs to support different configurations of Viterbi and Turbo decoders. In this paper, we propose a reconfigurable channel decoder that can be reconfigured for standards such as WCDMA, CDMA2000, IEEE 802.11, DAB, DVB and GSM. Parameters such as code rate, constraint length, generator polynomials and truncation length can be configured to map any of the above-mentioned standards. A multiprocessor approach is followed to provide higher throughput and scalable power consumption in the various configurations of the reconfigurable Viterbi and Turbo decoders. We also propose a hybrid register-exchange approach for the multiprocessor architecture to minimize power consumption.
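As a concrete reference for the configurability mentioned above (code rate, constraint length, generator polynomials), here is a minimal hard-decision Viterbi decoder sketch. It is a plain single-threaded illustration, not the proposed multiprocessor or hybrid register-exchange architecture, and the rate-1/2, K = 3, (7, 5) code in the example is only an assumption for demonstration.

```python
# Minimal configurable convolutional encoder and hard-decision Viterbi decoder.

def conv_encode(bits, K, polys):
    """Rate-1/len(polys) encoder; polys are generator polynomials as bit masks."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)      # shift in the new bit
        out.extend(bin(state & p).count("1") & 1 for p in polys)
    return out

def viterbi_decode(symbols, K, polys):
    n, n_states, INF = len(polys), 1 << (K - 1), float("inf")
    metric = [0.0] + [INF] * (n_states - 1)              # start from the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(symbols), n):
        rx = symbols[t:t + n]
        new_metric, new_paths = [INF] * n_states, [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = ((s << 1) | b) & ((1 << K) - 1)   # register contents on this branch
                expect = [bin(full & p).count("1") & 1 for p in polys]
                m = metric[s] + sum(r != e for r, e in zip(rx, expect))  # Hamming metric
                ns = full & (n_states - 1)
                if m < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=lambda s: metric[s])]

# Example: rate-1/2, constraint length K = 3, generators (7, 5) in octal
msg = [1, 0, 1, 1, 0, 0, 1]
coded = conv_encode(msg, K=3, polys=(0b111, 0b101))
print(viterbi_decode(coded, K=3, polys=(0b111, 0b101)) == msg)
```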
Abstract:
In this paper, we propose a training-based channel estimation scheme for large non-orthogonal space-time block coded (STBC) MIMO systems. The proposed scheme employs a block transmission strategy where an N_t x N_t pilot matrix is sent (for training purposes) followed by several N_t x N_t square data STBC matrices, where N_t is the number of transmit antennas. At the receiver, we iterate between channel estimation (using an MMSE estimator) and detection (using a low-complexity likelihood ascent search (LAS) detector) until convergence or for a fixed number of iterations. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed scheme at low complexity. The fact that we could show such good results for large STBCs (e.g., the 16 x 16 STBC from cyclic division algebras) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads of pilot-based channel estimation and turbo coding) establishes the effectiveness of the proposed scheme.
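A minimal sketch of the iterate-between-estimation-and-detection idea is shown below, with a linear MMSE detector standing in for the paper's LAS detector and an identity pilot block standing in for the paper's pilot matrix. The dimensions, constellation, SNR, and number of iterations are assumptions.

```python
# Iterative MMSE channel estimation and detection (MMSE detector replaces LAS here).
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, n_data, snr_db, iters = 4, 4, 8, 20, 3
sigma2 = 10 ** (-snr_db / 10)                     # noise variance (assumption)

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
P = np.eye(Nt)                                    # pilot block (identity, for illustration)
X = qpsk[rng.integers(0, 4, (Nt, n_data))]        # data block

noise = lambda shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) * np.sqrt(sigma2 / 2)
Yp, Yd = H @ P + noise((Nr, Nt)), H @ X + noise((Nr, n_data))

def mmse_channel(Y, S):
    # MMSE-style channel estimate from a known (or detected) transmit block S
    return Y @ S.conj().T @ np.linalg.inv(S @ S.conj().T + sigma2 * np.eye(Nt))

H_hat = mmse_channel(Yp, P)                       # initial estimate from pilots only
for _ in range(iters):
    W = np.linalg.inv(H_hat.conj().T @ H_hat + sigma2 * np.eye(Nt)) @ H_hat.conj().T
    X_hat = qpsk[np.argmin(np.abs((W @ Yd)[..., None] - qpsk) ** 2, axis=-1)]
    # refine the channel estimate using pilots plus the detected data block
    H_hat = mmse_channel(np.hstack([Yp, Yd]), np.hstack([P, X_hat]))

print("symbol errors:", int(np.sum(X_hat != X)))
```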
Abstract:
Relay selection for cooperative communications has attracted considerable research interest recently. While several criteria have been proposed and analyzed for selecting one or more relays, mechanisms that perform the selection in a distributed manner have received relatively less attention. In this paper, we analyze a splitting algorithm for selecting the single best relay among a known number of active nodes in a cooperative network. We develop a new and exact asymptotic analysis for computing the average number of slots required to resolve the best relay. We then propose and analyze a new algorithm that addresses the general problem of selecting the best Q >= 1 relays. Regardless of the number of relays, the algorithm selects the best two relays within 4.406 slots and the best three within 6.491 slots, on average. Our analysis also brings out an intimate relationship between multiple access selection and multiple access control algorithms.
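The sketch below illustrates the basic splitting idea for single-best-relay selection: relays whose metrics fall inside the current window transmit, and idle/success/collision feedback shrinks the window until exactly one relay (the best) remains. The Uniform(0,1) metrics and the halving threshold updates are simplifying assumptions, not the paper's exact algorithm or its extension to the best Q >= 1 relays.

```python
# Splitting-based selection of the single best relay (simplified illustration).
import random

def select_best(metrics):
    lo, hi, slots = 0.5, 1.0, 0
    deferred = []                                 # lower half-windows set aside after collisions
    while True:
        slots += 1
        contenders = [i for i, m in enumerate(metrics) if lo < m <= hi]
        if len(contenders) == 1:                  # success: the best relay is resolved
            return contenders[0], slots
        if len(contenders) == 0:                  # idle feedback
            if deferred:
                lo, hi = deferred.pop()           # revisit the most recently deferred window
            else:
                hi, lo = lo, lo / 2               # no relay above lo: lower the window
        else:                                     # collision feedback
            deferred.append((lo, (lo + hi) / 2))  # set the lower half aside
            lo = (lo + hi) / 2                    # contend in the upper half first

n = 20
metrics = [random.random() for _ in range(n)]     # each relay's selection metric (assumption)
winner, slots = select_best(metrics)
print("found the best relay:", winner == max(range(n), key=lambda i: metrics[i]), "in", slots, "slots")
```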
Abstract:
Minor addition of B to the Ti-6Al-4V alloy reduces the prior beta grain size by more than an order of magnitude. TiB formed in situ in the process has been observed to decorate the grain boundaries. This microstructural modification significantly influences the mechanical behavior of the Ti-6Al-4V alloy. In this paper, an overview of our current research on the tensile properties, fracture toughness, and notched and un-notched fatigue properties of Ti-6Al-4V-xB, with x varying between 0.0 and 0.55 wt.%, is presented. Quantitative relationships between the microstructural length scales and the various mechanical properties have been developed. Moreover, the effect of the presence of hard and brittle TiB has also been studied.
Abstract:
In ceramics, dopants offer the possibility of higher creep rates by enhancing diffusion. The present study examines the potential for high-strain-rate superplasticity in a TiO2-doped zirconia by conducting creep experiments together with microstructural characterization. It is shown that both pure and doped zirconia exhibit transitions in creep behaviour from Coble diffusion creep with n ≈ 1 to an interface-controlled process with n ≈ 2. Doping with TiO2 enhances the creep rate by over an order of magnitude. There is evidence of substantial grain boundary sliding, consistent with diffusion creep.
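For reference, the exponent n mentioned here is the stress exponent in the standard power-law creep relation (a textbook form, not a result of this paper), where sigma is the applied stress, d the grain size, p the grain-size exponent, Q the activation energy, and A a material constant:

```latex
\dot{\varepsilon} \;=\; A\,\frac{\sigma^{\,n}}{d^{\,p}}\,\exp\!\left(-\frac{Q}{RT}\right)
```

Coble (grain-boundary) diffusion creep corresponds to n ≈ 1 (with p ≈ 3), while the interface-controlled regime reported above shows n ≈ 2.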
Abstract:
Many areas of scientific research now demand very large amounts of computing power, driven by precision, sophistication, and economic factors, so advanced research in high-performance computing has become inevitable. The basic principle of sharing and collaborative work among geographically separated computers has been known by several names, such as metacomputing, scalable computing, cluster computing and internet computing, and has today metamorphosed into what is known as grid computing. This paper gives an overview of grid computing and compares various grid architectures. We show the role that patterns can play in architecting complex systems, and provide a very pragmatic reference to a set of well-engineered patterns that the practicing developer can apply to crafting his or her own specific applications. We are not aware of a pattern-oriented approach being applied to develop and deploy a grid. Many grid frameworks have been built or are in the process of becoming functional. All these grids differ in some functionality or another, though the basic principle on which they are built is the same. Despite this, there are no standard requirements listed for building a grid. Since the grid is a very complex system, it is essential to have a standard Software Architecture Specification (SAS). We attempt to develop one for use by any grid user or developer. Specifically, we analyze the grid using an object-oriented approach and present the architecture using UML. This paper proposes the use of patterns at all levels (analysis, design, and architectural) of grid development.
Abstract:
Many industrial processes involve reactions between two immiscible liquid phases, and it is very important to increase the efficiency and productivity of such reactions. One important example is the metal-slag system. To increase the reaction rate or efficiency, one must increase the contact surface area of one of the phases. This is done either by emulsifying the slag into the metal phase or the metal into the slag phase; the latter is preferred from the stability viewpoint. Recently, we proposed a simple and elegant mathematical model to describe metal emulsification in the presence of bottom gas bubbling. The same model is extended here. The effects of slag and metal phase viscosity, density, and metal droplet size on the metal droplet velocity in the slag phase are discussed for the above-mentioned metal emulsification process. The model's results have been compared with experimental data.
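As a point of reference for how viscosity, density difference, and droplet size enter such calculations, the snippet below evaluates the classical Hadamard-Rybczynski terminal velocity of a fluid droplet settling in another liquid. This textbook relation is not the model proposed in the paper, and all property values are illustrative assumptions.

```python
# Hadamard-Rybczynski terminal velocity of a fluid droplet (textbook relation,
# NOT the paper's model); all property values below are illustrative only.
def hadamard_rybczynski_velocity(r, rho_d, rho_c, mu_d, mu_c, g=9.81):
    """Terminal velocity (m/s) of a droplet of radius r (m) in a continuous phase."""
    return (2 * g * r**2 * (rho_d - rho_c) / (3 * mu_c)
            * (mu_c + mu_d) / (2 * mu_c + 3 * mu_d))

# Assumed values: a 1 mm diameter metal droplet settling through a slag phase
v = hadamard_rybczynski_velocity(r=0.5e-3, rho_d=7000.0, rho_c=3000.0,
                                 mu_d=0.005, mu_c=0.3)
print(f"settling velocity ~ {100 * v:.1f} cm/s")
```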
Abstract:
Moore's Law has driven the semiconductor revolution, enabling over four decades of scaling in frequency, size, complexity, and power. However, the limits of physics are preventing further scaling of speed, forcing a paradigm shift towards multicore computing and parallelization. In effect, the system is taking over the role that the single CPU was playing: high-speed signals running not only through chips but also through packages and boards connect ever more complex systems. High-speed signals making their way through the entire system cause new challenges in the design of computing hardware. Inductance, phase shifts and velocity-of-light effects, material resonances, and wave behavior not only become prevalent but also need to be calculated accurately and rapidly to enable short design cycle times. In essence, continuing to scale with Moore's Law requires the incorporation of Maxwell's equations in the design process. Incorporating Maxwell's equations into the design flow is only possible through the combined power that new algorithms, parallelization and high-speed computing provide. At the same time, incorporation of Maxwell-based models into circuit- and system-level simulation presents a massive accuracy, passivity, and scalability challenge. In this tutorial, we navigate through the often confusing terminology and concepts behind field solvers, show how advances in field solvers enable integration into EDA flows, present novel methods for model generation and passivity assurance in large systems, and demonstrate the power of cloud computing in enabling the next generation of scalable Maxwell solvers and the next generation of Moore's Law scaling of systems. We intend to show the truly symbiotic and growing relationship between Maxwell and Moore!