906 results for Variable structure systems


Relevance:

30.00%

Publisher:

Abstract:

Semiclassical theories such as the Thomas-Fermi and Wigner-Kirkwood methods give a good description of the smooth average part of the total energy of a Fermi gas in some external potential when the chemical potential is varied. However, in systems with a fixed number of particles N, these methods overbind the actual average of the quantum energy as N is varied. We describe a theory that accounts for this effect. Numerical illustrations are discussed for fermions trapped in a harmonic oscillator potential and in a hard-wall cavity, and for self-consistent calculations of atomic nuclei. In the latter case, the influence of deformations on the average behavior of the energy is also considered.

Abstract:

Six new copper complexes of di-2-pyridyl ketone nicotinoylhydrazone (HDKN) have been synthesized. The complexes have been characterized by a variety of spectroscopic techniques, and the structure of [Cu(DKN)2]·H2O has been determined by single-crystal X-ray diffraction. The compound [Cu(DKN)2]·H2O crystallized in the monoclinic space group P21 and has a distorted octahedral geometry. The IR spectra revealed variable modes of chelation for the investigated ligand. The EPR spectra of the compounds [Cu2(DKN)2(μ-N3)2] and [Cu2(DKN)2(μ-NCS)2] in the polycrystalline state suggest a dimeric structure, as they exhibited a half-field signal, which indicates a weak interaction between the two Cu(II) ions in these complexes.

Abstract:

The focus of self-assembly as a strategy for synthesis has been confined largely to molecules, because of the importance of manipulating the structure of matter at the molecular scale. We have investigated the influence of temperature and pH, in addition to the concentration of the capping agent used, on the formation of the nano-bio conjugates. For example, a narrower size distribution of the nanoparticles was observed with increasing protein concentration, which supports the view that γ-globulin acts both as a controller of nucleation and as a stabiliser. As analyzed through various photophysical, biophysical and microscopic techniques such as TEM, AFM, C-AFM, SEM, DLS, OPM, CD and FTIR, we observed that the initial photoactivation of γ-globulin at pH 12 for 3 h resulted in small protein fibres of ca. Further irradiation for 24 h led to the formation of self-assembled long fibres of the protein of ca. 5-6 nm and the observation of a surface plasmon resonance band at around 520 nm with concomitant quenching of the luminescence intensity at 680 nm. The observed light-triggered self-assembly of the protein and its effect on controlling the fate of the anchored nanoparticles can be compared with naturally occurring processes such as photomorphogenesis. Furthermore, our approach offers a way to understand the role played by the self-assembly of the protein in the ordering and knock-out of the metal nanoparticles, and also in the design of nano-biohybrid materials for medicinal and optoelectronic applications. Investigation of the potential applications of the NIR-absorbing and water-soluble squaraine dyes 1-3 for protein labeling and as anti-amyloid agents forms the subject matter of the third chapter of the thesis. The study of their interactions with various proteins revealed that 1-3 showed unique interactions towards serum albumins as well as lysozyme.
The interaction with lysozyme led to changes of 69%, 71% and 49% in the absorption spectra as well as significant quenching of the fluorescence intensity of the dyes 1-3, respectively. Half-reciprocal analysis of the absorption data and isothermal titration calorimetric (ITC) analysis of the titration experiments gave a 1:1 stoichiometry for the complexes formed between lysozyme and the squaraine dyes, with association constants (Kass) in the range 10⁴-10⁵ M⁻¹. We have determined the changes in the free energy (ΔG) for the complex formation, and the values are found to be -30.78, -32.31 and -28.58 kJ mol⁻¹ for the dyes 1, 2 and 3, respectively. Furthermore, we have observed a strong induced CD (ICD) signal corresponding to the squaraine chromophore in the case of the halogenated squaraine dyes 2 and 3 at 636 and 637 nm, confirming the complex formation in these cases. To understand the nature of the interaction of the squaraine dyes 1-3 with lysozyme, we have investigated the interaction of the dyes with different amino acids. These results indicated that the dyes 1-3 showed significant interactions with cysteine and glutamic acid, which are present in the side chains of lysozyme. In addition, the temperature-dependent studies revealed that the interaction between the dyes and lysozyme is irreversible. Furthermore, we have investigated the interactions of these NIR dyes with β-amyloid fibres derived from lysozyme to evaluate their potential as inhibitors of this biologically important protein aggregation. β-Amyloid fibrils are insoluble protein aggregates that have been associated with a range of neurodegenerative diseases, including Huntington's, Alzheimer's, Parkinson's and Creutzfeldt-Jakob diseases. We have synthesized amyloid fibres from lysozyme by incubating it in acidic solution below pH 4 and allowing amyloid fibres to form at elevated temperature.
To quantify the binding affinities of the squaraine dyes 1-3 with β-amyloids, we have carried out isothermal titration calorimetric (ITC) measurements. The association constants were determined to be 1.2 × 10⁵, 3.6 × 10⁵ and 3.2 × 10⁵ M⁻¹ for the dyes 1-3, respectively. To gain more insight into the amyloid-inhibiting nature of the squaraine dyes under investigation, we have carried out thioflavin assays, CD, isothermal titration calorimetry and microscopic analysis. The addition of the dyes 1-3 (5 μM) led to complete quenching of the apparent thioflavin fluorescence, thereby indicating the destabilization of the β-amyloid fibres in the presence of the squaraine dyes. Further, the inhibition of the amyloid fibres by the squaraine dyes 1-3 has been evidenced through DLS, TEM, AFM and SAED, wherein we observed the complete destabilization of the amyloid fibre and transformation of the fibre into spherical particles of ca. These results demonstrate that the squaraine dyes 1-3 can act as protein labeling agents as well as inhibitors of protein amyloidogenesis. The last chapter of the thesis describes the synthesis and investigation of the self-assembly as well as the bio-imaging aspects of a few novel tetraphenylethene conjugates 4-6. Expectedly, these conjugates showed significant solvatochromism and exhibited a hypsochromic shift (negative solvatochromism) as the solvent polarity increased; these observations were justified through theoretical studies employing the B3LYP/6-31G method. We have investigated the self-assembly properties of these D-A conjugates through variation of the percentage of water in acetonitrile solution, due to the formation of nanoaggregates. Further, the contour map of the observed fluorescence intensity as a function of the fluorescence excitation and emission wavelengths confirmed the formation of J-type aggregates in these cases.
To gain a better understanding of the type of self-assemblies formed from the TPE conjugates 4-6, we have carried out morphological analysis through various microscopic techniques such as DLS, SEM and TEM. At ca. 70% water content, we observed rod-shaped architectures of ~780 nm in diameter and ~12 μm in length, as evidenced through TEM and SEM analysis. We have made similar observations with the dodecyl conjugate 5. At ca. 70% and 50% water/acetonitrile mixtures, the aggregates formed from 4 and 5 were found to be highly crystalline, and such structures transformed to an amorphous nature as the water fraction was increased to 99%. To evaluate the potential of the conjugates as bio-imaging agents, we have carried out in vitro cytotoxicity and cellular uptake studies through MTT assays, flow cytometry and confocal laser scanning microscopy. Thus, nanoparticles of these conjugates, which exhibited efficient emission, a large Stokes shift, good stability, biocompatibility and excellent cellular imaging properties, can have potential applications for tracking cells as well as in cell-based therapies. In summary, we have synthesized novel functional organic chromophores and carried out a systematic investigation of the self-assembly of these synthetic and biological building blocks under a variety of conditions. The investigation of the interaction of the water-soluble NIR squaraine dyes with lysozyme indicates that these dyes can act as protein labeling agents, and their efficiency in inhibiting β-amyloid fibres indicates their potential as anti-amyloid agents.
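The free energies quoted above follow from the association constants via ΔG = −RT ln Kass; a minimal sketch (T = 298 K is an assumption, as the abstract does not state the temperature):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1
T = 298.0  # assumed temperature in K (not stated in the abstract)

def delta_g_kj(k_ass):
    """Free energy of association in kJ/mol from the association constant."""
    return -R * T * math.log(k_ass) / 1000.0

# Association constants from the ITC measurements with the beta-amyloid fibres
for name, k in [("dye 1", 1.2e5), ("dye 2", 3.6e5), ("dye 3", 3.2e5)]:
    print(f"{name}: dG = {delta_g_kj(k):.2f} kJ/mol")
```

The resulting values (around −29 to −32 kJ/mol) are of the same order as the ΔG values reported for the lysozyme complexes.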

Abstract:

The study of variable stars is an important topic of modern astrophysics. After the invention of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data calls for automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic) and is caused by various reasons. In some cases the variation is due to internal thermo-nuclear processes, and such stars are generally known as intrinsic variables; in other cases it is due to external processes, like eclipses or rotation, and these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data, which contain time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and to classify it is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages like observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, and also some other derived parameters. Of these, the period is the most important parameter, since wrong periods can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. For ground-based observations this is due to the daily variation of daylight and the weather conditions, while observations from space may suffer from the impact of cosmic-ray particles. Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model like a Gaussian etc.). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of the methods can be brought under automation, none of the methods stated above can fully recover the true periods. Wrong detection of the period can be due to several reasons, such as power leakage to other frequencies, which is caused by the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Also, spurious periods appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem, especially for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, “Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial”. Derekas et al. (2007) and Deb et al. (2010) state: “The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification”.
It will be beneficial for the variable star astronomical community if basic parameters, such as period, amplitude and phase, are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and for entering them in the “General Catalogue of Variable Stars” or other databases like the “Variable Star Index”, the characteristics of the variability have to be quantified in terms of variable star parameters.
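The phase-folding and Phase Dispersion Minimisation steps described above can be sketched in a few lines. The light curve below is synthetic (an assumption for illustration), and the statistic is a simplified version of Stellingwerf's PDM, not the thesis code:

```python
import numpy as np

def phase_fold(t, period):
    """Fold observation times on a trial period; returns phases in [0, 1)."""
    return (t / period) % 1.0

def pdm_statistic(t, mag, period, n_bins=10):
    """Simplified PDM statistic (after Stellingwerf 1978): ratio of the mean
    within-phase-bin variance to the total variance. Small values indicate a
    good trial period, since folding on the true period gives tight bins."""
    phase = phase_fold(t, period)
    bins = np.floor(phase * n_bins).astype(int)
    num, den = 0.0, 0
    for b in range(n_bins):
        m = mag[bins == b]
        if len(m) > 1:
            num += np.var(m) * len(m)
            den += len(m)
    return (num / den) / np.var(mag)

# Synthetic, unevenly sampled sinusoidal light curve with true period 0.75 d
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 30, 400))
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 0.75) + rng.normal(0, 0.02, t.size)

trial_periods = np.linspace(0.5, 1.0, 2001)
stats = [pdm_statistic(t, mag, p) for p in trial_periods]
best = trial_periods[int(np.argmin(stats))]
print(f"recovered period: {best:.4f} d")
```

In practice the trial-period grid must be fine enough (roughly P²/T, with T the total time span) so that the narrow dip around the true period is not stepped over.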

Abstract:

Ab initio self-consistent DFS calculations are performed for five different symmetric atomic systems from Ar-Ar to Pb-Pb. The level structure for the 2pπ-2pσ crossing as a function of the united-atom charge Z_u is studied and interpreted. Many-body effects, spin-orbit splitting, direct relativistic effects as well as indirect relativistic effects are of different importance for different Z_u. For the I-I system a comparison with other calculations is given.

Abstract:

The quasimolecular M radiation emitted in collisions between Xe ions of up to 6 MeV energy and solid targets of Ta, Au, Pb and Bi, as well as a gaseous target of Pb(CH_3)_4, has been studied. Using a realistic theoretical correlation diagram, a semiquantitative explanation of the observed peak structure is given.

Abstract:

While most data analysis and decision support tools use numerical aspects of the data, Conceptual Information Systems focus on their conceptual structure. This paper discusses how both approaches can be combined.

Abstract:

Conceptual Information Systems unfold the conceptual structure of data stored in relational databases. In the design phase of the system, conceptual hierarchies have to be created which describe different aspects of the data. In this paper, we describe two principal ways of designing such conceptual hierarchies, data-driven design and theory-driven design, and discuss their advantages and drawbacks. The central part of the paper shows how Attribute Exploration, a knowledge acquisition tool developed by B. Ganter, can be applied for narrowing the gap between both approaches.
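Attribute Exploration builds on Formal Concept Analysis. Its core operation, deriving all formal concepts (extent/intent pairs) from a binary object-attribute context, can be sketched as follows; the context data here are invented for illustration, not taken from the paper:

```python
from itertools import combinations

# A tiny formal context: objects x attributes (illustrative data)
objects = ["duck", "owl", "carp", "frog"]
attributes = ["flies", "swims", "has_feathers"]
incidence = {
    "duck": {"flies", "swims", "has_feathers"},
    "owl": {"flies", "has_feathers"},
    "carp": {"swims"},
    "frog": {"swims"},
}

def extent(attrs):
    """All objects having every attribute in attrs."""
    return {g for g in objects if attrs <= incidence[g]}

def intent(objs):
    """All attributes shared by every object in objs."""
    if not objs:
        return set(attributes)
    return set.intersection(*(incidence[g] for g in objs))

# Enumerate all formal concepts by closing every attribute subset
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(attributes, r):
        e = extent(set(combo))
        concepts.add((frozenset(e), frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(e), sorted(i))
```

Ordered by extent inclusion, these concepts form the concept lattice that a Conceptual Information System presents to the analyst; Attribute Exploration then interactively asks for implications between attributes to refine such a context.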

Abstract:

Research on transition-metal nanoalloy clusters composed of a few atoms is fascinating because of their unusual properties, which arise from the interplay among structure, chemical order and magnetism. Such nanoalloy clusters can be used to construct nanometer devices for technological applications by manipulating their remarkable magnetic, chemical and optical properties. Determining the nanoscopic features exhibited by magnetic alloy clusters requires a systematic global and local exploration of their potential-energy surface in order to identify all the relevant energetically low-lying magnetic isomers. In this thesis the sampling of the potential-energy surface has been performed by employing state-of-the-art spin-polarized density-functional theory in combination with graph theory and the basin-hopping global optimization technique. This combination is vital for a quantitative analysis of the quantum mechanical energetics. The first approach, i.e., spin-polarized density-functional theory together with the graph theory method, is applied to study Fe$_m$Rh$_n$ and Co$_m$Pd$_n$ clusters having $N = m+n \leq 8$ atoms. We carried out a thorough and systematic sampling of the potential-energy surface by taking into account all possible initial cluster topologies, all different distributions of the two kinds of atoms within the cluster, the entire concentration range between the pure limits, and different initial magnetic configurations such as ferro- and anti-ferromagnetic coupling. The remarkable magnetic properties shown by FeRh and CoPd nanoclusters are attributed to the extremely reduced coordination number together with the charge transfer from the 3$d$ to the 4$d$ elements. The second approach, i.e., spin-polarized density-functional theory together with the basin-hopping method, is applied to study the small Fe$_6$, Fe$_3$Rh$_3$ and Rh$_6$ and the larger Fe$_{13}$, Fe$_6$Rh$_7$ and Rh$_{13}$ clusters as illustrative benchmark systems.
This method is able to identify the true ground-state structures of Fe$_6$ and Fe$_3$Rh$_3$, which were not obtained by using the first approach. However, both approaches predict a similar cluster for the ground state of Rh$_6$. Moreover, the computational time taken by this approach is found to be significantly lower than that of the first approach. The ground-state structure of the Fe$_{13}$ cluster is found to be icosahedral, whereas the Rh$_{13}$ and Fe$_6$Rh$_7$ isomers relax into cage-like and layered-like structures, respectively. All the clusters display a remarkable variety of structural and magnetic behaviors. It is observed that isomers having a similar shape, with only a small distortion with respect to each other, can exhibit quite different magnetic moments. This has been interpreted as a probable artifact of the spin-rotational symmetry breaking introduced by the spin-polarized GGA. The possibility of combining spin-polarized density-functional theory with other global optimization techniques, such as the minima-hopping method, could be the next step in this direction. Such a combination is expected to be an ideal sampling approach, having the advantage of efficiently avoiding the search over irrelevant regions of the potential-energy surface.
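Basin hopping alternates random perturbations with local relaxations. A minimal sketch using SciPy on a Lennard-Jones surrogate energy (the actual work uses spin-polarized DFT energies, which are far too costly for a toy example):

```python
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(x):
    """Total Lennard-Jones energy (reduced units) of an N-atom cluster.
    Stands in here for the DFT total energy used in the thesis."""
    pos = x.reshape(-1, 3)
    e = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = np.sum((pos[i] - pos[j]) ** 2)
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 ** 2 - inv6)
    return e

# Basin hopping: random Monte Carlo displacement moves, each followed by a
# local relaxation; the Metropolis criterion acts on the relaxed energies.
rng = np.random.default_rng(1)
x0 = rng.uniform(-1.0, 1.0, 6 * 3)  # 6 atoms, random start
result = basinhopping(lj_energy, x0, niter=100, stepsize=0.5, seed=1,
                      minimizer_kwargs={"method": "L-BFGS-B"})
print(f"lowest energy found: {result.fun:.6f}")  # LJ6 global minimum: -12.712062 (octahedron)
```

LJ6 has only two local minima, so basin hopping finds the octahedral global minimum quickly; for the 13-atom clusters discussed above the number of minima is far larger, which is why the efficiency of the sampling scheme matters.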

Abstract:

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: On the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies. On the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses seem to be inverse: While Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were especially designed to eliminate those. Ontologies, on the contrary, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of regarding them as competing paradigms, the obvious potential synergies from a combination of both motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data.
While several techniques to exploit the emergent patterns have been proposed, a systematic analysis, especially regarding paradigms from the field of ontology learning, is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors for capturing emergent semantics from Social Annotation Systems. We focus here on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords. Hereby, we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights are used to inform the final task, namely the creation of concept hierarchies. For this purpose, generality-based algorithms exhibit advantages compared to clustering approaches. In order to complement the identification of suitable methods to capture semantic structures, we analyze as a next step several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings.
From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then have a look at system abuse and spam. While observing a mixed picture, we suggest that decisions should be taken case by case instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise on the one hand tools which foster the emergence of semantics, and on the other hand applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services of a Social Semantic Web.
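One relatedness measure of the kind alluded to above, cosine similarity of tag co-occurrence vectors, can be sketched as follows. The folksonomy data are invented, and excluding the compared tags from each other's vectors is one common convention, not necessarily the one used in this work:

```python
from collections import Counter
from itertools import combinations
from math import sqrt

# Toy tag assignments (one set of tags per annotated resource); illustrative only
posts = [
    {"python", "programming", "tutorial"},
    {"python", "programming", "web"},
    {"java", "programming", "tutorial"},
    {"music", "jazz"},
    {"music", "jazz", "concert"},
]

# Tag co-occurrence vectors: cooc[t][u] = number of posts containing both t and u
cooc = {}
for tags in posts:
    for t, u in combinations(sorted(tags), 2):
        cooc.setdefault(t, Counter())[u] += 1
        cooc.setdefault(u, Counter())[t] += 1

def relatedness(t, u):
    """Cosine similarity of the two tags' co-occurrence vectors, with t and u
    removed from each other's vector so only shared context counts."""
    vt = {k: v for k, v in cooc.get(t, {}).items() if k not in (t, u)}
    vu = {k: v for k, v in cooc.get(u, {}).items() if k not in (t, u)}
    dot = sum(vt[k] * vu.get(k, 0) for k in vt)
    nt = sqrt(sum(v * v for v in vt.values()))
    nu = sqrt(sum(v * v for v in vu.values()))
    return dot / (nt * nu) if nt and nu else 0.0

print(relatedness("python", "java"))  # high: shared co-occurring tags
print(relatedness("python", "jazz"))  # zero: no shared context
```

Tags that never share co-occurring neighbours score zero, while tags used in similar contexts score close to one, which is exactly the "notion of relatedness" such distributional measures capture.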

Abstract:

In this work, a mixed-integer linear dispatch optimization model for power plants and storage units was developed and used to study Germany's energy supply in the year 2050 according to the "Leitstudie" scenarios 2050 A and 2050 C ([Nitsch et al., 2012]), in which renewable energies contribute more than 85% of electricity generation and wind and solar power cause strong fluctuations of the residual electricity demand (residual load) to be covered by dispatchable power plants and storage. In scenario 2050 A, 67 TWh of hydrogen, to be produced electrolytically from renewable electricity, are earmarked for transport. In scenario 2050 C, no hydrogen is earmarked for transport, and the more efficient electromobility covers 100% of individual transport. Hence, less renewable electricity is needed to reach the same renewable share in the transport sector. Since electric vehicles furthermore offer load management potential, the residual loads of the two scenarios differ in their temporal characteristics and annual totals. The focus of the study was on determining the utilization and operating mode of the "power plant" park assumed in the scenarios, consisting of plants for pure electricity generation; combined heat and power (CHP) plants equipped with heat storage, electric heating rods and gas backup boilers; electricity storage; and heat pumps that can be used for load management via heat storage. The schedule of these components was optimized for minimal total variable cost of electricity and heat generation over a planning horizon of four days at a time. The optimization problem was solved with the linear branch-and-cut solver of the software CPLEX. By means of so-called rolling planning, the plant and storage dispatch for the complete scenario years was obtained by concatenating the planning results for overlapping planning periods.
It was shown that the CHP share in covering the heat load is small. This was attributed to the temporal structure of the residual electricity load, the heat-side dimensioning of the plants, and the fact that only short-term heat storage was provided for. The heat-side dimensioning of the CHP plants limited their share of heat coverage, since in winter, at high residual electricity load, little free capacity was available for charging the heat storage. In the calculations for scenarios 2050 A and C, the average CHP share of the heat demand of about 100 TWh_th was 40% and 60%, respectively, although the CHP design would have allowed a theoretical share of over 97% of the heat load coverage had there been no restrictions from the electricity side. Furthermore, the CO2 abatement effect of the CHP heat storage and of load management with heat pumps was investigated. In scenario 2050 A no significant CO2 abatement from the CHP heat storage was found, whereas in scenario 2050 C a small but significant CO2 saving of 1.6% of the total emissions of electricity generation and CHP-based heat supply resulted. Load management with heat pumps avoided emissions of 110 thousand tonnes of CO2 (0.4% of total emissions) in scenario A and 213 thousand tonnes in scenario C (0.8% of total emissions). In addition, the competition between solar thermal district heating and CHP feeding into the same heat networks was considered; a further restriction of CHP generation by the feed-in priority of solar thermal energy was found. Moreover, a lower bound of 6.5 and 8.8 TWh_th, respectively, was determined for the minimum hydrogen storage capacity required in the scenarios. The results of this work suggest determining the techno-economic potential of long-term heat storage for a better integration of CHP into the system, or,
more generally, searching for more suitable heat sector scenarios, since it became clear that for public heat supply, CHP in combination with short-term heat storage, gas boilers and electric heaters does not achieve a very effective CO2 reduction in the scenarios. It should be investigated, for example, whether a multivalent system of CHP, heat storage and heat pumps could be an economically viable alternative, followed by an analysis of the optimal shares of CHP, heat pumps and solar thermal energy in the heat market.
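The core of such a dispatch model, minimizing variable generation cost subject to covering the residual load in every hour, can be sketched as a tiny linear program. This toy omits the integer commitment variables, the storage units and the heat side of the actual model, and all numbers are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Toy dispatch over 4 time steps: two plants must cover a fluctuating
# residual load at minimal variable cost. Real models of this kind are
# mixed-integer and solved with e.g. CPLEX, as in the thesis.
T = 4
residual_load = np.array([30.0, 80.0, 20.0, 60.0])  # MW, assumed values
cost = np.array([20.0, 50.0])                       # EUR/MWh per plant
p_max = np.array([50.0, 60.0])                      # MW capacity per plant

# Variables: p[plant, t] flattened to 2*T entries; objective: total cost
c = np.repeat(cost, T)

# Equality constraints: total plant output equals residual load in every hour
A_eq = np.zeros((T, 2 * T))
for t in range(T):
    A_eq[t, t] = 1.0       # plant 0 at time t
    A_eq[t, T + t] = 1.0   # plant 1 at time t
b_eq = residual_load

bounds = [(0.0, p_max[0])] * T + [(0.0, p_max[1])] * T
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(f"optimal variable cost: {res.fun:.1f} EUR")
```

The solver dispatches the cheap plant first and uses the expensive one only in the hours where the load exceeds 50 MW; adding on/off binaries, minimum up-times, storage balances and CHP heat coupling turns this LP into the MILP structure described above.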

Abstract:

This dissertation introduces and investigates systems of parallel communicating restarting automata (PCRA systems). Two well-known concepts from the fields of formal languages and automata theory are combined: the model of restarting automata and so-called PC systems (systems of parallel communicating components). A PCRA system consists of finitely many restarting automata which, on the one hand, perform local computations in parallel and independently of one another and, on the other hand, may communicate with each other. Communication follows a fixed communication protocol realized by means of special communication states. An essential feature of the communication structure in systems of cooperating components is whether communication is centralized or non-centralized. While in a non-centralized communication structure every component may communicate with every other component, in a centralized communication structure all communication takes place exclusively with a designated master component. One of the most important results of this work shows that centralized systems and non-centralized systems have the same computational power (which is generally not the case for PC systems). Moreover, using multicast or broadcast communication in addition to point-to-point communication does not increase the computational power either. Furthermore, the expressive power of PCRA systems is investigated and compared with that of PC systems of finite automata and with that of multi-head automata.
PC systems of finite automata are known to have the same expressive power as one-way multi-head automata and form a lower bound for the expressive power of PCRA systems with one-way components. In fact, PCRA systems are stronger than PC systems of finite automata even when the components taken by themselves have the same expressive power, i.e., characterize the regular languages. For PCRA systems with two-way components, the language classes of two-way multi-head automata in the deterministic and the nondeterministic case are shown as lower bounds; these in turn correspond to the well-known complexity classes L (deterministic logarithmic space) and NL (nondeterministic logarithmic space). The class of context-sensitive languages is shown as an upper bound. In addition, extensions of restarting automata are considered (the non-forgetting property and the shrinking property) which increase the computational power of individual components but do not increase the power of systems. The language classes characterized by PCRA systems are closed under various language operations, and some of them are even abstract families of languages (so-called AFLs). Finally, problems specific to PCRA systems are examined for decidability. It is shown that emptiness, universality, inclusion, equivalence and finiteness are already non-semi-decidable for systems with two restarting automata of the weakest type. The word problem is shown to be decidable in quadratic time in the deterministic case and in exponential time in the nondeterministic case.

Abstract:

In the Democratic Republic of the Congo (DRC), pigs are raised almost exclusively by smallholders, either in periurban areas of major cities such as Kinshasa or in rural villages. Unfortunately, little information is available regarding pig production in the western part of the DRC, so a survey was carried out to characterize and compare 319 pig production systems in their management and feeding strategies along a periurban-rural gradient in western provinces of the DRC. Pig breeding was the main source of income for 43% of respondents, and half of the respondents were active in mixed pig and crop production, mainly vegetable gardening. Depending on the location, smallholders owned on average 18 pigs, including four sows. The piglet mortality rate varied from 9.5 to 21.8%, while the average weaning age ranged between 2.2 and 2.8 months. The major causes of mortality reported by the farmers were African swine fever (98%), swine erysipelas (60%), trypanosomiasis (31%), swine worm infection (17%), and diarrhoea (12%). The majority of the pigs were reared in pens without free roaming and fed essentially locally available by-products and forage plants, whose nature varied according to the location of the farm. The pig production systems depended on the local environment, particularly in terms of workforce, herd structure and characteristics, production parameters, pig building materials, selling prices and feed resources. It can be concluded that an improvement of Congolese pig production systems should consider (1) a reduction of inbreeding, (2) an improvement in biosecurity to reduce the incidence of African swine fever and the spread of other diseases, and (3) an improvement in feeding practices.