925 results for Picture and Image Generation
Abstract:
Background: The majority of studies have investigated the effect of exercise training (TR) on vascular responses in diabetic animals (DB), but none evaluated the formation of nitric oxide (NO) and advanced glycation end products (AGEs) together with oxidant and antioxidant activities in femoral and coronary arteries from trained diabetic rats. Our hypothesis was that 8 weeks of TR would alter AGE levels in type 1 diabetic rats, thereby ameliorating vascular responsiveness. Methodology/Principal Findings: Male Wistar rats were divided into control sedentary (C/SD), sedentary diabetic (SD/DB) and trained diabetic (TR/DB) groups. DB was induced by streptozotocin (i.p.: 60 mg/kg). TR was performed for 60 min per day, 5 days/week, for 8 weeks. Concentration-response curves to acetylcholine (ACh), sodium nitroprusside (SNP), phenylephrine (PHE) and a thromboxane analog (U46619) were obtained. The protein expression of eNOS, the receptor for AGEs (RAGE), Cu/Zn-SOD and Mn-SOD was analyzed. Tissue NO production and reactive oxygen species (ROS) generation were evaluated. Plasma nitrate/nitrite (NOx-), superoxide dismutase (SOD), catalase (CAT), thiobarbituric acid reactive substances (TBARS) and N-epsilon-(carboxymethyl)lysine (CML, an AGE biomarker) were also measured. A rightward shift in the concentration-response curves to ACh was observed in femoral and coronary arteries from SD/DB, accompanied by an increase in TBARS and CML levels. Decreases in eNOS expression, tissue NO production and NOx- levels were associated with increased ROS generation. A positive interaction between the beneficial effect of TR on the relaxing responses to ACh and the reduction in TBARS and CML levels was observed, without changes in antioxidant activities. eNOS protein expression, tissue NO production and ROS generation were fully re-established in TR/DB, but plasma NOx- levels were only partially restored. Conclusion: Shear stress induced by TR fully restores the eNOS/NO pathway in both preparations from non-treated diabetic rats; however, the massive production of AGEs still affects the relaxing responses, possibly involving other endothelium-dependent vasodilator agents, mainly in the coronary artery.
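For context, concentration-response data of this kind are usually summarized by fitting a sigmoidal (Hill-type) model and comparing the fitted potency and maximal response between groups. The abstract does not state the exact fitting function used, so the form below is only the conventional four-parameter logistic choice:

\[ E([A]) = E_{\min} + \frac{E_{\max} - E_{\min}}{1 + 10^{\,(\log_{10}\mathrm{EC}_{50} - \log_{10}[A])\, n_H}} \]

A rightward shift of the curve corresponds to a higher EC50 (lower pEC50) at a given maximal effect, i.e. reduced sensitivity of the artery to the agonist.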
Abstract:
Objectives The effects of long-term ethanol consumption on the levels of nitric oxide (NO) and the expression of endothelial NO synthase (eNOS), inducible NO synthase (iNOS) and metalloproteinase-2 (MMP-2) were studied in rat kidney. Methods Male Wistar rats were treated with 20% ethanol (v/v) for 6 weeks. Nitrite and nitrate generation was measured by chemiluminescence. Protein and mRNA levels of eNOS and iNOS were assessed by immunohistochemistry and quantitative real-time polymerase chain reaction, respectively. MMP-2 activity was determined by gelatin zymography. Histopathological changes in kidneys and indices of renal function (creatinine and urea) and tissue injury (mitochondrial respiration) were also investigated. Results Chronic ethanol consumption did not alter malondialdehyde levels in the kidney. Ethanol consumption induced a significant increase in renal nitrite and nitrate levels. Treatment with ethanol increased mRNA expression of both eNOS and iNOS. Immunohistochemical assays showed increased immunostaining for eNOS and iNOS after treatment with ethanol. Kidneys from ethanol-treated rats showed increased activity of MMP-2. Histopathological investigation of kidneys from ethanol-treated animals revealed tubular necrosis. Indices of renal function and tissue injury were not altered in ethanol-treated rats. Conclusions Ethanol consumption increased renal metalloproteinase expression/activity, which was accompanied by histopathological changes in the kidney and elevated NO generation. Since iNOS-derived NO and MMPs contribute to progressive renal injury, the increased levels of NO and MMPs observed in ethanol-treated rats might contribute to progressive renal damage.
Abstract:
Abstract Background Xylella fastidiosa, a Gram-negative fastidious bacterium, grows in the xylem of several plants causing diseases such as citrus variegated chlorosis. As the xylem sap contains low concentrations of amino acids and other compounds, X. fastidiosa needs to cope with nitrogen limitation in its natural habitat. Results In this work, we performed a whole-genome microarray analysis of the X. fastidiosa nitrogen starvation response. A time course experiment (2, 8 and 12 hours) of cultures grown in defined medium under nitrogen starvation revealed many differentially expressed genes, such as those related to transport, nitrogen assimilation, amino acid biosynthesis, transcriptional regulation, and many genes encoding hypothetical proteins. In addition, a decrease in the expression levels of many genes involved in carbon metabolism and energy generation pathways was also observed. Comparison of gene expression profiles between the wild type strain and the rpoN null mutant allowed the identification of genes directly or indirectly induced by nitrogen starvation in a σ54-dependent manner. A more complete picture of the σ54 regulon was achieved by combining the transcriptome data with an in silico search for potential σ54-dependent promoters, using a position weight matrix approach. One of these σ54-predicted binding sites, located upstream of the glnA gene (encoding glutamine synthetase), was validated by primer extension assays, confirming that this gene has a σ54-dependent promoter. Conclusions Together, these results show that nitrogen starvation causes intense changes in the X. fastidiosa transcriptome and some of these differentially expressed genes belong to the σ54 regulon.
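As an illustration of the position weight matrix idea used for the in silico σ54 promoter search, the sketch below builds a log-odds PWM from a few aligned promoter elements and scans a candidate sequence for the best-scoring window. The training sites, background model and candidate sequence are hypothetical placeholders, not the authors' data or pipeline.

```python
import math

def build_pwm(aligned_sites, alphabet="ACGT", pseudocount=1.0, background=0.25):
    """Build a log-odds position weight matrix from equal-length aligned sites."""
    length = len(aligned_sites[0])
    pwm = []
    for i in range(length):
        counts = {base: pseudocount for base in alphabet}
        for site in aligned_sites:
            counts[site[i]] += 1
        total = sum(counts.values())
        pwm.append({base: math.log2((counts[base] / total) / background) for base in alphabet})
    return pwm

def best_window(sequence, pwm):
    """Slide the PWM along the sequence; return (best score, offset)."""
    width = len(pwm)
    best = (float("-inf"), -1)
    for start in range(len(sequence) - width + 1):
        score = sum(pwm[i][sequence[start + i]] for i in range(width))
        best = max(best, (score, start))
    return best

# Hypothetical aligned -24/-12-type promoter elements, for illustration only.
training_sites = ["TGGCACGACTTTTGCA", "TGGCACGGCTTTTGCT", "TGGCATGATTTTTGCA"]
pwm = build_pwm(training_sites)
score, offset = best_window("ATTTGGCACGACTTTTGCATCG", pwm)
print(f"best score {score:.2f} at offset {offset}")
```

In a genome-wide search, windows scoring above a chosen threshold upstream of annotated genes would be reported as candidate σ54-dependent promoters and then validated experimentally, as done here by primer extension for glnA.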
Abstract:
Abstract Introduction Several studies link hematological dysfunction to severity of sepsis. Previously we showed that platelet-derived microparticles from septic patients induce vascular cell apoptosis through the NADPH oxidase-dependent release of superoxide. We sought to further characterize the microparticle-dependent vascular injury pathway. Methods During septic shock there is increased generation of thrombin, TNF-α and nitric oxide (NO). Human platelets were exposed for 1 hour to the NO donor diethylamine-NONOate (0.5 μM), lipopolysaccharide (LPS; 100 ng/ml), TNF-α (40 ng/ml), or thrombin (5 IU/ml). Microparticles were recovered through filtration and ultracentrifugation and analyzed by electron microscopy, flow cytometry or Western blotting for protein identification. Redox activity was characterized by lucigenin (5 μM) or coelenterazine (5 μM) luminescence and by 4,5-diaminofluorescein (10 mM) and 2',7'-dichlorofluorescein (10 mM) fluorescence. Endothelial cell apoptosis was detected by phosphatidylserine exposure and by measurement of caspase-3 activity with an enzyme-linked immunoassay. Results Size, morphology, high exposure of the tetraspanins CD9, CD63, and CD81, together with low phosphatidylserine, showed that platelets exposed to NONOate and LPS, but not to TNF-α or thrombin, generate microparticles similar to those recovered from septic patients, and characterize them as exosomes. Luminescence and fluorescence studies, and the use of specific inhibitors, revealed concomitant superoxide and NO generation. Western blots showed the presence of NO synthase II (but not isoforms I or III) and of the NADPH oxidase subunits p22phox, protein disulfide isomerase and Nox. Endothelial cells exposed to the exosomes underwent apoptosis and caspase-3 activation, which were inhibited by NO synthase inhibitors or by a superoxide dismutase mimetic and totally blocked by urate (1 mM), suggesting a role for the peroxynitrite radical. None of these redox properties and proapoptotic effects was evident in microparticles recovered from platelets exposed to thrombin or TNF-α. Conclusion We showed that, in sepsis, NO and bacterial elements are responsible for type-specific platelet-derived exosome generation. Those exosomes have an active role in vascular signaling as redox-active particles that can induce endothelial cell caspase-3 activation and apoptosis by generating superoxide, NO and peroxynitrite. Thus, exosomes must be considered for further developments in understanding and treating vascular dysfunction in sepsis.
Abstract:
Background: Few data exist on the definition of simple, robust parameters to predict image noise in cardiac computed tomography (CT). Objectives: To evaluate the value of a simple measure of subcutaneous tissue as a predictor of image noise in cardiac CT. Methods: 86 patients underwent prospective ECG-gated coronary computed tomographic angiography (CTA) and coronary calcium scoring (CAC) with 120 kV and 150 mA. Image quality was objectively measured by the image noise in the aorta in the cardiac CTA, and low noise was defined as noise < 30 HU. The chest anteroposterior diameter and lateral width, the image noise in the aorta and the skin-sternum (SS) thickness were measured as predictors of cardiac CTA noise. The association between the predictors and image noise was assessed using Pearson correlation. Results: The mean radiation dose was 3.5 ± 1.5 mSv. The mean image noise in the CTA was 36.3 ± 8.5 HU, and the mean image noise in the non-contrast scan was 17.7 ± 4.4 HU. All predictors were independently associated with cardiac CTA noise. The best predictors were SS thickness, with a correlation of 0.70 (p < 0.001), and noise in the non-contrast images, with a correlation of 0.73 (p < 0.001). When evaluating the ability to predict low image noise, the areas under the ROC curve for the non-contrast noise and for the SS thickness were 0.837 and 0.864, respectively. Conclusion: Both SS thickness and CAC noise are simple and accurate predictors of cardiac CTA image noise. These parameters can be incorporated into standard CT protocols to adequately adjust radiation exposure.
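To make the statistical analysis concrete, the sketch below computes a Pearson correlation between a predictor (e.g. skin-sternum thickness) and CTA image noise, and the area under the ROC curve for discriminating low-noise scans (< 30 HU). The numbers are hypothetical placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical measurements for six patients (placeholders, not study data).
ss_thickness = np.array([18.0, 22.5, 25.0, 30.0, 35.5, 40.0])  # skin-sternum thickness, mm
cta_noise = np.array([26.0, 29.0, 33.0, 38.0, 44.0, 50.0])     # image noise in the aorta, HU

r = np.corrcoef(ss_thickness, cta_noise)[0, 1]   # Pearson correlation coefficient
low_noise = (cta_noise < 30).astype(int)         # 1 = low-noise scan (< 30 HU)
# Thinner subcutaneous tissue should predict low noise, so score with the negated predictor.
auc = roc_auc_score(low_noise, -ss_thickness)

print(f"Pearson r = {r:.2f}, AUC for predicting low noise = {auc:.2f}")
```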
Abstract:
Stone Age research in Northern Europe frequently makes gross generalizations about the Mesolithic and Neolithic, although we still lack much basic knowledge on how the people lived. The transition from the Mesolithic to the Neolithic in Europe has been described as a radical shift from an economy dominated by marine resources to one solely dependent on farming. Both the occurrence and the geographical extent of such a drastic shift can be questioned, however. It is therefore important to start out at a more detailed level of evidence in order to present the overall picture, and to account for the variability even in such regional or chronological overviews. Fifteen Stone Age sites were included in this study, ranging chronologically from the Early Mesolithic to the Middle or Late Neolithic, c. 8300–2500 BC, and stretching geographically from the westernmost coast of Sweden to the easternmost part of Latvia within the confines of latitudes 55–59° N. The most prominent sites in terms of the number of human and faunal samples analysed are Zvejnieki, Västerbjers and Skateholm I–II. Human and faunal skeletal remains were subjected to stable carbon and nitrogen isotope analysis to study diet and ecology at the sites. Stable isotope analyses of human remains provide quantitative information on the relative importance of various food sources, an important addition to the qualitative data supplied by certain artefacts and structures or by faunal or botanical remains. A vast number of new radiocarbon dates were also obtained. In conclusion, a rich diversity in Stone Age dietary practice in the Baltic Region was demonstrated. Evidence ranging from the Early Mesolithic to the Late Neolithic shows that neither chronology nor location alone can account for this variety, but that there are inevitably cultural factors as well. Food habits are culturally governed, and therefore we cannot automatically assume that people at similar sites will have the same diet. Stable isotope studies are very important here, since they tell us what people actually consumed, not only what was available, or what one single meal contained. We should not be misled into inferring diet from ritually deposited remains, since things that were mentally important were not always important in daily life. Thus, although a ritual and symbolic norm may emphasize certain food categories, these may in fact contribute very little to the diet. Advances in the analysis of intra-individual variation have produced new data on life-history changes, revealing mobility patterns, breastfeeding behaviour and certain dietary transitions. The inclusion of faunal data has proved invaluable for understanding the stable isotope ecology of a site, and thereby for improving the precision of the interpretations of human stable isotope data. The special case of dogs, though, demonstrates that these animals are not useful for inferring human diet, since, due to the many roles they play in human society, dogs could deviate significantly from humans in their diet, and in several cases have been shown to do so. When evaluating radiocarbon data derived from human and animal remains from the Pitted-Ware site of Västerbjers on Gotland, the importance of establishing the stable isotope ecology of the site before making deductions on reservoir effects was further demonstrated. The main aim of this thesis has been to demonstrate the variation and diversity in human practices, challenging the view of a “monolithic” Stone Age.
By looking at individuals and not only at populations, the whole range of human behaviour has been accounted for, also revealing discrepancies between norm and practice, which are frequently visible both in the archaeological record and in present-day human behaviour.
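As background on how stable isotope values are turned into quantitative dietary estimates, a simple two-endmember linear mixing model is often used; the thesis may well employ more elaborate models, so the form below (with \(\Delta\) the diet-to-tissue fractionation offset) is given only for orientation:

\[ f_{\mathrm{marine}} = \frac{\delta^{13}\mathrm{C}_{\mathrm{consumer}} - \Delta - \delta^{13}\mathrm{C}_{\mathrm{terrestrial}}}{\delta^{13}\mathrm{C}_{\mathrm{marine}} - \delta^{13}\mathrm{C}_{\mathrm{terrestrial}}}, \qquad f_{\mathrm{terrestrial}} = 1 - f_{\mathrm{marine}} \]

where \(f_{\mathrm{marine}}\) is the estimated fraction of dietary protein derived from marine sources, and the endmember values come from the faunal baseline measured at each site.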
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons: (i) portable mobile devices have modest sizes and weights, and therefore inadequate resources, low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems; (ii) multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding. This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency in this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers. In fact, at this level there is a lack of information about user application activity and, consequently, about the impact of power-management decisions on QoS. Even though operating-system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, therefore most authors propose simplified models and heuristic approaches to solve it in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.

Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor. Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, gaming and navigation devices. There is a clear trend towards an increase in LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel-matrix driving circuits and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques to change the image content in order to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while compensating for the resulting luminance reduction and perceived quality degradation using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
The proposed approach overcomes the limitations of CPU-intensive techniques, saving system power without requiring either a dedicated display technology or hardware modifications.

Thesis overview. The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed-memory architectures with messaging support. We tackled the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions that have been discussed throughout this dissertation.
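To make the backlight-scaling idea concrete, the sketch below shows the generic software form of luminance compensation: the backlight is dimmed by a factor and pixel values are scaled up so perceived brightness is roughly preserved, with saturated pixels clipped. The function and parameter names are illustrative assumptions; the dissertation's contribution is precisely to offload this per-pixel work to the hardware image processing unit instead of the CPU.

```python
import numpy as np

def compensate_frame(frame, backlight_factor):
    """Scale 8-bit pixel values to offset a dimmed backlight (factor in (0, 1])."""
    gain = 1.0 / backlight_factor
    # Pixels that would exceed the dynamic range are clipped; the visible error
    # (and hence the QoS impact) grows as the backlight is dimmed further.
    return np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)

# Hypothetical 8-bit grayscale frame and a 30% backlight reduction.
frame = np.full((4, 4), 120, dtype=np.uint8)
compensated = compensate_frame(frame, backlight_factor=0.7)
print(compensated[0, 0])  # 120 scaled by 1/0.7, clipped to the 8-bit range
```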
Abstract:
Bread dough, and particularly wheat dough, due to its viscoelastic behaviour, is probably the most dynamic and complicated rheological system, and its characteristics are very important since they strongly affect the textural and sensorial properties of the final products. The study of dough rheology has been a very challenging task for many researchers, since it can provide a wealth of information about dough formulation, structure and processing. This explains why dough rheology has been a matter of investigation for several decades. In this research, the rheological assessment of doughs and breads was performed using empirical and fundamental methods at both small and large deformation, in order to characterize different types of dough and final products such as bread. In order to study the structural aspects of the food products, image analysis techniques were used to integrate the information coming from the empirical and fundamental rheological measurements. Evaluation of dough properties was carried out by texture profile analysis (TPA), dough stickiness (Chen and Hoseney cell) and uniaxial extensibility determination (Kieffer test) using a Texture Analyser; small-deformation rheological measurements were performed on a controlled stress–strain rheometer; the structure of the different doughs was observed using image analysis; and bread characteristics were studied by texture profile analysis (TPA) and image analysis. The objective of this research was to understand whether the different rheological measurements were able to characterize and differentiate the samples analysed, in order to investigate the effect of different formulations and processing conditions on dough and final product from a structural point of view. To this aim, the following materials were prepared and analysed: frozen dough made without yeast; frozen dough and bread made with frozen dough; doughs obtained using different fermentation methods; doughs made with Kamut® flour; dough and bread made with the addition of ginger powder; and final products coming from different bakeries. The influence of sub-zero storage time on the viscoelastic performance of non-fermented and fermented dough and on the final product (bread) was evaluated using small-deformation and large-deformation methods. In general, the longer the sub-zero storage time, the lower the positive viscoelastic attributes. The effect of fermentation time and of different types of fermentation (straight-dough method, sponge-and-dough procedure and poolish method) on the rheological properties of doughs was investigated using empirical and fundamental analysis, and image analysis was used to integrate this information through the evaluation of the dough's structure. The results of the fundamental rheological tests showed that the incorporation of sourdough (poolish method) provoked changes that were different from those seen with the other types of fermentation. The positive effect of some ingredients (extra-virgin olive oil and a liposomic lecithin emulsifier) in improving the rheological characteristics of Kamut® dough was confirmed also when the dough was subjected to low temperatures (24 hours and 48 hours at 4°C).
Small-deformation oscillatory measurements and large-deformation mechanical tests provided useful information on the rheological properties of samples prepared with different amounts of ginger powder, showing that the sample with the highest amount of ginger powder (6%) had worse rheological characteristics than the other samples. Moisture content, specific volume, texture and crumb-grain characteristics are the major quality attributes of bread products. The different samples analysed, “Coppia Ferrarese”, “Pane Comune Romagnolo” and “Filone Terra di San Marino”, showed a decrease in crumb moisture and an increase in hardness over the storage time. Parameters such as cohesiveness and springiness, evaluated by TPA, which are indicators of the quality of fresh bread, decreased during storage. Using empirical rheological tests we found several differences among the samples, due to the different ingredients used in the formulations and the different processes adopted to prepare them; since these products are handmade, however, such differences can be regarded as added value. In conclusion, small-deformation (in fundamental units) and large-deformation methods played a significant role in monitoring the influence of the different ingredients, processing and storage conditions on dough viscoelastic performance and on the final product. Finally, knowledge of the formulation, processing and storage conditions, together with the evaluation of structural and rheological characteristics, is fundamental for the study of complex matrices like bakery products, where numerous variables can influence the final quality (e.g. raw material, bread-making procedure, time and temperature of fermentation and baking).
Abstract:
Great strides have been made in the last few years in the pharmacological treatment of neuropsychiatric disorders, with the introduction into therapy of several new and more effective agents, which have improved the quality of life of many patients. Despite these advances, a large percentage of patients is still considered "non-responders" to therapy, drawing no benefit from it. Moreover, these patients have a peculiar therapeutic profile, due to the very frequent use of polypharmacy in the attempt to obtain satisfactory remission of the multiple aspects of psychiatric syndromes. Therapy is heavily individualised, and switching from one therapeutic agent to another is quite frequent. One of the main problems of this situation is the possibility of unwanted or unexpected pharmacological interactions, which can occur both during polypharmacy and during switching. Simultaneous administration of psychiatric drugs can easily lead to interactions if one of the administered compounds influences the metabolism of the others. Impaired CYP450 function due to enzyme inhibition is frequent. Other metabolic pathways, such as glucuronidation, can also be affected. Therapeutic Drug Monitoring (TDM) of psychotropic drugs is an important tool for treatment personalisation and optimisation. It deals with the determination of plasma levels of parent drugs and metabolites, in order to monitor them over time and to compare these findings with clinical data. This allows chemical-clinical correlations to be established (such as those between administered dose and therapeutic and side effects), which are essential to obtain maximum therapeutic efficacy while minimising side and toxic effects. The importance of developing sensitive and selective analytical methods for the determination of the administered drugs and their main metabolites is therefore evident, in order to obtain reliable data that can correctly support clinical decisions. During the three years of the Ph.D. programme, analytical methods based on HPLC were developed, validated and successfully applied to the TDM of psychiatric patients undergoing treatment with drugs belonging to the following classes: antipsychotics, antidepressants and anxiolytic-hypnotics. The biological matrices processed were blood, plasma, serum, saliva, urine, hair and rat brain. Among antipsychotics, both atypical and classical agents have been considered, such as haloperidol, chlorpromazine, clotiapine, loxapine, risperidone (and 9-hydroxyrisperidone), clozapine (as well as N-desmethylclozapine and clozapine N-oxide) and quetiapine. While the need for accurate TDM of schizophrenic patients is increasingly recognized by psychiatrists, only in the last few years has the same attention been paid to the TDM of depressed patients. This is leading to the acknowledgement that depression pharmacotherapy can greatly benefit from the accurate application of TDM. For this reason, the research activity has also been focused on first- and second-generation antidepressant agents, such as tricyclic antidepressants, trazodone and m-chlorophenylpiperazine (m-cpp), paroxetine and its three main metabolites, venlafaxine and its active metabolite, and the most recent antidepressant introduced onto the market, duloxetine. Among anxiolytic-hypnotics, benzodiazepines are very often involved in the pharmacotherapy of depression for the relief of anxious components; for this reason, it is useful to monitor these drugs as well, especially in cases of polypharmacy.
The results obtained during these three years of the Ph.D. programme are reliable, and the developed HPLC methods are suitable for the qualitative and quantitative determination of CNS drugs in biological fluids for TDM purposes.
Abstract:
The suitability of hybrid materials based on zinc phosphate hydrate cements for use as corrosion-inhibiting inorganic pigments or in prosthetic and conservative bone and dental therapy has been intensively investigated empirically worldwide since the 1990s. In the present work, reference samples, i.e. alpha- and beta-hopeite (abbreviated a-ZPT and b-ZPT), were first prepared by a hydrothermal crystallization process in aqueous medium at 20°C and 90°C. The crystal structure of both polymorphs of zinc phosphate tetrahydrate, Zn3(PO4)2·4H2O, was completely determined. Single-crystal structure analysis shows that the main difference between the alpha and beta forms of zinc phosphate tetrahydrate lies in two different arrangements of the hydrogen bonds. The corresponding three- and two-dimensional hydrogen-bond arrangements of a-ZPT and b-ZPT induce different thermal behaviour on heating. While the alpha form loses its water of crystallization in two defined steps, the beta form yields unstable dehydration products. This corresponds to two independent but concurrent dehydration mechanisms: (i) at low heating rates, a two-dimensional Johnson-Mehl-Avrami (JMA) mechanism on the (011) plane, which takes place preferentially at crystal edges and is controlled by existing crystal defects on the surfaces; (ii) at high heating rates, a two-dimensional diffusion mechanism (D2), which proceeds first on the (101) plane and then on the (110) plane. By treating the ZPT dehydration as an irreversible heterogeneous stepwise solid-state reaction, the dehydration phase diagram was established by means of a "similar end-product" protocol. It describes the possible relationships between the different hydration states and points to the existence of a transition state around 170°C (i.e. the reaction b-ZPT to a-ZPT). In addition, a targeted chemical etching procedure with dilute H3PO4 and NH3 solutions was applied in order to examine in detail the first stage of the dissolution of zinc phosphate. Alpha- and beta-hopeite show characteristic hexagonal and cubic etch pits, which widen under crystallographic control. A reliable description of the surface chemistry and topology could only be obtained by AFM and FFM experiments; at the same time, the surface defect density and distribution and the bulk dissolution rates of a-ZPT and b-ZPT could be determined in this way. In a second approach, an innovative strategy for the preparation of basic zinc phosphate pigments of the first and second generation (i.e. NaZnPO4·H2O and Na2ZnPO4(OH)·2H2O) was pursued, using on the one hand surface-modified polystyrene latices (e.g. produced by a miniemulsion polymerization process) and on the other hand dendrimers based on polyamidoamine (PAMAM). The resulting zeolite-type structure (ZPO) shows different controlled morphologies as a function of increasing sodium and water content: hexagonal, cube-shaped, heart-shaped, six-armed stars, lancet-shaped dendrites, etc. Carboxylated, fluorescently labelled latices were used for the quantitative evaluation of polymer incorporation into the crystal structure. It was found that the polymer additives not only reduced the growth rate by up to 8 µm min-1 but also appear to act as strong nucleation accelerators. Thanks to the coordination chemistry (i.e. formation of a six-centred complex L-COO-Zn-PO4*H2O with ligand exchange), two simple mechanisms for the action of latex particles in ZPO crystallization could be identified: (i) an intra-corona and (ii) an extra-corona nucleation mechanism. Furthermore, the efficiency of short-term and long-term corrosion protection by tailor-made ZPO/ZPT pigments and the controlled release of phosphate ions were assessed in two approximations of the dissolution equilibrium: (i) by a leaching method (thermodynamic process) and (ii) by a pH-pulse method (kinetic process). The dissolution-precipitation mechanism (i.e. the metamorphism) becomes particularly evident. The essential role of sodium ions in corrosion inhibition is described by a suitable composition-dependent dissolution model (ZAAM), which is consistent with the findings of the salt-spray and humidity-chamber tests. Finally, this work demonstrates the outstanding potential of functionalized (polymer) latices in controlled mineralization for the preparation of tailor-made zinc phosphate materials. Such hybrid materials are urgently needed for the development of environmentally friendly anti-corrosion pigments as well as in dental medicine.
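For reference, the model functions conventionally associated with the two dehydration mechanisms named above can be written in standard solid-state kinetics notation, with \(\alpha\) the converted fraction and \(k\) the rate constant; the exact parameterization used in the thesis may differ:

\[ \text{JMA (nucleation and two-dimensional growth):}\quad \alpha(t) = 1 - \exp\!\left[-(kt)^{n}\right],\ n \approx 2 \]

\[ \text{D2 (two-dimensional diffusion):}\quad (1-\alpha)\ln(1-\alpha) + \alpha = kt \]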
Abstract:
A series of oligo-phenylene dendronised conjugated polymers was prepared. The divergent synthetic approach adopted allowed for the facile synthesis of a range of dendronised monomers from a common intermediate, e.g. first and second generation fluorene. Only the polymerisation of the first generation and alkylarylamine substituted dendronised fluorene monomers yielded high molecular weight materials, attributed to the low solubility of the remaining dendronised monomers. The alkylarylamine substituted dendronised poly(fluorene) was incorporated into an organic light emitting diode (OLED) and exhibited an increased colour stability in air compared to other poly(fluorenes). The concept of dendronisation was extended to poly(fluorenone), a previously insoluble material. The synthesis of the first soluble poly(fluorenone) was achieved by the incorporation of oligo-phenylene dendrons at the 4-position of fluorenone. The dendronisation of fluorenone allowed a polymer with an Mn of 4.1 x 10^4 g mol^-1 to be prepared. Cyclic voltammetry of the dendronised poly(fluorenone) showed that the electron affinity of the polymer was high and that the polymer is a promising n-type material. A dimer and trimer of indenofluorene (IF) were prepared from the monobromo IF. These oligomers were investigated by two-dimensional wide-angle X-ray scattering (2D-WAXS), polarised optical microscopy (POM) and dielectric spectroscopy, and found to form highly ordered smectic phases. By attaching a perylene dye as the end-capper on the IF oligomers, molecules that exhibited efficient Förster energy transfer were obtained. Indenofluorene monoketone, a potential defect structure for IF-based OLEDs, was synthesised. The synthesis of this model defect structure allowed the long-wavelength emission in OLEDs to be identified as arising from ketone defects. The long-wavelength emission from the indenofluorene monoketone was found to be concentration dependent, which suggests that aggregate formation is occurring. An IF-linked hexa-peri-hexabenzocoronene (HBC) dimer was synthesised. The 2D-WAXS images of this HBC dimer demonstrate that the molecule exhibits intercolumnar organisation perpendicular to the extrusion direction. POM images of mixtures of the HBC dimer with an HBC with a low isotropic temperature demonstrated that the HBC dimer mixes with the isotropic HBC.
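For context on the Förster energy transfer mentioned above, the standard expression for the transfer efficiency as a function of the donor-acceptor distance \(r\) and the Förster radius \(R_0\) is

\[ E_{\mathrm{FRET}} = \frac{1}{1 + (r/R_0)^6} \]

so end-capping the indenofluorene oligomers with the perylene dye keeps the acceptor within a distance comparable to \(R_0\) of the donor, which is consistent with the efficient transfer observed.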
Abstract:
In the present work, we apply both traditional and Next Generation Sequencing (NGS) tools to investigate some of the most important adaptive traits of wolves (Canis lupus). In the first part, we analyze the variability of three Major Histocompatibility Complex (MHC) class II genes in the Italian wolf population, also studying their possible role in mating choice and their influence on fitness traits. In the second section, as part of a larger canid genome project, we will exploit NGS data to investigate the transcript-level differences between the wolf and the dog genome that can be correlated to domestication.
Abstract:
The arid regions are dominated, to a much larger degree than humid regions, by major catastrophic events. Although most of Egypt lies within the great hot desert belt, it experiences, especially in the north, some torrential rainfall which causes flash floods all over the Sinai Peninsula. Flash floods in hot deserts are characterized by high velocity and short duration with a sharp discharge peak. Large sediment loads may be carried by floods, threatening fields and settlements in the wadis and even the people living there. The extreme spottiness of rare heavy rainfall, well known to desert people everywhere, precludes any efficient forecasting. Thus, although the limitation of data still reflects pre-satellite methods, the chances of developing a warning system for floods in the desert seem remote. The relatively short flood-to-peak interval, a characteristic of desert floods, presents an additional impediment to the efficient use of warning systems. The present thesis contains an introduction and five chapters. Chapter one presents the physical settings of the study area, including the geological settings, such as the outcrop lithology of the study area and the deposits. The alluvial deposits of Wadi Moreikh were analysed using OSL dating to determine the deposits and palaeoclimatic conditions. The chapter also covers the stratigraphy and the structural geology, including the main faults and folds. In addition, it describes the present climate conditions, such as temperature, humidity, wind and evaporation. It also presents the types of soil and the natural vegetation cover of the study area, obtained using unsupervised classification of ETM+ images. Chapter two presents the morphometric analysis of the main basins and their drainage networks in the study area. It is divided into three parts: the first part covers the morphometric analysis of the drainage networks, which were extracted from two main sources, topographic maps and DEM images. Basins and drainage networks are considered major influencing factors on flash floods; most of the elements affecting the network were studied, such as stream order, bifurcation ratio, stream lengths, stream frequency, drainage density, and drainage patterns. The second part of this chapter shows the morphometric analysis of the basins, such as area, dimensions, shape and surface, whereas the third part covers the morphometric analysis of the alluvial fans which form most of the El-Qaá plain. Chapter three deals with surface runoff through the analysis of rainfall and losses. The main subject in this chapter is rainfall, which has been studied in detail, since it is the main driver of runoff. Therefore, all rainfall characteristics are considered, such as rainfall types, distribution, intensity, duration, frequency, and the relationship between rainfall and runoff. The second part of this chapter concerns the estimation of water losses by evaporation and infiltration, which together are the main losses with a direct effect on the magnitude of runoff. Finally, chapter three points out the factors influencing desert runoff and the runoff generation mechanism. Chapter four is concerned with the assessment of flood hazard; it is important to estimate runoff and to create a map of the affected areas. Therefore, the chapter consists of four main parts: the first part covers runoff estimation, the different methods to estimate runoff and its variables, such as the runoff coefficient, lag time, time of concentration, runoff volume, and frequency analysis of flash floods.
The second part presents the extreme event analysis. The third part shows the map of affected areas for every basin and the flash flood degrees; here, the DEM was used to extract the drainage networks and to determine the main streams, which are normally more dangerous than the others. Finally, part four presents the risk zone map of the total study area, which is of high interest for planning activities. Chapter five, as the last chapter, concerns flash flood hazard mitigation. It consists of three main parts: the first covers flood prediction and the methods which can be used to predict and forecast floods. The second part aims to determine the best methods which can help to mitigate flood hazard in the arid zone, and especially in the study area, whereas the third part points out the development perspective for the study area, indicating the suitable places in the El-Qaá plain for use in economic activities.
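As an example of the kind of peak-runoff estimation referred to above, one widely used formulation for small arid catchments is the rational method; the thesis reviews several estimation methods and does not necessarily rely on this one, so it is shown only for orientation:

\[ Q_p = 0.278\, C\, i\, A \]

where \(Q_p\) is the peak discharge (m³/s), \(C\) the dimensionless runoff coefficient, \(i\) the rainfall intensity (mm/h) for a storm duration equal to the time of concentration, and \(A\) the catchment area (km²).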
Abstract:
Modern food systems are characterized by high energy intensity as well as by the production of large amounts of waste, residues and food losses. This inefficiency has major consequences in terms of GHG emissions, waste disposal and natural resource depletion. The research hypothesis is that residual biomass material could contribute to the energy needs of food systems if recovered as an integrated renewable energy source (RES), leading to a significant reduction of the impacts of food systems, primarily in terms of fossil fuel consumption and GHG emissions. In order to assess these effects, a comparative life cycle assessment (LCA) has been conducted to compare two different food systems: a fossil fuel-based system and an integrated system that uses residues as RES for self-consumption. The food product under analysis has been peach nectar, from cultivation to end-of-life. The aim of this LCA is twofold. On one hand, it allows an evaluation of the energy inefficiencies related to agro-food waste. On the other hand, it illustrates how the integration of bioenergy into food systems could effectively contribute to reducing this inefficiency. Data about inputs and waste generated have been collected mainly through literature review and databases. The energy balance, GHG emissions (Global Warming Potential) and waste generation have been analyzed in order to identify the relative requirements and contributions of the different segments. An evaluation of the energy "loss" through the different categories of waste provided details about the consequences associated with its management and/or disposal. The results should provide insight into the impacts associated with inefficiencies within food systems. The comparison provides a measure of the potential reuse of wasted biomass and of the amount of energy recoverable, which could represent a first step towards the formulation of specific policies on the integration of bioenergy for self-consumption.
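As a sketch of how such a comparative GHG tally can be organized, the snippet below sums CO2-equivalent emissions as activity amounts times emission factors for a fossil fuel-based and an integrated scenario. Process names, amounts and factors are hypothetical placeholders; a real LCA would take them from the inventory databases mentioned above.

```python
def gwp(inventory, emission_factors):
    """Global Warming Potential: sum of activity amount x emission factor (kg CO2-eq)."""
    return sum(amount * emission_factors[flow] for flow, amount in inventory.items())

emission_factors = {                 # kg CO2-eq per unit, hypothetical values
    "diesel_MJ": 0.09,
    "grid_electricity_kWh": 0.45,
    "biogas_electricity_kWh": 0.05,
}

fossil_system = {"diesel_MJ": 1200.0, "grid_electricity_kWh": 300.0}
integrated_system = {"diesel_MJ": 900.0, "biogas_electricity_kWh": 300.0}

print("fossil fuel-based system:", gwp(fossil_system, emission_factors), "kg CO2-eq")
print("integrated RES system:   ", gwp(integrated_system, emission_factors), "kg CO2-eq")
```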