873 results for Stand-Alone and Grid Connected PV applications


Relevance:

100.00%

Publisher:

Abstract:

Characterizing Propionibacterium freudenreichii ssp. shermanii JS and Lactobacillus rhamnosus LC705 as a new probiotic combination: basic properties of JS and pilot in vivo assessment of the combination. Each candidate probiotic strain has to be documented with current molecular tools for proper identification, biological properties, safety aspects and health benefits in human trials if the intention is to apply the strain as a health-promoting culture in commercial applications. No generalization based on the species properties of an existing probiotic is valid for a novel strain, as strain-specific differences appear, e.g., in resistance to GI tract conditions and in health-promoting benefits (Madsen, 2006). Strain evaluation based on individual strain-specific probiotic characteristics is therefore the first key step in selecting a new probiotic candidate. The ultimate goal in selecting a probiotic strain is to provide adequate amounts of active, living cells for the application and to guarantee that the cells are physiologically strong enough to survive and remain biologically active in the adverse environmental conditions of the product and of the GI tract of the host. In vivo intervention studies are expensive and time-consuming; it is therefore not rational to test all possible candidates in vivo. Proper in vitro studies thus help to eliminate strains that are unlikely to perform well in vivo. The aims of this study were to characterize the strains Propionibacterium freudenreichii ssp. shermanii JS and Lactobacillus rhamnosus LC705, both used for decades as cheese starter cultures, for their technological and possible probiotic functionality when applied as a combined culture. The in vitro studies of Propionibacterium freudenreichii ssp. shermanii JS focused on monitoring viability during acid and bile treatments and on safety aspects such as antibiotic susceptibility and adhesion. The studies of the combination of strains JS and LC705 administered in fruit juices monitored the survival of the strains during GI transit and their effect on gut wellbeing, measured as relief of constipation. In addition, safety parameters such as side effects and some peripheral immune parameters were assessed. Separately, the combination of P. freudenreichii ssp. shermanii JS and L. rhamnosus LC705 was evaluated from a technological point of view as a bioprotective culture in fermented foods and wheat bread applications. In this study, the role of P. freudenreichii ssp. shermanii JS as a candidate probiotic culture, alone and in combination with L. rhamnosus LC705, was demonstrated. Both strains were transiently recovered in high numbers in fecal samples of healthy adults during the consumption period. Good survival through GI transit was proven for both strains, with recovery rates of 70 to 80% for strain JS and 40 to 60% for strain LC705 from a daily dose of 10 log10 CFU. Good survival was shown with the consumption of fruit juices, which do not provide the matrix protection for the cells that milk-based products do. The strain JS did not pose

Relevance:

100.00%

Publisher:

Abstract:

Coronary artery disease (CAD) is a chronic process that evolves over decades and may culminate in myocardial infarction (MI). While invasive coronary angiography (ICA) is still considered the gold standard of CAD imaging, non-invasive assessment of both the vascular anatomy and myocardial perfusion has become an intriguing alternative. In particular, computed tomography (CT) and positron emission tomography (PET) form an attractive combination for such studies. Increased radiation dose is, however, a concern. Our aim in the current thesis was to test novel CT and PET techniques, alone and in a hybrid setting, in the detection and assessment of CAD in clinical patients. Along with diagnostic accuracy, methods for reducing the radiation dose were an important target. The study investigating the coronary arteries of patients with atrial fibrillation (AF) showed that CAD may be an important etiology of AF, because a high prevalence of CAD was demonstrated among AF patients. In patients with suspected CAD, we demonstrated that a sequential, prospectively ECG-triggered CT technique was applicable to nearly 9/10 clinical patients, with a radiation dose over 60% lower than with spiral CT. To detect the functional significance of obstructive CAD, novel software for perfusion quantification, Carimas™, showed high reproducibility with 15O-labelled water in PET, supporting its feasibility and good clinical accuracy. In a larger cohort of 107 patients with moderate (30-70%) pre-test probability of CAD, hybrid PET/CT was shown to be a powerful diagnostic method in the assessment of CAD, with diagnostic accuracy comparable to that of invasive angiography and fractional flow reserve (FFR) measurements. A hybrid study may be performed with a reasonable radiation dose in the vast majority of cases, improving on the performance of stand-alone PET and CT angiography, particularly when absolute quantification of perfusion is employed. These results can be applied in clinical practice and will be useful for daily clinical diagnosis of CAD.

Relevance:

100.00%

Publisher:

Abstract:

This thesis focuses on integration in project business, i.e. how project-based companies organize their product and process structures when they deliver industrial solutions to their customers. The customers that invest in these solutions run their businesses in different geographical, political and economic environments, which the supplier should acknowledge when providing solutions comprising larger and more complex scopes than previously supplied to these customers. This means that suppliers are increasing their supply range by taking over some of the activities in the value chain that have traditionally been handled by the customer. In order to provide functioning solutions, involving more engineering hours, technical equipment and a wider project network, a change of mindset is needed in order to carry out and take on the responsibility that these new approaches bring. For the supplier it is important to be able to integrate technical products, systems and services, but the supplier also needs the capability to integrate the cross-functional organizations and departments in the project network, the knowledge and information between and within these organizations and departments, and inputs from the customer, into the product and process structures during the lifecycle of the project under development. Hence, the main objective of this thesis is to explore the challenges of integration that industrial projects meet and, based on that, to suggest a concept for managing integration in project business by making use of integration mechanisms. Integration is considered the essential process for accomplishing an industrial project, whereas the accomplishment of the industrial project is considered the result of the integration.
The thesis consists of an extended summary and four papers based on three studies, in which integration mechanisms for value creation in industrial project networks and the management of integration in project business have been explored. The research is based on an inductive approach in which, in particular, the design, commissioning and operations functions of industrial projects have been studied, addressing entire project lifecycles. The studies have been conducted in the shipbuilding and power generation industries, where the scopes of supply consist of stand-alone equipment, equipment and engineering, and turnkey solutions. These industrial solutions include demanding efforts in engineering and organization. Addressing the calls for more studies on the evolving value chains of integrated solutions, mechanisms for inter- and intra-organizational integration and subsequent value creation in project networks have been explored. The research results in thirteen integration mechanisms, and a typology for integration is proposed. Managing integration consists of integrating the project network (the supplier and the sub-suppliers) and the customer (the customer's business purpose, operations environment and the end-user) into the project by making use of integration mechanisms. The findings bring new insight into research on industrial project business by proposing the integration of technology- and engineering-related elements with elements related to customer-oriented business performance in contemporary project environments. Thirteen mechanisms for combining products and the processes needed to deliver projects are described and categorized according to the impact that they have on the management of knowledge and information. These mechanisms relate directly to the performance of the supplier, and consequently to the functioning of the solution that the project provides.
This thesis offers ways to promote integration of knowledge and information during the lifecycle of industrial projects, enhancing the development towards innovative solutions in project business.

Relevance:

100.00%

Publisher:

Abstract:

Communications play a key role in modern smart grids: the new functionalities that make a grid 'smart' require the communication network to function properly. Data transmission between the intelligent electronic devices (IEDs) in the rectifier and the customer-end inverters (CEIs) used for power conversion is also required in the smart grid concept of the low-voltage direct current (LVDC) distribution network. Smart grid applications such as smart metering, demand side management (DSM), and communications-based grid protection are all installed in the LVDC system. Thus, besides a remote connection to the databases of the grid operators, a local communication network in the LVDC network is needed. One solution for implementing the communication medium in power distribution grids is power line communication (PLC): there are power cables in the distribution grids, and hence they may be applied as a communication channel for distribution-level data. This doctoral thesis proposes an IP-based high-frequency (HF) band PLC data transmission concept for the LVDC network. A general method to implement the Ethernet-based PLC concept between the public distribution rectifier and the customer-end inverters in the LVDC grid is introduced. Low-voltage cables are studied as the communication channel in the frequency band of 100 kHz-30 MHz. The communication channel characteristics and the noise in the channel are described. All individual components in the channel are presented in detail, and a channel model, comprising models for each channel component, is developed and verified by measurements. The channel noise is also studied by measurements. Theoretical signal-to-noise ratio (SNR) and channel capacity analyses and practical data transmission tests are carried out to evaluate the applicability of the PLC concept against the requirements set by the smart grid applications in the LVDC system. The main results concerning the applicability of the PLC concept and its limitations are presented, and suggestions for future research are proposed.
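The channel capacity analyses mentioned above rest on the Shannon-Hartley bound. A minimal sketch of that bound follows; the bandwidth and SNR values are illustrative assumptions, not the thesis's measured figures:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley upper bound on error-free bit rate: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)  # convert dB to linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative only: a 1 MHz sub-band of the 100 kHz-30 MHz PLC band
# at an assumed 20 dB SNR.
cap = shannon_capacity(1e6, 20.0)
print(f"{cap / 1e6:.2f} Mbit/s")  # ~6.66 Mbit/s
```

The bound is what the measured SNR curves are compared against; actual PLC throughput over noisy low-voltage cables falls well below it.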

Relevance:

100.00%

Publisher:

Abstract:

Electrokinetics has emerged as a potential technique for in situ soil remediation and is especially attractive because of its ability to work in low-permeability soil. In electrokinetic remediation, non-polar contaminants like most organic compounds are transported primarily by electroosmosis, so the process is effective only if the contaminants are soluble in the pore fluid. Therefore, enhancement is needed to improve the mobility of these hydrophobic compounds, which tend to adsorb strongly to the soil. On the other hand, as a novel and rapidly growing field, the applications of ultrasound in environmental technology hold a promising future. Compared to conventional methods, ultrasonication can bring several benefits such as environmental friendliness (no toxic chemicals are used or produced), low cost, and compact instrumentation; it can also be applied on site. Ultrasonic energy applied to contaminated soils can increase the desorption and mobilization of contaminants and the porosity and permeability of the soil through the development of cavitation. This research investigated the coupling effect of combining these two techniques, electrokinetics and ultrasonication, in removing persistent organic pollutants (POPs) from contaminated low-permeability clayey soil (with kaolin as a model medium). A preliminary study checked the feasibility of ultrasonic treatment of kaolin highly contaminated with POPs. Laboratory experiments were conducted under various conditions (moisture, frequency, power, duration, initial concentration) to examine the effects of these parameters on the treatment process. Experimental results showed that ultrasonication has the potential to remove POPs, although the removal efficiencies were not high for short treatment times. The study also suggested intermittent ultrasonication over a longer time as an effective means of increasing the removal efficiencies.
Experiments were then conducted to compare the performance of the electrokinetic process alone against electrokinetic processes combined with surfactant addition and, mainly, with ultrasonication, both in designed cylinders (with filter cloth separating the central part from the electrolyte compartments) and in open pans. Combined electrokinetic and ultrasonic treatment did show a positive coupling effect compared to each single process alone, though the level of enhancement was not very significant. The assistance of ultrasound in electrokinetic remediation can help remove POPs from clayey soil by improving the mobility of hydrophobic organic compounds and degrading these contaminants through pyrolysis and oxidation. Ultrasonication also sustains a higher current and increases electroosmotic flow. The initial contaminant concentration is an essential input parameter that can affect the removal effectiveness.
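The electroosmotic transport that carries the non-polar contaminants is conventionally estimated with the Helmholtz-Smoluchowski relation; the equation below is a standard textbook result quoted for context, not a derivation from this thesis:

```latex
u_{eo} = -\frac{\varepsilon_0 \varepsilon_r \, \zeta}{\mu} \, E
```

where $u_{eo}$ is the electroosmotic velocity, $\zeta$ the zeta potential of the soil surface, $\varepsilon_0 \varepsilon_r$ the permittivity and $\mu$ the viscosity of the pore fluid, and $E$ the applied electric field. The flow scales linearly with $E$, which is consistent with the observation that sustaining a higher current under ultrasonication also increases electroosmotic flow.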

Relevance:

100.00%

Publisher:

Abstract:

To determine the effects of combined therapy with gliclazide and bedtime insulin on glycemic control and C-peptide secretion, we studied 25 patients with type 2 diabetes and secondary sulfonylurea failure, aged 56.8 ± 8.3 years, with a duration of diabetes of 10.6 ± 6.6 years, fasting plasma glucose of 277.3 ± 64.6 mg/dl and a body mass index of 27.4 ± 4.8 kg/m². Patients were submitted to three therapeutic regimens lasting 2 months each: 320 mg gliclazide (phase 1), 320 mg gliclazide plus bedtime NPH insulin (phase 2), and insulin alone (phase 3). At the end of each period, glycemic and C-peptide curves in response to a mixed meal were determined. During combined therapy, there was a decrease in all glycemic curve values (P<0.01). Twelve patients (48%) reached fasting plasma glucose <140 mg/dl, with a significant weight gain from 64.8 kg (43.1-98.8) to 66.7 kg (42.8-101.4) (P<0.05), and with no increase in C-peptide secretion or decrease in HbA1. The C-peptide glucose score (C-peptide/glucose x 100) increased from 0.9 (0.2-2.1) to 1.3 (0.2-4.7) during combined therapy (P<0.01). Despite a 50% increase in insulin doses in phase 3 (12 U (9-30) vs 18 U (11-60); P<0.01), only 3 patients who had responded to combined therapy maintained fasting plasma glucose <140 mg/dl (P<0.02). A tendency toward a higher absolute increase in C-peptide (0.99 (0.15-2.5) vs 0.6 (0-2.15); P = 0.08) and C-peptide incremental area (2.47 (0.22-6.2) vs 1.2 (0-3.35); P = 0.07) was observed among responders. We conclude that combined therapy resulted in a better glucose response to a mixed meal than insulin alone and should be tried in type 2 diabetic patients before starting insulin monotherapy, despite difficulties in predicting the response.
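The C-peptide glucose score used in the study is a simple ratio; a minimal sketch, where the input values are illustrative rather than patient data from the trial:

```python
def c_peptide_glucose_score(c_peptide: float, glucose: float) -> float:
    """Score as defined in the study: (C-peptide / glucose) x 100."""
    return c_peptide / glucose * 100

# Illustrative values: a C-peptide of 2.5 at a glucose of 277 mg/dl
# yields a score near the reported pre-treatment median of 0.9.
print(round(c_peptide_glucose_score(2.5, 277), 2))  # 0.9
```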

Relevance:

100.00%

Publisher:

Abstract:

I will argue that the doctrine of eternal recurrence of the same no better interprets cosmology than pink elephants interpret zoology. I will also argue that the eternal-return-of-the-same doctrine as what Magnus calls an "existential imperative" is without possibility of application and thus futile. To facilitate those arguments, the validity of the doctrine of the eternal recurrence of the same will be tested under distinct rubrics. Although each rubric will stand alone, one per chapter, as an evaluation of some specific aspect of eternal recurrence, the rubric sequence has been selected to accommodate the identification of what I shall be calling logic abridgments. The conclusions to be extracted from each rubric are grouped under the heading CONCLUSION and appear immediately following rubric ten. If, at the end of a rubric, a reader is inclined to wonder which rubric or topic is next, and why, the answer can be found at the top of the following page: the question is usually answered in the very first sentence, but always in the first paragraph. The first rubric has been placed first by chronological entitlement, in that it deals with the evolution of the idea of eternal recurrence from the time of the ancient Greeks to Nietzsche's August 1881 inspiration. This much-recommended technique is also known as starting at the beginning. Rubric 1 also deals with 20th-century philosophers' assessments of the relationship between Nietzsche and ancient Greek thought. The only experience of E-R, Zarathustra's mountain vision, is second only because it sets the scene alluded to in the following rubrics. The third rubric explores Nietzsche's evaluation of rationality so that his thought processes will be understood appropriately. The actual mechanism of E-R is tested in rubric four. The scientific proof Nietzsche assembled in support of E-R is assessed by contemporary philosophers in rubric five.
E-R's function as an ethical imperative is debated in rubrics six and seven. The extent to which E-R fulfills its purpose in overcoming nihilism is measured against the comfort assured by major world religions in rubric eight. Whether E-R also serves as a redemption for revenge is questioned in rubric nine. Rubric ten assures that E-R refers to the return of the identically same and not merely the similar. In addition to the assemblage and evaluation of all ten rubrics, each rubric ends with a brief recapitulation of its principal points, which concludes the chapter. In this essay I will assess the theoretical conditions under which the doctrine cannot be applicable and will show what contradictions and inconsistencies follow if the doctrine is taken to be operable. Harold Alderman in his book Nietzsche's Gift wrote that the "doctrine of eternal recurrence gives us a problem not in Platonic cosmology, but in Socratic self-reflection" (Alderman, p. 84). I will illustrate that the recurrence doctrine's cosmogony is unworkable and that, if it were workable, it would negate self-reflection on the grounds that self-reflection cannot find its cause in eternal recurrence of the same. Thus, when the cosmology is shown to be impossible, any expected ensuing results or benefits will also be rendered impossible. The so-called "heaviest burden" will be exposed as complex, engrossing "what if" speculations deserving no linkage to reality. To identify abridgments of logic, contradictions and inconsistencies in Nietzsche's doctrine of eternal recurrence of the same, I will examine the subject under the following schedule. In Chapter 1 the ancient origins of recurrence theories will be introduced. This chapter is intended to establish the boundaries within which the subsequent chapters, except Chapter 10, will be confined. Chapter 2, Zarathustra's vision of E-R, assesses the sections of Thus Spoke Zarathustra in which the phenomenon of recurrence of the same is reported.
Nihilism as a psychological difficulty is introduced in this rubric, but that subject will be studied in detail in Chapter 8. In Chapter 2 the symbols of eternal recurrence of the same will be considered. Whether the recurrence image should be of a closed ring or of a coil will be of significance in many sections of my essay. I will argue that neither symbolic configuration can accommodate Nietzsche's supposed intention. Chapter 3 defends the description of E-R given by Zarathustra. Chapter 4, the cosmological mechanics of E-R, speculates on the seriousness with which Nietzsche might have intended the doctrine of eternal recurrence to be taken. My essay reports, and then assesses, the argument of those who suppose the doctrine to have been merely exploratory musings by Nietzsche on cosmological hypotheses. The cosmogony of E-R is examined. In Chapter 5, cosmological proofs tested, the proofs for Nietzsche's doctrine of return of the same are evaluated. This chapter features the position taken by Martin Heidegger. My essay suggests that while Heidegger's argument that recurrence of the same is a genuine cosmic agenda is admirable, it is not at all persuasive. Chapter 6, E-R as an ethical imperative, is in essence the reporting of a debate between two scholars regarding the possibility of an imperative in the doctrine of recurrence. Their debate polarizes the arguments I intend to develop. Chapter 7, does E-R of the same preclude alteration of attitudes, is a continuation of the debate presented in Chapter 6, with the focus shifted from the cosmological to the psychological aspects of eternal recurrence of the same. Chapter 8, Can E-R Overcome Nihilism?, is divided into two parts. In the first, nihilism as it applies to Nietzsche's theory is discussed. In part 2, the broader consequences, sources and definitions of nihilism are outlined. My essay argues that Nietzsche's doctrine is more nihilistic than are the world's major religions.
Chapter 9, Is E-R a redemption for revenge?, examines the suggestion extracted from Thus Spoke Zarathustra that the doctrine of eternal recurrence is intended, among other purposes, as a redemption for mankind from the destructiveness of revenge. Chapter 10, E-R of the similar refuted, analyses a position that an element of chance can influence the doctrine of recurrence. This view appears to allow, not for recurrence of the same, but recurrence of the similar. A summary will recount briefly the various significant logic abridgments, contradictions, and inconsistencies associated with Nietzsche's doctrine of eternal recurrence of the same. In the 'conclusion' section of my essay my own opinions and observations will be assembled from the body of the essay.

Relevance:

100.00%

Publisher:

Abstract:

This case study traces the evolution of library assignments for biological science students: from paper-based workbooks in a blended (hands-on) workshop, to blended-learning workshops using online assignments, to stand-alone online active learning modules with no face-to-face instruction. As the assignments evolved to adapt to online learning, supporting materials in the form of PDFs (portable document format), screen captures and screencasting were embedded into the questions as teaching moments to replace face-to-face instruction. Many aspects of the evolution of the assignment were based on student feedback from evaluations, input from senior lab demonstrators and teaching assistants, and statistical analysis of the students' performance on the assignment. Advantages and disadvantages of paper-based and online assignments are discussed. An important factor for successful online learning may be the ability to get assistance.

Relevance:

100.00%

Publisher:

Abstract:

We propose two axiomatic theories of cost sharing with the common premise that agents demand comparable (though perhaps different) commodities and are responsible for their own demand. Under partial responsibility the agents are not responsible for the asymmetries of the cost function: two agents consuming the same amount of output always pay the same price; this holds true under full responsibility only if the cost function is symmetric in all individual demands. If the cost function is additively separable, each agent pays her stand-alone cost under full responsibility; this holds true under partial responsibility only if, in addition, the cost function is symmetric. By generalizing Moulin and Shenker's (1999) Distributivity axiom to cost-sharing methods for heterogeneous goods, we identify in each of our two theories a different serial method. The subsidy-free serial method (Moulin, 1995) is essentially the only distributive method meeting Ranking and Dummy. The cross-subsidizing serial method (Sprumont, 1998) is the only distributive method satisfying Separability and Strong Ranking. Finally, we propose an alternative characterization of the latter method based on a strengthening of Distributivity.
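For the homogeneous-good special case, the serial rule that these methods generalize is Moulin and Shenker's serial cost sharing. A minimal sketch of that base rule (an illustration only, not of the heterogeneous-goods methods characterized in the paper):

```python
def serial_cost_shares(demands, cost):
    """Serial cost sharing for one homogeneous good (Moulin-Shenker).

    With demands sorted q1 <= ... <= qn, agent i pays
    x_i = x_{i-1} + (C(s_i) - C(s_{i-1})) / (n - i + 1),
    where s_i = q_1 + ... + q_{i-1} + (n - i + 1) * q_i.
    """
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    q = [demands[i] for i in order]
    n = len(q)
    shares_sorted = []
    prev_s, prev_x = 0.0, 0.0
    for i in range(n):  # i is 0-based rank in the sorted demand profile
        s_i = sum(q[:i]) + (n - i) * q[i]
        x_i = prev_x + (cost(s_i) - cost(prev_s)) / (n - i)
        shares_sorted.append(x_i)
        prev_s, prev_x = s_i, x_i
    shares = [0.0] * n  # map shares back to the original agent order
    for rank, agent in enumerate(order):
        shares[agent] = shares_sorted[rank]
    return shares

# Convex cost q^2, demands (1, 2, 3): shares are (3, 11, 22), summing to C(6) = 36.
print(serial_cost_shares([1, 2, 3], lambda q: q * q))
```

With a linear cost each agent pays exactly her own demand; with a convex cost, larger demanders bear the marginal cost their extra demand creates, which is the subsidy-freeness idea behind the serial method.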

Relevance:

100.00%

Publisher:

Abstract:

A deeper understanding and better control of the self-assembly of diblock copolymers and their complexes at the air/water interface enable the controlled formation of nanostructures with known properties as an alternative to nanolithography. In this thesis, monolayers obtained by the Langmuir and Langmuir-Blodgett (LB) techniques with the diblock copolymer polystyrene-poly(4-vinyl pyridine) (PS-PVP), alone or hydrogen-bonded to small molecules [in particular 3-n-pentadecylphenol (PDP)], were studied. An important part of our research was devoted to the study of an atypical self-assembled monolayer dubbed the nanostripe network. LB monolayers composed of nanostripes have previously been reported in the literature, but they often coexist with other morphologies, which makes them unusable for potential applications. We determined the molecular parameters and experimental conditions that control this morphology, making it highly reproducible. We also proposed an original mechanism for the formation of this morphology. In addition, we showed that the use of high-boiling-point solvents, not commonly used for the preparation of Langmuir films, can improve the ordering of the nanostripes. By studying a wide range of PS-PVP copolymers with different PS/PVP ratios and molar masses, with or without PDP, we established how the main morphology types (planar, stripes, nodules) depend on the composition and concentration of the solutions. These observations led to a discussion of the formation mechanisms of the morphologies, including kinetics, molecular assembly and the effect of dewetting.
We also demonstrated for the first time that the plateau in the isotherm of PS-PVP/PDP with nodular morphology is related to an order-order transition of the nodules (hexagonal-tetragonal) that occurs simultaneously with the reorientation of the PDP, both aspects being clearly observed by AFM. These studies also open the way to the use of ultrathin PS-PVP/PDP films as masks. The ability to produce well-controlled nanostructured films on different substrates was demonstrated, and the stability of the films was verified. Removal of the small molecule from the nanostructures revealed an internal structure to be explored in future studies.

Relevance:

100.00%

Publisher:

Abstract:

One of the fastest-expanding areas of computer exploitation is embedded systems, whose prime function is not computing, but which nevertheless require information processing in order to carry out their prime function. Advances in hardware technology have made multi-microprocessor systems a viable alternative to uniprocessor systems in many embedded application areas. This thesis reports the results of investigations carried out on multi-microprocessors oriented towards embedded applications, with a view to enhancing throughput and reliability. An ideal controller for multiprocessor operation is developed which smoothens the sharing of routines and enables more powerful and efficient code/data interchange. Results of performance evaluation are appended. A typical application scenario is presented, which calls for classifying tasks based on characteristic features that were identified. The different classes are introduced along with a partitioned storage scheme. A theoretical analysis is also given. A review of schemes available for reducing disc access time is carried out and a new scheme presented; this is found to speed up database transactions in embedded systems. The significance of software maintenance and adaptation in such applications is highlighted. A novel scheme of providing a maintenance folio for system firmware is presented, along with experimental results. Processing reliability can be enhanced if a facility exists to check whether a particular instruction in a stream is appropriate. Estimating the likelihood of occurrence of a particular instruction is more tractable when the number of instructions in the set is small. A new organisation is derived to form the basis for further work. Some early results that would help steer the course of the work are presented.

Relevance:

100.00%

Publisher:

Abstract:

The present work deals with the preparation and characterization of high-k aluminum oxide thin films by atomic layer deposition for gate dielectric applications. The ever-increasing demand for functionality and speed in semiconductor applications requires enhanced performance, which is achieved by the continuous miniaturization of CMOS dimensions. Because of this miniaturization, several parameters, such as the dielectric thickness, come within reach of their physical limits. As the required oxide thickness approaches the sub-1 nm range, SiO2 becomes unsuitable as a gate dielectric because its limited physical thickness results in excessive leakage current through the gate stack, affecting the long-term reliability of the device. This leakage issue was solved at the 45 nm technology node by the integration of high-k gate dielectrics, as their higher k-value allows a physically thicker layer while targeting the same capacitance and Equivalent Oxide Thickness (EOT). Moreover, Intel announced that Atomic Layer Deposition (ALD) would be applied to grow these materials on the Si substrate. ALD is based on the sequential use of self-limiting surface reactions of a metallic and an oxidizing precursor. This self-limiting feature allows control of material growth and properties at the atomic level, which makes ALD well suited for the deposition of highly uniform and conformal layers in CMOS devices, even when these have challenging 3D topologies with high aspect ratios. ALD has acquired the status of a state-of-the-art and most preferred deposition technique for producing nanolayers of various materials of technological importance. This technique can be adapted to different situations where precision in thickness and perfection in structure are required, especially in the microelectronics scenario.
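The EOT argument above reduces to a one-line scaling: a high-k layer is electrically equivalent to a thinner SiO2 layer in proportion to the ratio of their k-values. A minimal sketch, where the Al2O3 k-value of 9 is a typical literature figure assumed for illustration:

```python
def equivalent_oxide_thickness(t_highk_nm: float, k_highk: float,
                               k_sio2: float = 3.9) -> float:
    """EOT: the SiO2 thickness giving the same capacitance as the high-k layer."""
    return t_highk_nm * k_sio2 / k_highk

# Illustrative: a 4 nm Al2O3 film (k ~ 9 assumed) behaves like ~1.73 nm of SiO2,
# while being physically thick enough to suppress tunneling leakage.
print(f"{equivalent_oxide_thickness(4.0, 9.0):.2f} nm")  # 1.73 nm
```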

Relevance:

100.00%

Publisher:

Abstract:

A nanocomposite is a multiphase solid material in which one of the phases has one, two or three dimensions of less than 100 nanometers (nm), or which has structures with nano-scale repeat distances between the different phases that make up the material. In the broadest sense this definition can include porous media, colloids, gels and copolymers, but it is more usually taken to mean the solid combination of a bulk matrix and nano-dimensional phase(s) differing in properties due to dissimilarities in structure and chemistry. The mechanical, electrical, thermal, optical, electrochemical and catalytic properties of the nanocomposite will differ markedly from those of the component materials. Size limits for these effects have been proposed: <5 nm for catalytic activity, <20 nm for making a hard magnetic material soft, <50 nm for refractive index changes, and <100 nm for achieving superparamagnetism, mechanical strengthening or restriction of matrix dislocation movement. Conducting polymers have attracted much attention due to their high electrical conductivity, ease of preparation, good environmental stability and wide variety of applications in light-emitting devices, biosensors, chemical sensors, separation membranes and electronic devices. The most widely studied conducting polymers are polypyrrole, polyaniline, polythiophene, etc. Conducting polymers provide tremendous scope for tuning their electrical conductivity from the semiconducting to the metallic region by way of doping, and they are organic electrochromic materials with chemically active surfaces. But they are chemically very sensitive and have poor mechanical properties, and thus pose a processability problem. Nanomaterials offer more sites for surface reactivity, possess good mechanical properties and are good dispersants too. Thus nanocomposites formed by combining conducting polymers and inorganic oxide nanoparticles possess the good properties of both constituents, which enhances their utility.
The properties of this type of nanocomposite depend strongly on the concentration of nanomaterial added. A conducting polymer composite is a suitable combination of a conducting polymer with one or more inorganic nanoparticle phases in which their desirable properties are combined successfully. Composites of core-shell metal oxide particles and conducting polymers combine the electrical properties of the polymer shell with the magnetic, optical, electrical or catalytic characteristics of the metal oxide core, which could greatly widen their applicability in catalysis, electronics and optics. Moreover, nanocomposite materials composed of conducting polymers and oxides have opened further fields of application such as drug delivery, conductive paints, rechargeable batteries, toners in photocopying and smart windows. The present work is mainly focused on the synthesis, characterization and application studies of conducting polymer modified TiO2 nanocomposites. The conclusions of the present work are outlined below. Mesoporous TiO2 was prepared by the surfactant P123 assisted hydrothermal synthesis route, and conducting polymer modified TiO2 nanocomposites were also prepared via the same technique. All the prepared systems show XRD patterns corresponding to the anatase phase of TiO2, which means that no phase change occurs even after conducting polymer modification. Raman spectroscopy gives supporting evidence for the XRD results and also confirms the incorporation of the polymer. The mesoporous nature and surface area of the prepared samples were analysed by N2 adsorption-desorption studies, and the mesoporous ordering was confirmed by low-angle XRD measurements. The morphology of the prepared samples was obtained from both SEM and TEM.
The elemental analysis of the samples was performed by EDX. Hybrid composite formation is confirmed by FT-IR spectroscopy and X-ray photoelectron spectroscopy. All the prepared samples were used for the photocatalytic degradation of dyes, antibiotics, endocrine disruptors and some other organic pollutants, and photocatalytic antibacterial activity studies were also performed with the prepared systems. Polyaniline modified TiO2 nanocomposite systems were found to have good antibacterial activity. Thermal diffusivity studies of the polyaniline modified systems were carried out using the thermal lens technique; it is observed that as the amount of polyaniline in the composite increases, the thermal diffusivity also increases, so the prepared systems can serve as excellent coolants for various industrial purposes. Nonlinear optical properties (third-order nonlinearity) of the polyaniline modified systems were studied using the Z-scan technique; the prepared materials can be used for optical limiting applications. Lasing studies of the polyaniline modified TiO2 systems were carried out, and they reveal that the TiO2-polyaniline composite is a potential dye laser gain medium.
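Photocatalytic dye degradation of the kind described above is conventionally analysed with pseudo-first-order (Langmuir-Hinshelwood) kinetics, ln(C0/Ct) = k_app·t. The sketch below fits an apparent rate constant from concentration-time data; the numbers are invented illustration values, not results from this thesis:

```python
import math

# Hypothetical C/C0 readings during irradiation (e.g. from dye absorbance).
times_min = [0, 15, 30, 45, 60]
conc = [1.00, 0.74, 0.55, 0.41, 0.30]

# Linearize: y = ln(C0/Ct) should grow linearly with time if the
# degradation follows pseudo-first-order kinetics.
y = [math.log(conc[0] / c) for c in conc]

# Least-squares slope through the origin gives the apparent rate constant.
k_app = (sum(t * yi for t, yi in zip(times_min, y))
         / sum(t * t for t in times_min))
half_life = math.log(2) / k_app   # t1/2 = ln 2 / k_app

print(f"k_app = {k_app:.4f} 1/min, t1/2 = {half_life:.1f} min")
```

A plot of y against time that is visibly straight is the usual check that the pseudo-first-order assumption holds before quoting k_app.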

Relevance:

100.00%

Publisher:

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land-use patterns. An essential methodology for studying and quantifying such interactions is provided by land-use models. By applying land-use models, it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition. On the regional scale in particular, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation. These features enable the efficient development, testing and use of integrated land-use models. On the system side, SITE provides generic data structures (grid, grid cells, attributes etc.) and takes over responsibility for their administration. By means of a scripting language (Python) that has been extended with language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting language interpreter is embedded in SITE.
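The generic grid/cell/attribute data structures described above might look roughly like the following minimal Python sketch. The class and method names here are hypothetical illustrations in the spirit of the description, not SITE's actual API:

```python
# Minimal sketch of a raster grid whose cells carry arbitrary named
# attributes, as in the SITE data model described above.
# Names are hypothetical, not SITE's real interface.
class Cell:
    def __init__(self, row, col):
        self.row, self.col = row, col
        self.attributes = {}            # e.g. {"landuse": "forest"}

class Grid:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.cells = [[Cell(r, c) for c in range(cols)]
                      for r in range(rows)]

    def set_attr(self, name, value):
        """Assign one attribute value to every cell."""
        for row in self.cells:
            for cell in row:
                cell.attributes[name] = value

    def count(self, name, value):
        """Count cells whose attribute `name` equals `value`."""
        return sum(cell.attributes.get(name) == value
                   for row in self.cells for cell in row)

# A modeling script could then manipulate the grid directly:
grid = Grid(3, 3)
grid.set_attr("landuse", "forest")
grid.cells[1][1].attributes["landuse"] = "agriculture"
print(grid.count("landuse", "forest"))   # 8 of 9 cells remain forest
```

Exposing such structures through an embedded interpreter is what lets model logic stay in scripts while the framework administers storage, as the abstract describes.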
The integration of sub-models can be achieved via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was laid on expandability, maintainability and usability. Along with the modeling framework, a land-use model for the analysis of the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period from 1981 to 2002. In addition, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable even though increased agricultural use implied economic improvements and higher farmers' incomes. Because of the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component. The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map-comparison algorithm capable of comparing a simulation result to a reference map.
Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as the objective function. The calibration period ranged from 1981 to 2002, and respective reference land-use maps were compiled for this period. It could be shown that efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge of the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
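The figure-of-merit measure used above as a calibration objective can be sketched as follows. This is a simplified binary-change form, FoM = hits / (hits + misses + false alarms), judged between an initial map, an observed (reference) final map, and a simulated final map; the full measure also penalizes change predicted in the wrong category, which is omitted here, and the toy maps are invented:

```python
def figure_of_merit(initial, reference, simulated):
    """Simplified figure of merit for land-use change prediction.

    hits          = observed change simulated as the correct new class
    misses        = observed change simulated as persistence
    false alarms  = observed persistence simulated as change
    """
    hits = misses = false_alarms = 0
    for ini, ref, sim in zip(initial, reference, simulated):
        observed_change = ref != ini
        simulated_change = sim != ini
        if observed_change and simulated_change and ref == sim:
            hits += 1
        elif observed_change and not simulated_change:
            misses += 1
        elif not observed_change and simulated_change:
            false_alarms += 1
    denom = hits + misses + false_alarms
    return hits / denom if denom else 1.0   # no change anywhere: perfect score

# Toy flattened land-use maps (hypothetical classes):
initial   = ["forest", "forest", "forest", "agri", "forest"]
reference = ["forest", "agri",   "forest", "agri", "agri"]
simulated = ["forest", "agri",   "agri",   "agri", "forest"]
print(figure_of_merit(initial, reference, simulated))
```

A genetic algorithm, as used for the STORMA model, would repeatedly rerun the simulation with mutated parameter sets and keep those that raise this score against the compiled reference maps.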