890 results for Non-autonomous Schrödinger-Poisson systems
Abstract:
This paper presents a study on the dynamics of the rattling problem in gearboxes under non-ideal excitation. The subject has been analyzed by a number of authors, such as Karagiannis and Pfeiffer (1991), for the ideal excitation case. An interesting model of the same problem by Moon (1992) has recently been used by Souza and Caldas (1999) to detect chaotic behavior. We consider two spur gears with different diameters and gaps between the teeth. The motion of one gear is assumed to be given, while the motion of the other is governed by its own dynamics. In the ideal case, the driving wheel is supposed to undergo a sinusoidal motion with given constant amplitude and frequency. In this paper, we consider the motion to be a function of the system response, and a limited energy source is adopted. Thus an extra degree of freedom is introduced into the problem. The equations of motion are obtained via a Lagrangian approach with some assumed characteristic torque curves. Extensive numerical integration is then used to detect some interesting geometrical aspects of regular and irregular motions of the system response.
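As a rough illustration of this kind of non-ideal rattling model, the sketch below numerically integrates a backlash oscillator whose driving wheel is powered by an assumed linear torque curve, so the excitation depends on the system response. The contact law, the torque curve and all parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustrative only, not from the paper)
J_d, J_m = 1.0e-3, 5.0e-3   # inertias of driven gear and driving wheel/motor
k, c     = 1.0e4, 0.05      # contact stiffness and viscous damping on the driven gear
delta    = 0.5e-3           # half of the backlash (gap between the teeth)
a, b     = 2.0, 0.5         # assumed linear motor torque curve T = a - b*omega

def contact_torque(rel):
    """Restoring torque acts only when the teeth are in contact."""
    if rel > delta:
        return -k * (rel - delta)
    if rel < -delta:
        return -k * (rel + delta)
    return 0.0

def rhs(t, y):
    theta, omega, phi, dphi = y          # driven gear and driving wheel states
    rel = theta - phi                     # relative angle across the gap
    Tc = contact_torque(rel)
    domega = (Tc - c * omega) / J_d
    # Limited energy source: the motor torque depends on its own speed,
    # and the mesh reaction (-Tc) feeds back into the driving wheel.
    ddphi = (a - b * dphi - Tc) / J_m
    return [omega, domega, dphi, ddphi]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-4)
print("final driven-gear speed:", sol.y[1, -1])
```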
Abstract:
This paper analyzes the local dynamical behavior of a slewing flexible structure considering nonlinear curvature. The dynamics of the original (nonlinear) governing equations of motion are reduced to the center manifold in the neighborhood of an equilibrium solution in order to study the local stability of the system. At this critical point, a Hopf bifurcation occurs. In this region, one can find values of the control parameter (the structural damping coefficient) for which the system is unstable and values for which the system stability is assured (periodic motion). This local analysis of the system reduced to the center manifold establishes the stable/unstable behavior of the original system around a known solution.
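The essence of locating such a Hopf point can be illustrated by sweeping a damping parameter of a linearized model and watching where the real part of a Jacobian eigenvalue changes sign. The 4x4 matrix below is a hypothetical stand-in, not the thesis' slewing-structure equations.

```python
import numpy as np

# Hypothetical linearized model: Jacobian as a function of the structural
# damping coefficient c (illustrative values, not the thesis model).
def jacobian(c, k=1.0, g=0.4):
    # Two coupled oscillators in which damping stabilizes an otherwise
    # self-excited mode; the Hopf point is where Re(lambda) crosses zero.
    return np.array([[0.0,  1.0,  0.0,  0.0],
                     [-k,   -c,   g,    0.0],
                     [0.0,  0.0,  0.0,  1.0],
                     [g,    0.0,  -k,   0.05 - c]])

for c in np.linspace(0.0, 0.2, 9):
    lam = np.linalg.eigvals(jacobian(c))
    print(f"c = {c:.3f}  max Re(lambda) = {max(lam.real):+.4f}")
# A sign change of max Re(lambda), with the crossing eigenvalues remaining
# complex, marks the Hopf bifurcation: below the critical damping the
# equilibrium is unstable, above it the oscillatory response settles.
```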
Abstract:
The three alpha2-adrenoceptor (alpha2-AR) subtypes belong to the G protein-coupled receptor superfamily and represent potential drug targets. These receptors have many vital physiological functions, but their actions are complex and often oppose each other. Current research is therefore driven towards discovering drugs that selectively interact with a specific subtype. Cell model systems can be used to evaluate a chemical compound's activity in complex biological systems. The aim of this thesis was to optimize and validate cell-based model systems and assays to investigate alpha2-ARs as drug targets. The use of immortalized cell lines as model systems is firmly established but poses several problems, since the protein of interest is expressed in a foreign environment, and thus essential components of receptor regulation or signaling cascades might be missing. Careful cell model validation is thus required; this was exemplified by three different approaches. In cells heterologously expressing alpha2A-ARs, it was noted that the transfection technique affected the test outcome; false negative adenylyl cyclase test results were produced unless a cell population expressing receptors in a homogeneous fashion was used. Recombinant alpha2C-ARs in non-neuronal cells were retained inside the cells, and not expressed in the cell membrane, complicating investigation of this receptor subtype. Receptor expression enhancing proteins (REEPs) were found to be neuronal-specific adapter proteins that regulate the processing of the alpha2C-AR, resulting in an increased level of total receptor expression. Current trends call for the use of primary cells endogenously expressing the receptor of interest; therefore, primary human vascular smooth muscle cells (SMC) expressing alpha2-ARs were tested in a functional assay monitoring contractility with a myosin light chain phosphorylation assay. However, these cells were not compatible with this assay due to the loss of differentiation. A rat aortic SMC cell line transfected to express the human alpha2B-AR was adapted for the assay, and it was found that the alpha2-AR agonist, dexmedetomidine, evoked myosin light chain phosphorylation in this model.
Abstract:
Agile methods have become increasingly popular in the field of software engineering. While agile methods are now generally considered applicable to software projects of many different kinds, they have not been widely adopted in embedded systems development. This is partly due to the natural constraints that are present in embedded systems development (e.g. hardware–software interdependencies) that challenge the utilization of agile values, principles and practices. Research in agile embedded systems development has been very limited, and this thesis tackles an even less researched theme related to it: the suitability of different project management tools in agile embedded systems development. The thesis covers the basic aspects of many different agile tool types, from physical tools, such as task boards and cards, to web-based agile tools that offer all-round solutions for application lifecycle management. Between these two extremes, there is also a wide range of lighter agile tools that focus on the core agile practices, such as backlog management. Other non-agile tools, such as bug trackers, can also be used to support agile development, for instance via plug-ins. To investigate the special tool requirements in agile embedded development, the author observed tool-related issues and solutions in a case study involving three different companies operating in the field of embedded systems development. Each of the three companies was in a distinct situation at the beginning of the case, and thus the tool solutions varied from a backlog spreadsheet built from scratch to plug-in development for an already existing agile software tool. Detailed reports are presented of all three tool cases. Based on the knowledge gathered from agile tools and the case study experiences, it is concluded that there are tool-related issues in the pilot phase, such as backlog management and user motivation. These can be overcome in various ways depending on the type of team in question. Finally, five principles are formulated to give guidelines for tool selection and usage in agile embedded systems development.
Influence of surface functionalization on the behavior of silica nanoparticles in biological systems
Abstract:
Personalized nanomedicine has been shown to provide advantages over traditional clinical imaging, diagnosis, and conventional medical treatment. Nanoparticles can enhance and sharpen clinical targeting and imaging, and can be directed precisely to the site in the body that is the goal of treatment. At the same time, the side effects that usually occur in parts of the body that are not targets for treatment can be reduced. Nanoparticles are of a size that can penetrate into cells. Their surface functionalization offers a way to increase their sensitivity when detecting target molecules. In addition, it increases the potential for flexibility in particle design, their therapeutic function, and variation possibilities in diagnostics. Mesoporous nanoparticles of amorphous silica have attractive physical and chemical characteristics such as particle morphology, controllable pore size, and high surface area and pore volume. Additionally, the surface functionalization of silica nanoparticles is relatively straightforward, which enables optimization of the interaction between the particles and the biological system. The main goal of this study was to prepare traceable and targetable silica nanoparticles for medical applications, with a special focus on particle dispersion stability, biocompatibility, and targeting capabilities. Nanoparticle properties are highly particle-size dependent, and good dispersion stability is a prerequisite for active therapeutic and diagnostic agents. The study showed that traceable streptavidin-conjugated silica nanoparticles with good dispersibility could be obtained by choosing a suitable surface functionalization route. Theranostic nanoparticles should exhibit sufficient hydrolytic stability to effectively carry the medicine to the target cells, after which they should disintegrate and dissolve. Furthermore, the surface groups should stay at the particle surface until the particle has been internalized by the cell, in order to optimize cell specificity. Model particles with fluorescently labeled regions were tested in vitro using light microscopy and image processing technology, which allowed a detailed study of the disintegration and dissolution process. The study showed that nanoparticles degrade more slowly outside the cell than inside it. The main advantage of theranostic agents is their successful targeting in vitro and in vivo. Non-porous nanoparticles using monoclonal antibodies as guiding ligands were tested in vitro in order to follow their targeting ability and internalization. The targeting was found to be successful, and a specific internalization route for the particles could be detected. In the last part of the study, the objective was to clarify the feasibility of traceable mesoporous silica nanoparticles, loaded with a hydrophobic cancer drug, for targeted drug delivery in vitro and in vivo. The particles were provided with a small-molecular targeting ligand. A significantly higher therapeutic effect could be achieved with the nanoparticles than with the free drug. The nanoparticles were biocompatible and stayed in the tumor for a longer time than the free drug did, before being eliminated by renal excretion. Overall, the results showed that mesoporous silica nanoparticles are biocompatible, biodegradable drug carriers and that cell specificity can be achieved both in vitro and in vivo.
Abstract:
This study describes and illustrates non-heterocytous filamentous cyanobacteria found in lagoon systems on the coastal plains of Rio Grande do Sul State. Collections were carried out in different freshwater bodies along the eastern (Casamento Lake area) and western (Tapes City area) margins of the Patos Lagoon (UTM 461948-6595095 and 542910-6645535) using a plankton net (25 µm mesh) in pelagic and littoral zones, as well as by squeezing submerged parts of aquatic macrophytes, during both the rainy and dry seasons, from May to December 2003. Twenty-two species belonging to the families Phormidiaceae (eight taxa), Pseudanabaenaceae (seven taxa), Oscillatoriaceae (six taxa), and Spirulinaceae (one taxon) were identified. Among these species, five are reported for the first time from Rio Grande do Sul State: Leptolyngbya cebennensis, Microcoleus subtorulosus, Oscillatoria cf. anguina, O. curviceps and Phormidium formosum.
Abstract:
Demand for energy systems with high efficiency and the ability to harness renewable energy sources is a key issue in tackling the threat of global warming and in saving natural resources. Organic Rankine cycle (ORC) technology has been identified as one of the most promising technologies for recovering low-grade heat sources and for harnessing renewable energy sources that cannot be efficiently utilized by means of more conventional power systems. The ORC is based on the working principle of the Rankine process, but an organic working fluid is adopted in the cycle instead of steam. This thesis presents numerical and experimental results of a study on the design of small-scale ORCs. Two main applications were selected for the thesis: waste heat recovery from small-scale diesel engines, concentrating on the utilization of exhaust gas heat, and waste heat recovery in large industrial-scale engine power plants, considering the utilization of both high- and low-temperature heat sources. The main objective of this work was to identify suitable working fluid candidates and to study the process and turbine design methods that can be applied when power plants based on the use of non-conventional working fluids are considered. The computational work included the use of thermodynamic analysis methods and turbine design methods based on highly accurate fluid properties. In addition, the design and loss mechanisms of supersonic ORC turbines were studied by means of computational fluid dynamics. The results indicated that the design of an ORC is highly influenced by the selection of the working fluid and the cycle operational conditions. The results for the turbine designs indicated that the working fluid selection should not be based only on the thermodynamic analysis, but also requires consideration of the turbine design. The turbines tend to be fast rotating, entailing small blade heights at the turbine rotor inlet and highly supersonic flow in the turbine flow passages, especially when power systems with low power outputs are designed. The results indicated that the ORC is a potential solution for utilizing waste heat streams both at high and low temperatures and in both micro- and larger-scale applications.
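As a back-of-the-envelope illustration of the thermodynamic analysis step, the sketch below evaluates a simple subcritical ORC with an organic working fluid using the open-source CoolProp property library (assumed to be installed). The fluid, temperature levels and component efficiencies are illustrative assumptions, not the thesis' design values.

```python
from CoolProp.CoolProp import PropsSI

fluid    = "R245fa"   # assumed candidate working fluid
T_evap   = 390.0      # evaporation temperature [K] (illustrative)
T_cond   = 310.0      # condensation temperature [K] (illustrative)
eta_turb = 0.75       # assumed turbine isentropic efficiency
eta_pump = 0.65       # assumed pump isentropic efficiency

p_evap = PropsSI("P", "T", T_evap, "Q", 1, fluid)
p_cond = PropsSI("P", "T", T_cond, "Q", 1, fluid)

# State 1: saturated liquid leaving the condenser
h1 = PropsSI("H", "P", p_cond, "Q", 0, fluid)
s1 = PropsSI("S", "P", p_cond, "Q", 0, fluid)
# State 2: after the pump
h2 = h1 + (PropsSI("H", "P", p_evap, "S", s1, fluid) - h1) / eta_pump
# State 3: saturated vapour leaving the evaporator
h3 = PropsSI("H", "P", p_evap, "Q", 1, fluid)
s3 = PropsSI("S", "P", p_evap, "Q", 1, fluid)
# State 4: after the turbine
h4 = h3 - eta_turb * (h3 - PropsSI("H", "P", p_cond, "S", s3, fluid))

w_net = (h3 - h4) - (h2 - h1)   # specific net work [J/kg]
q_in  = h3 - h2                 # specific heat input [J/kg]
print(f"cycle thermal efficiency ~ {w_net / q_in:.3f}")
```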
Abstract:
The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that implements a reliable haptic sense in the human-manipulator interaction and ideal position control in the manipulator-task environment interaction. The proposed method has the characteristics of a universal technique, independent of the actual control algorithm, and it can be applied with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method and a real-time simulation to develop an intelligent controller in which each generation of parameters is tested on-line by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built based on the Markov chain Monte Carlo (MCMC) method. A Particle Swarm Optimization algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps to ensure that the system has haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for re-calibration of multi-axis force/torque sensors. The method makes several improvements over traditional methods. It can be used without dismantling the sensor from its application, and it requires a smaller number of standard loads for calibration. It is also more cost-efficient and faster in comparison to traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems. The new approach aims to avoid dismantling the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially if that environment is harsh, such as a radioactive area. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents experimental validation of the calibration method with one of the force sensors to which it has been applied.
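A minimal sketch of the simulator-in-the-loop idea is given below: each candidate set of controller gains proposed by a particle swarm is scored on a simulated plant before it would ever reach the real process. The plant model, the PI controller structure and all constants are illustrative assumptions, and the E. coli foraging component described in the thesis is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(gains, dt=0.001, t_end=1.0):
    """Score PI gains on a crude first-order stand-in for the hydraulic servo."""
    kp, ki = gains
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - x                   # unit step position reference
        integ += err * dt
        u = kp * err + ki * integ
        x += dt * (-2.0 * x + 1.5 * u)  # assumed plant dynamics
        cost += dt * err * err          # integral of squared error
    return cost

# Plain particle swarm over (kp, ki); every candidate is tested on the
# simulator before the best one would be applied to the real process.
n, dims = 12, 2
pos = rng.uniform(0.0, 10.0, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_cost = pos.copy(), np.array([simulate(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(30):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    cost = np.array([simulate(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best gains (kp, ki):", gbest, "cost:", pbest_cost.min())
```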
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, and so on. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate only that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is already implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor tool support or the lack of it. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
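To make the idea of generating tests from behavioral models concrete, the sketch below walks a small hand-written state machine and emits a test sequence together with the requirements it covers. The states, events and requirement tags are invented for illustration; the thesis itself works from UML models with dedicated tool support.

```python
import random

# Hypothetical behavioural model: state -> list of (event, next state, requirement)
transitions = {
    "Idle":    [("power_on", "Ready",   "REQ-1")],
    "Ready":   [("start",    "Running", "REQ-2"), ("power_off", "Idle", "REQ-1")],
    "Running": [("stop",     "Ready",   "REQ-3"), ("fail",      "Error", "REQ-4")],
    "Error":   [("reset",    "Idle",    "REQ-4")],
}

def generate_test(start="Idle", length=6, seed=None):
    """Random walk over the model; records the requirements each step exercises."""
    rng = random.Random(seed)
    state, steps, covered = start, [], set()
    for _ in range(length):
        event, nxt, req = rng.choice(transitions[state])
        steps.append((state, event, nxt))
        covered.add(req)               # requirements traceability for this test
        state = nxt
    return steps, covered

steps, covered = generate_test(seed=1)
for src, event, dst in steps:
    print(f"{src} --{event}--> {dst}")
print("requirements covered:", sorted(covered))
```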
Abstract:
Quantum computation and quantum communication are two of the most promising future applications of quantum mechanics. Since the information carriers used in both of them are essentially open quantum systems it is necessary to understand both quantum information theory and the theory of open quantum systems in order to investigate realistic implementations of such quantum technologies. In this thesis we consider the theory of open quantum systems from a quantum information theory perspective. The thesis is divided into two parts: review of the literature and original research. In the review of literature we present some important definitions and known results of open quantum systems and quantum information theory. We present the definitions of trace distance, two channel capacities and superdense coding capacity and give a reasoning why they can be used to represent the transmission efficiency of a communication channel. We also show derivations of some properties useful to link completely positive and trace preserving maps to trace distance and channel capacities. With the help of these properties we construct three measures of non-Markovianity and explain why they detect non-Markovianity. In the original research part of the thesis we study the non-Markovian dynamics in an experimentally realized quantum optical set-up. For general one-qubit dephasing channels we calculate the explicit forms of the two channel capacities and the superdense coding capacity. For the general two-qubit dephasing channel with uncorrelated local noises we calculate the explicit forms of the quantum capacity and the mutual information of a four-letter encoding. By using the dynamics in the experimental implementation as a set of specific dephasing channels we also calculate and compare the measures in one- and two-qubit dephasing channels and study the options of manipulating the environment to achieve revivals and higher transmission rates in superdense coding protocol with dephasing noise.
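A concrete, minimal illustration of the quantities involved is sketched below: a one-qubit dephasing channel that scales the coherences of a density matrix by a decoherence factor, and the trace distance between two states sent through it. The states and decoherence values are illustrative, not those of the experimental set-up.

```python
import numpy as np

def dephase(rho, kappa):
    """Dephasing keeps the populations and multiplies the coherences by kappa in [0, 1]."""
    out = rho.copy()
    out[0, 1] *= kappa
    out[1, 0] *= kappa
    return out

def trace_distance(r1, r2):
    """D(r1, r2) = (1/2) * trace norm of (r1 - r2), i.e. half the sum of singular values."""
    return 0.5 * np.linalg.svd(r1 - r2, compute_uv=False).sum()

plus  = 0.5 * np.array([[1,  1], [ 1, 1]], dtype=complex)   # |+><+|
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)   # |-><-|

for kappa in (1.0, 0.6, 0.2):
    d = trace_distance(dephase(plus, kappa), dephase(minus, kappa))
    print(f"kappa = {kappa:.1f}  trace distance = {d:.2f}")
# If the trace distance later grows again along a trajectory kappa(t), information
# flows back from the environment; this is what the non-Markovianity measures detect.
```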
Abstract:
Resilience is the property of a system to remain trustworthy despite changes. Changes of different natures, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of the system reconfiguration mechanisms is a challenging and error-prone engineering task. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure the system's functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature, industrial-strength tool support: the Rodin platform. Proof-based verification, as well as the reliance on abstraction and decomposition adopted in Event-B, provides the designers with powerful support for the development of complex systems. Moreover, the top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with the integration of such techniques as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect the overall system resilience. The approach proposed in this thesis is validated by a number of case studies from such areas as robotics, space, healthcare and the cloud domain.
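As a flavour of the kind of quantitative assessment SimPy enables, the sketch below estimates how a component's reconfiguration delay affects how often the system can pursue its goal. The failure rate, reconfiguration delay and availability measure are illustrative assumptions, not taken from the thesis' case studies.

```python
import random
import simpy

MTBF, RECONF_DELAY, HORIZON = 100.0, 8.0, 100_000.0  # illustrative time units

def component(env, state):
    """A component that fails at random and is brought back after reconfiguration."""
    while True:
        yield env.timeout(random.expovariate(1.0 / MTBF))  # time to next failure
        state["up"] = False
        yield env.timeout(RECONF_DELAY)                     # detect + reconfigure
        state["up"] = True

def monitor(env, state, stats, step=1.0):
    """Sample the system state to estimate availability."""
    while True:
        stats["samples"] += 1
        stats["up_samples"] += 1 if state["up"] else 0
        yield env.timeout(step)

random.seed(1)
env = simpy.Environment()
state, stats = {"up": True}, {"samples": 0, "up_samples": 0}
env.process(component(env, state))
env.process(monitor(env, state, stats))
env.run(until=HORIZON)
print("estimated availability:", stats["up_samples"] / stats["samples"])
```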
Abstract:
Fluid flow behaviour in porous media is a conundrum. Therefore, this research focuses on the filtration-volumetric characterisation of fractured carbonate sediments, coupled with their proper simulation. For this reason, rock properties such as pore volume, permeability and porosity are first measured in the laboratory; phase permeabilities and oil recovery as a function of flow rate are then assessed. Furthermore, the rheological properties of three oils are measured and analysed. Finally, based on the rock and fluid properties, a model is built in COMSOL Multiphysics in order to compare the experimental and simulated results. The rock analyses show a linear relation between flow rate and differential pressure, from which the phase permeabilities and pressure gradient are determined; the oil recovery under low and high flow rates is then established. In addition, the oils reveal thixotropic properties as well as non-Newtonian behaviour described by the Bingham model; consequently, a Carreau viscosity model for the oil used is given. Given these points, the model for oil and water is built in COMSOL Multiphysics, whereupon the agreement between the experimental and simulated results is successfully analysed and compared. Finally, a two-phase displacement model is elaborated.
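For reference, the sketch below evaluates the two constitutive laws named above, the Bingham stress law and the Carreau viscosity model, over a range of shear rates. The parameter values are illustrative placeholders, not the measured properties of the oils.

```python
def bingham_stress(gamma_dot, tau_y=5.0, mu_p=0.08):
    """Bingham model: shear stress = yield stress + plastic viscosity * shear rate."""
    return tau_y + mu_p * gamma_dot

def carreau_viscosity(gamma_dot, mu_0=1.2, mu_inf=0.01, lam=0.5, n=0.6):
    """Carreau model: mu = mu_inf + (mu_0 - mu_inf) * (1 + (lam*gdot)^2)^((n-1)/2)."""
    return mu_inf + (mu_0 - mu_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

for gdot in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {gdot:7.1f} 1/s  "
          f"Bingham stress {bingham_stress(gdot):7.2f} Pa  "
          f"Carreau viscosity {carreau_viscosity(gdot):7.4f} Pa*s")
```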
Abstract:
Many-core systems provide great potential for application performance through their massively parallel structure. Such systems are currently being integrated into most parts of daily life, from high-end server farms to desktop systems, laptops and mobile devices. Yet these systems face increasing challenges such as high temperature causing physical damage, high electricity bills both for servers and individual users, unpleasant noise levels due to active cooling, and rapid battery drainage in mobile devices; factors caused directly by poor energy efficiency. Power management has traditionally been an area of research providing hardware solutions or runtime power management in the operating system in the form of frequency governors. Energy awareness in application software is currently non-existent. This means that applications are not involved in power management decisions, nor does any interface exist between the applications and the runtime system to provide such facilities. Power management in the operating system is therefore performed purely based on indirect implications of software execution, usually referred to as the workload. It often results in over-allocation of resources, and hence power waste. This thesis discusses power management strategies in many-core systems in the form of increasing the application software's awareness of energy efficiency. The presented approach allows meta-data descriptions in the applications and is manifested in two design recommendations: 1) energy-aware mapping and 2) energy-aware execution, which allow the applications to directly influence power management decisions. The recommendations eliminate over-allocation of resources and increase the energy efficiency of the computing system. Both recommendations are fully supported by a provided interface in combination with a novel power management runtime system called Bricktop. The work presented in this thesis allows both new and legacy software to execute with the most energy-efficient mapping on a many-core CPU and at the most energy-efficient performance level. A set of case study examples demonstrates real-world energy savings in a wide range of applications without performance degradation.
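A toy version of the energy-aware mapping recommendation is sketched below: the application exposes its work and deadline as meta-data, and the runtime picks the operating point with the lowest energy that still meets the deadline instead of over-allocating resources. The operating-point table, the map_task helper and all numbers are hypothetical and are not Bricktop's actual interface or data.

```python
OP_POINTS = [
    # (core type, frequency MHz, power W, relative throughput) -- illustrative only
    ("little", 600,  0.15, 0.4),
    ("little", 1200, 0.40, 0.8),
    ("big",    1200, 0.90, 1.2),
    ("big",    2000, 2.10, 2.0),
]

def map_task(work_units, deadline_s):
    """Return the most energy-efficient operating point that still meets the deadline."""
    best = None
    for core, freq, power, throughput in OP_POINTS:
        runtime = work_units / throughput
        if runtime <= deadline_s:
            energy = power * runtime
            if best is None or energy < best[0]:
                best = (energy, core, freq, runtime)
    return best

# A task with a loose deadline ends up on a slow, low-power configuration.
print(map_task(work_units=8.0, deadline_s=15.0))
```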
Abstract:
The difficulty of identification, the lack of segregation systems, and the absence of suitable standards for the coexistence of non-transgenic and transgenic soybean all contribute to the contamination that occurs during the production system. The objective of this study was to evaluate the efficiency of two methods for detecting mixtures of genetically modified (GM) seeds in samples of non-GM soybean, so that seed lots can be assessed against the standards established by seed legislation. Two soybean sample sizes (200 and 400 seeds), cv. BRSMG 810C (non-GM) and BRSMG 850GRR (GM), were assessed with four contamination levels (addition of GM seeds to obtain 0.0%, 0.5%, 1.0%, and 1.5% contamination) and two detection methods: lateral flow immunoassay (ILF) and bioassay (pre-imbibition in a 0.6% herbicide solution; 25 ºC; 16 h). The bioassay is efficient in detecting the presence of GM seeds in seed samples of non-GM soybean, even for contamination lower than 1.0%, provided that the seeds have high physiological quality. The ILF was positive, detecting the presence of the target protein in contaminated samples, indicating the test's effectiveness. There was a significant correlation between the two detection methods (r = 0.82; p < 0.0001). Sample size did not influence the efficiency of the two methods in detecting the presence of GM seeds.
Abstract:
Electrochromism, the phenomenon of reversible color change induced by a small electric charge, forms the basis for the operation of several devices including mirrors, displays and smart windows. Although the history of electrochromism dates back to the 19th century, its considerable scientific and technological impact has come only since the last quarter of the 20th century. Commercial applications of electrochromics (ECs) are rather limited, apart from the top-selling EC anti-glare mirrors by Gentex Corporation and airplane windows by Boeing, which have been a huge commercial success and have revealed the potential of EC materials for the future glass industry. It is evident from their patents that viologens (salts of 4,4ʹ-bipyridilium) were the major active EC component in most of these marketed devices, which motivates this thesis' focus on EC viologens. Among the family of electrochromes, viologens have long been utilized in electrochromic devices (ECDs) due to their intensely colored radical cation formation induced by a small cathodic potential. Viologens can be synthesized as oligomers, in polymeric form, or as functional groups on conjugated polymers. In this thesis, polyviologens (PVs) were synthesized starting from cyanopyridinium (CNP) based monomer precursors. Reductive coupling of cross-connected cyano groups yields viologen and polyviologen under successive electropolymerization using, for example, the cyclic voltammetry (CV) technique. For further development, a polyviologen-graphene composite system was fabricated, focusing on the stability of the PV electrochrome without sacrificing its excellent EC properties. The high electrical conductivity and high surface area offered by graphene sheets, together with their non-covalent interactions and synergism with PV, significantly improved the electrochrome durability in the composite matrix. The work then continued with the development of a CNP-functionalized thiophene derivative and its copolymer for possible utilization of viologen in the copolymer blend. Furthermore, the viologen-functionalized thiophene derivative was synthesized and electropolymerized in order to explore enhancements in EC contrast and overall EC performance. The findings suggest that such electroactive viologen/polyviologen systems and their nanostructured composite films, as well as viologen-functionalized conjugated polymers, can potentially be applied as active EC materials in future ECDs aimed at durable device performance.