Abstract:
Human embryonic stem cells are pluripotent cells capable of renewing themselves and differentiating into specialized cell types. Because of their unique regenerative potential, pluripotent cells offer new opportunities for disease modeling, development of regenerative therapies, and treating diseases. Before pluripotent cells can be used in any therapeutic application, numerous challenges have to be overcome. For instance, the key regulators of pluripotency need to be clarified. In addition, long-term culture of pluripotent cells is associated with the accumulation of karyotypic abnormalities, which is a concern regarding the safe use of the cells for therapeutic purposes. The goal of the work presented in this thesis was to identify new factors involved in the maintenance of pluripotency and to further characterize the molecular mechanisms of selected candidate genes. Furthermore, we aimed to set up a new method for analyzing the genomic integrity of pluripotent cells. The experimental design applied in this study involved a wide range of molecular biology, genome-wide, and computational techniques to study the pluripotency of stem cells and the functions of the target genes. In collaboration with the instrument and reagent company PerkinElmer, Karyolite™ BoBs™ was implemented for detecting karyotypic changes in pluripotent cells. Novel genes were identified that are highly and specifically expressed in hES cells. Of these genes, L1TD1 and POLR3G were chosen for further investigation. The results revealed that both of these factors are vital for the maintenance of pluripotency and self-renewal of hESCs. Karyolite™ BoBs™ was validated as a novel method to detect karyotypic abnormalities in pluripotent stem cells. The results presented in this thesis offer significant new information on the regulatory networks associated with pluripotency. The results will facilitate the understanding of developmental and cancer biology, as well as the creation of stem cell based applications.
Karyolite™ BoBs™ provides a rapid, high-throughput, and cost-efficient tool for screening human pluripotent cell cultures.
Abstract:
Today the finiteness of fossil fuel resources is clearly recognized. For this reason there is a strong focus throughout the world on shifting from a fossil fuel based energy system to a biofuel based energy system. In this respect Finland, with its proven excellent forestry capabilities, has great potential to accomplish this goal. One of the most efficient ways of utilizing wood biomass is regarded to be its use as a feedstock for the fast pyrolysis process. By means of this process, solid biomass is converted into a liquid fuel called bio-oil, which can be burnt at power plants, used for hydrogen generation through a catalytic steam reforming process, and used as a source of valuable chemical compounds. Nowadays different configurations of this process have found applications in several pilot plants worldwide; however, the circulating fluidized bed configuration is regarded as the one with the highest potential for commercialization. In this Master's thesis, a feasibility study of a circulating fluidized bed fast pyrolysis process utilizing Scots pine logs as the raw material was conducted. The production capacity of the process is 100 000 tonnes/year of bio-oil. The feasibility study is divided into two phases: a process design phase and an economic feasibility analysis phase. The process design phase consists of mass and heat balance calculations, equipment sizing, estimation of pressure drops in the pipelines, and development of the plant layout. This phase resulted in the creation of process flow diagrams, an equipment list, and a Microsoft Excel spreadsheet that calculates the process mass and heat balances for a user-defined bio-oil production capacity. These documents are presented in this report as appendices. In the economic feasibility analysis phase, the investment and operating costs of the process were first calculated.
Then, using these costs, the bio-oil price required to reach internal rates of return of 5%, 10%, 20%, 30%, 40%, and 50% was calculated.
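The required-price calculation can be sketched with a simple annuity model: the bio-oil price is solved so that the project's net present value is zero at the target internal rate of return. All figures and names below are illustrative assumptions, not values from the thesis, which assume constant annual production and costs.

```python
def required_price(capex, annual_opex, capacity, irr, years):
    """Bio-oil price (per tonne) at which NPV = 0 for a target IRR.

    Illustrative sketch; the real capex/opex figures must come from the
    detailed cost estimate of the process design phase.
    """
    # Present value of a 1-per-year annuity over `years` at rate `irr`
    annuity = (1 - (1 + irr) ** -years) / irr
    # Annual net cash flow needed so that -capex + cf * annuity = 0
    cf = capex / annuity
    # The price must cover operating costs plus the required cash flow
    return (cf + annual_opex) / capacity

# Hypothetical example: 60 M-EUR investment, 20 M-EUR/year operating
# costs, 100 000 t/year of bio-oil, 20-year lifetime, 10 % IRR target
price = required_price(60e6, 20e6, 100_000, 0.10, 20)
```

As expected, the required price rises monotonically with the target IRR, which is why a range of IRR values is reported.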
Abstract:
At present, permanent magnet synchronous generators (PMSGs) are of great interest. Since they have no electrical excitation losses, highly efficient, lightweight and compact PMSGs equipped with damper windings work perfectly when connected to a network. However, in island operation, the generator (or parallel generators) alone is responsible for building up the network and maintaining its voltage and reactive power level. Thus, in island operation, a PMSG faces very tight constraints, which are difficult to meet, because the flux produced by the permanent magnets (PMs) is constant and the voltage of the generator cannot be controlled. Traditional electrically excited synchronous generators (EESGs) can easily meet these constraints, because the field winding current is controllable. The main drawback of the conventional EESG is its relatively high excitation loss. This doctoral thesis presents a study of an alternative solution termed a hybrid excitation synchronous generator (HESG). HESGs are a special class of electrical machines in which the total rotor current linkage is produced by the simultaneous action of two different excitation sources: electrical and permanent magnet (PM) excitation. An overview of the existing HESGs is given. Several HESGs are introduced and compared with the conventional EESG from technical and economic points of view. In the study, the armature-reaction-compensated permanent magnet synchronous generator with alternated current linkages (ARC-PMSG with ACL) showed a better performance than the other options. Therefore, this machine type is studied in more detail. An electromagnetic design and a thermal analysis are presented. To verify the operating principle and the electromagnetic design, a down-sized prototype with an apparent power of 69 kVA was built. The experimental results are presented and compared with the predicted ones.
A prerequisite for an ARC-PMSG with ACL is an even number of pole pairs (p = 2, 4, 6, …) in the machine. Naturally, the HESG technology is not limited to even-pole-pair machines; however, the analysis of machines with p = 3, 5, 7, … becomes more complicated, especially if analytical tools are used, and is outside the scope of this thesis. The contribution of this study is to propose a solution in which an ARC-PMSG replaces an EESG in electrical power generation while meeting all the requirements set for generators, given for instance by ship classification societies, particularly as regards island operation. The maximum power level when applying the technology studied here is mainly limited by the economy of the machine: the larger the machine, the smaller the efficiency benefit. Nevertheless, it seems that machines up to ten megawatts of power could benefit from the technology, and in low-power applications, for instance in the 500 kW range, the efficiency increase can be significant.
Abstract:
Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient MPSoCs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing higher communication bandwidth to computation-intensive but not data-intensive applications is often unfeasible in a practical implementation. This thesis aims to perform architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized according to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault in a component makes the connected fault-free components inoperative.
A resource sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also helps narrow down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
Abstract:
Pumping processes requiring a wide range of flow rates are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, the use of variable-speed control allows the required output for the process to be delivered with a varying number of operated pump units and selected rotational speed references. However, the optimization of parallel-connected, rotational-speed-controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system in varying operating conditions. The information required for system modelling in typical parallel pumping applications, such as waste water treatment and various cooling and water delivery pumping tasks, can be limited, and the lack of real-time operating point monitoring often sets limits on accurate energy efficiency optimization. Hence, alternatives in the form of easily implementable control strategies that can be adopted with minimum system data are necessary. This doctoral thesis concentrates on methods that allow the energy-efficient use of variable-speed-controlled parallel pumps in system scenarios in which each parallel pump unit consists of a centrifugal pump, an electric motor, and a frequency converter. Firstly, the suitable operating conditions for variable-speed-controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic-curve-based operating point estimation with a frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operating point estimation and sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased without installing new, more efficient components in the system, simply by adopting suitable control strategies.
An easily implementable and adaptive control strategy for variable-speed-controlled parallel pumping systems can be created by utilizing the pump operating point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements, and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining for each parallel pump unit a speed reference that promotes energy-efficient operation of the pumping system.
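The characteristic-curve-based estimation mentioned above builds on the standard pump affinity laws. The following sketch (variable names and numbers are illustrative, not from the thesis) scales a point on a pump's QH curve from the nominal speed to the current speed reported by the frequency converter:

```python
def scale_operating_point(q_nom, h_nom, n_nom, n):
    """Affinity laws: flow scales with the speed ratio, head with its square.

    q_nom, h_nom: flow and head of a point on the QH curve at nominal speed
    n_nom, n:     nominal and current rotational speeds (same units)
    """
    ratio = n / n_nom
    return q_nom * ratio, h_nom * ratio ** 2

# Hypothetical pump: 100 m3/h at 50 m head when running at 1450 rpm
q, h = scale_operating_point(100.0, 50.0, 1450.0, 1160.0)
# At 80 % speed the same curve point moves to 80 m3/h and 32 m head
```

A frequency converter that knows the pump's published QH and power curves can combine this scaling with its own speed and shaft power estimates to locate the current operating point without external flow metering, which is the kind of estimation the thesis builds on.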
Abstract:
In recent years, chief information officers (CIOs) around the world have identified Business Intelligence (BI) as their top priority and as the best way to enhance their enterprises' competitiveness. Yet many enterprises are struggling to realize the business value that BI promises. This discrepancy raises important questions, for example: what are the critical success factors of Business Intelligence, and, more importantly, how can it be ensured that a Business Intelligence program enhances an enterprise's competitiveness? The main objective of the study is to find out how it can be ensured that a BI program meets its goals in providing competitive advantage to an enterprise. The objective is approached with a literature review and a qualitative case study. For the literature review, the main objective yields three research questions (RQs). RQ1: What is Business Intelligence and why is it important for modern enterprises? RQ2: What are the critical success factors of Business Intelligence programs? RQ3: How can it be ensured that the CSFs are met? The qualitative case study covers the BI program of a Finnish global manufacturing company. The research questions for the case study are as follows. RQ4: What is the current state of the case company's BI program and what are the key areas for improvement? RQ5: In what ways could the case company's Business Intelligence program be improved? The case company's BI program is researched using the following methods: action research, semi-structured interviews, maturity assessment and benchmarking. The literature review shows that Business Intelligence is a technology-based information process that contains a series of systematic activities driven by the specific information needs of decision-makers. The objective of BI is to provide accurate, timely, fact-based information that enables taking actions that lead to achieving competitive advantage.
There are many reasons for the importance of Business Intelligence, two of the most important being: 1) it helps to bridge the gap between an enterprise's current and desired performance, and 2) it helps enterprises stay in alignment with key performance indicators, that is, it helps an enterprise align towards its key objectives. The literature review also shows that there are known critical success factors (CSFs) for Business Intelligence programs which have to be met if the above-mentioned value is to be achieved, for example: committed management support and sponsorship, a business-driven development approach, and sustainable data quality. The literature review shows that the most common challenges are related to these CSFs and, more importantly, that overcoming these challenges requires a more comprehensive form of BI, called Enterprise Performance Management (EPM). EPM links measurement to strategy by focusing on what is measured and why. The case study shows that many of the challenges faced in the case company's BI program are related to the above-mentioned CSFs. The main challenges are: lack of support and sponsorship from the business, lack of visibility into overall business performance, lack of a rigid BI development process, lack of a clear purpose for the BI program, and poor data quality. To overcome these challenges, the case company should define and design an enterprise metrics framework, make sure that BI development requirements are gathered and prioritized by the business, focus on data quality and ownership, and finally define clear goals for the BI program and then support and sponsor these goals.
Abstract:
This thesis presents a set of criteria for improving the realization of energy efficiency in a city's public construction projects. The purpose of the criteria is to promote the energy efficiency of public new-building projects while taking the overall economy of the project into account. Energy efficiency and overall economy in municipal construction are stimulated by, among other things, the energy-saving targets of international and national agreements, as well as the tight budget situation of municipalities. To meet the overall economy targets, a life cycle approach was applied in the work. Based on information from the literature and interviews with professionals, the thesis examined which aspects affect the energy efficiency of a public service building at the different stages of its life cycle. Using the acquired information, a set of criteria was created for the parties responsible for procuring the city's building investments, helping them to identify and select the project implementation methods that are best in terms of energy efficiency and overall economy. As a follow-up measure, the criteria developed in this work will be refined further on the basis of experience gained from case projects. The study showed that, in addition to the energy economy of new construction, investment must also be made in energy-efficient renovation, because the existing building stock accounts for a significant share of the city's energy consumption.
Abstract:
The target state for waste management in Southern and Western Finland in 2020 is that the amount of waste has decreased, recovery has increased, and waste management has become systematic. Waste management is developed in cooperation with stakeholders. This first interim evaluation examines the implementation and effectiveness of the waste plans for Southern and Western Finland and for Central Finland. The interim evaluation includes monitoring data on waste quantities and information on measures taken to develop waste management. The review covers the operating areas of the ELY Centres of South Ostrobothnia, Häme, Southeast Finland, Central Finland, Pirkanmaa, Uusimaa, and Southwest Finland. The targets set in the waste plans in 2009 have been partly achieved. The priority areas of biodegradable waste and municipal and rural sludge have progressed best. The most challenging themes, which have not progressed as expected, are the priority areas concerning contaminated soils and waste management in exceptional situations. The themes requiring development in the near future are the priority areas of material efficiency in construction and ashes and slags; these themes have concrete opportunities for realization and several interested actors. There is potential to increase the use of recycled materials, such as ashes and slags, industrial by-products, and waste materials, in place of natural aggregates. Material efficiency in construction can be fostered through systematic planning, resource efficiency, and selective demolition. The interim evaluation presents further measures that would enable the waste plans to be realized. Of the key priority areas, material efficiency in construction and ashes and slags require the most further measures. In addition, the recycling of municipal waste should be increased.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
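The node-and-queue model described above can be illustrated with a minimal sketch. This is not RVC-CAL; it is a generic, hypothetical Python rendering of the firing rule "consume when sufficient inputs are available":

```python
from collections import deque

class Actor:
    """A dataflow node: fires only when enough tokens are queued."""
    def __init__(self, fn, tokens_needed=1):
        self.fn = fn
        self.tokens_needed = tokens_needed
        self.inbox = deque()      # the only way data reaches this node
        self.outputs = []         # downstream inboxes (FIFO queues)

    def can_fire(self):
        return len(self.inbox) >= self.tokens_needed

    def fire(self):
        tokens = [self.inbox.popleft() for _ in range(self.tokens_needed)]
        result = self.fn(*tokens)
        for queue in self.outputs:
            queue.append(result)

# A two-node pipeline: double each value, then sum pairs of results
double = Actor(lambda x: 2 * x)
adder = Actor(lambda a, b: a + b, tokens_needed=2)
sink = deque()
double.outputs.append(adder.inbox)
adder.outputs.append(sink)

for value in (1, 2):
    double.inbox.append(value)
# A trivial scheduler: fire any node whose firing rule is satisfied
for node in (double, double, adder):
    if node.can_fire():
        node.fire()
# sink now holds 1*2 + 2*2 = 6
```

The hard-coded firing order in the loop is exactly the kind of decision that quasi-static scheduling tries to pre-calculate instead of evaluating each firing rule at run-time.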
Abstract:
The purpose of this study was to investigate how the supervisors of Pirkanmaan Osuuskauppa understand the significance of well-being at work in their managerial work, and which aspects of their well-being-at-work competence they should still develop. The management of work ability was compared with the work ability management model created by the Confederation of Finnish Industries (EK). Based on the study, proposals were drawn up so that the supervisors' competence, understanding, and ability to utilize well-being-at-work tools could be developed and maintained with the right measures. Companies have begun to pay increasing attention to well-being at work because of the hectic pace of work, the demands of work, and the potential lengthening of careers, which requires investment in well-being not only from the employer but also from the employees. The study was carried out using a qualitative method. The data were collected through semi-structured thematic interviews, for which four supervisors were selected. The interviews were used to chart the supervisors' knowledge and views on well-being at work and the level of their well-being-at-work competence. The study revealed that supervisors invest less in maintaining and developing well-being at work than would be possible. The greatest obstacles to implementing well-being at work were time, resources, and attitudes. There are differences between the management practices of Pirkanmaan Osuuskauppa and the work ability management model of the Confederation of Finnish Industries. Supervisory work is a competitive advantage full of opportunities, but at the same time also a considerable challenge.
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure the cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure the efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding that is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications.
The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of the virtualized application servers.
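A purely reactive provisioning loop of the kind the first contribution builds on can be sketched as follows. The thresholds, capacities, and function names here are illustrative assumptions and do not describe ARVUE's actual algorithm:

```python
def provision(load, vms, vm_capacity=100.0, upper=0.8, lower=0.3):
    """Reactive scaling sketch: adjust the VM count from current utilization.

    load: current aggregate demand (requests/s, illustrative units)
    vms:  number of currently provisioned VMs
    Thresholds `upper` and `lower` are made-up example values.
    """
    utilization = load / (vms * vm_capacity)
    if utilization > upper:
        return vms + 1            # scale out before servers saturate
    if utilization < lower and vms > 1:
        return vms - 1            # consolidate under-utilized servers
    return vms

def admit(load, vms, vm_capacity=100.0, limit=0.9):
    # Admission control sketch: reject new sessions near overload instead
    # of letting already-admitted sessions degrade
    return load < limit * vms * vm_capacity
```

The gap between the scale-out threshold and the admission limit is the safety margin: admission control protects the service during the window in which a newly requested VM is still booting.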
Abstract:
The driving forces for current research on flame retardants are increased fire safety in combination with flame retardant formulations that fulfill the criteria of sustainable production and products. In recent years, important questions have been raised about the environmental safety of antimony and, in particular, of brominated flame retardants. As a consequence, this doctoral thesis describes efforts to develop new halogen-free flame retardants based on various radical generators and phosphorus compounds. The investigation first focused on compounds capable of generating alkyl radicals, in order to study their role in the flame retardancy of polypropylene. The family of azoalkanes was selected as the cleanest and most convenient source of free alkyl radicals. Therefore, a number of symmetrical and unsymmetrical azoalkanes of the general formula R-N=N-R′ were prepared. The experimental results show that in the series of different-sized azocycloalkanes the flame retardant efficacy decreased in the following order: R = R′ = cyclohexyl > cyclopentyl > cyclobutyl > cyclooctanyl > cyclododecanyl. In the series of aliphatic azoalkanes, the efficacy decreased as follows: R = R′ = n-alkyl > tert-butyl > tert-octyl. The most striking difference in flame retardant efficacy was observed in thick polypropylene plaques of 1 mm; for example, azocyclohexane (AZO) had a much better flame retardant performance in thick PP sections than the commercial reference FR (Flamestab® NOR116). In addition, some of the prepared azoalkane flame retardants, e.g. 4,4′-bis(cyclohexylazocyclohexyl)methane (BISAZO), exhibited non-burning dripping behavior. Extrusion coating experiments of flame-retarded low-density polyethylene (LDPE) onto a standard machine-finished Kraft paper were carried out in order to investigate the potential of azoalkanes in multilayer facings.
The results show that azocyclohexane (AZO) and 4,4′-bis(cyclohexylazocyclohexyl)methane (BISAZO) can significantly improve the flame retardant properties of low-density polyethylene coated paper already at 0.5 wt.% loadings, provided that the maximum extrusion temperature of 260 °C is not exceeded and the coating weight is kept low at 13 g/m². In addition, various triazene-based flame retardants (R-N1=N2-N3R′R″) were prepared. For example, polypropylene samples containing a very low concentration of only 0.5 wt.% of 4,4′-bis(3,3-dimethyltriazene)diphenyl ether and other triazenes passed the DIN 4102-1 test with B2 classification. It is noteworthy that no burning dripping could be detected and the average burning times were very short, with exceptionally low weight losses. Therefore, triazene compounds constitute a new and interesting family of radical generators for the flame retarding of polymeric materials. The high flame retardant potential of triazenes can be attributed to their ability to generate various types of radicals during their thermal decomposition. According to thermogravimetric analysis/Fourier transform infrared spectroscopy/MS analysis, triazene units are homolytically cleaved into various aminyl and resonance-stabilized aryl radicals, and different CH fragments, with simultaneous evolution of elemental nitrogen. Furthermore, the potential of thirteen aliphatic, aromatic, thiuram and heterocyclic substituted organic disulfide derivatives of the general formula R-S-S-R′ as a new group of halogen-free flame retardants for polypropylene films was investigated. According to the DIN 4102-1 standard ignitibility test, it was demonstrated for the first time that many of the disulfides alone can effectively provide flame retardancy and self-extinguishing properties to polypropylene films already at very low concentrations of 0.5 wt.%. For the disulfide family, the highest FR activity was recorded for 5,5′-dithiobis(2-nitrobenzoic acid).
Very low values of burning length (53 mm) and burning time (10 s) reflect the significantly increased fire retardant performance of this disulfide compared to the other compounds in this series as well as to Flamestab® NOR116. Finally, two new phosphorus-based flame retardants were synthesized: P,P-diphenylphosphinic hydrazide (PAH) and melamine phenylphosphonate (MPhP). The DIN 4102-1 test and the more stringent UL94 vertical burning test (UL94 V) were used to assess the ability of the formulations to extinguish a flame once ignited. A very strong synergistic effect with azoalkanes was found, i.e. in combination with these radical generators even a UL94 V-0 rating could be obtained.
Abstract:
Identification of low-dimensional structures and of the main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential-equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
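As a concrete illustration of projecting a point onto a feature of a Gaussian kernel density: a density maximum is the 0-dimensional special case of a ridge, and in one dimension it can be found with a standard mean-shift fixed-point iteration. This sketch is plain mean shift, not the trust region Newton method developed in the thesis, and the data and bandwidth are made-up examples:

```python
import math

def mean_shift(x, data, h, iters=200):
    """Project a 1-D point onto a local maximum (mode) of the Gaussian KDE.

    Standard mean-shift iteration with bandwidth h; the thesis instead uses
    a trust region Newton method, which also handles ridges of dimension > 0.
    """
    for _ in range(iters):
        # Gaussian kernel weight of each sample relative to x
        w = [math.exp(-0.5 * ((x - d) / h) ** 2) for d in data]
        # Move x to the kernel-weighted mean of the samples
        x = sum(wi * di for wi, di in zip(w, data)) / sum(w)
    return x

data = [0.9, 1.0, 1.1, 4.8, 5.0, 5.2]   # two clusters, modes near 1 and 5
left_mode = mean_shift(0.0, data, h=0.5)
right_mode = mean_shift(6.0, data, h=0.5)
```

Which mode the iteration converges to depends on the starting point, which is exactly why a convergent projection method matters when ridge points must be computed reliably from arbitrary initial points.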