41 results for EFFICIENT CATALYST
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure cost-efficient scalability of multi-tier web applications and an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm and on-demand video transcoding that is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications.
The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of virtualized application servers.
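The reactive side of such VM provisioning can be illustrated with a small sketch. The function name, thresholds and scaling step below are illustrative assumptions for exposition only, not the actual ARVUE algorithm or its parameters:

```python
# Minimal sketch of reactive threshold-based VM scaling.
# The thresholds (upper/lower) and the one-VM-at-a-time step are
# illustrative assumptions, not the parameters used in the thesis.

def scale_decision(cpu_utilizations, upper=0.8, lower=0.3, min_vms=1):
    """Decide how many VMs to add or remove based on average utilization.

    cpu_utilizations: list of per-VM CPU utilizations in [0, 1].
    Returns a positive number to provision VMs, negative to release,
    zero to keep the current pool.
    """
    n = len(cpu_utilizations)
    avg = sum(cpu_utilizations) / n
    if avg > upper:
        return 1                      # overloaded pool: add a VM
    if avg < lower and n > min_vms:
        return -1                     # under-utilized pool: release a VM
    return 0

# A heavily loaded three-VM pool triggers scale-out,
# a lightly loaded one triggers consolidation.
print(scale_decision([0.9, 0.85, 0.95]))   # 1
print(scale_decision([0.1, 0.2, 0.15]))    # -1
print(scale_decision([0.5, 0.6, 0.4]))     # 0
```

A proactive variant would feed a load prediction rather than the measured utilizations into the same decision rule.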
Abstract:
Ribonucleic acid (RNA) has many biological roles in cells: it takes part in coding, decoding, regulating and expressing genes, and it also has the capacity to act as a catalyst in numerous biological reactions. These qualities make RNA an interesting object of various studies. The development of useful tools with which to investigate RNA is a prerequisite for more advanced research in the field. One such tool may be artificial ribonucleases, which are oligonucleotide conjugates that sequence-selectively cleave complementary RNA targets. This thesis is aimed at developing new efficient metal-ion-based artificial ribonucleases: on the one hand, at solving the challenges related to the solid-supported synthesis of metal-ion-binding conjugates of oligonucleotides, and on the other hand, at quantifying their ability to cleave various oligoribonucleotide targets in a pre-designed sequence-selective manner. In this study, several artificial ribonucleases based on the cleaving capability of a metal-ion-chelated azacrown moiety were successfully designed and synthesized. The most efficient ribonucleases were the ones with two azacrowns close to the 3′-end of the oligonucleotide strand. Different transition metal ions were introduced into the azacrown moiety and, among them, the Zn2+ ion was found to be better than the Cu2+ and Ni2+ ions.
Abstract:
The driving forces for current research on flame retardants are increased fire safety in combination with flame retardant formulations that fulfill the criteria of sustainable production and products. In recent years, important questions have been raised about the environmental safety of antimony and, in particular, brominated flame retardants. As a consequence, the current doctoral thesis work describes efforts to develop new halogen-free flame retardants based on various radical generators and phosphorus compounds. The investigation was first focused on compounds capable of generating alkyl radicals, in order to study their role in the flame retardancy of polypropylene. The family of azoalkanes was selected as the cleanest and most convenient source of free alkyl radicals. Therefore, a number of symmetrical and unsymmetrical azoalkanes of the general formula R-N=N-R′ were prepared. The experimental results show that in the series of different-sized azocycloalkanes the flame retardant efficacy decreased in the following order: R = R′ = cyclohexyl > cyclopentyl > cyclobutyl > cyclooctyl > cyclododecyl. However, in the series of aliphatic azoalkanes, the efficacy decreased as follows: R = R′ = n-alkyl > tert-butyl > tert-octyl. The most striking difference in flame retardant efficacy was observed in thick polypropylene plaques of 1 mm, e.g. azocyclohexane (AZO) had a much better flame retardant performance than the commercial reference FR (Flamestab® NOR116) in thick PP sections. In addition, some of the prepared azoalkane flame retardants, e.g. 4,4′-bis(cyclohexylazocyclohexyl)methane (BISAZO), exhibited non-burning dripping behavior. Extrusion coating experiments with flame retarded low density polyethylene (LDPE) onto a standard machine-finished Kraft paper were carried out in order to investigate the potential of azoalkanes in multilayer facings.
The results show that azocyclohexane (AZO) and 4,4′-bis(cyclohexylazocyclohexyl)methane (BISAZO) can significantly improve the flame retardant properties of low density polyethylene coated paper already at 0.5 wt.% loadings, provided that the maximum extrusion temperature of 260 °C is not exceeded and the coating weight is kept low, at 13 g/m2. In addition, various triazene-based flame retardants (RN1=N2-N3R′R′′) were prepared. For example, polypropylene samples containing a very low concentration of only 0.5 wt.% of bis-4,4′-(3,3′-dimethyltriazene)diphenyl ether and other triazenes passed the DIN 4102-1 test with B2 classification. It is noteworthy that no burning dripping could be detected, and the average burning times were very short with exceptionally low weight losses. Therefore, triazene compounds constitute a new and interesting family of radical generators for the flame retarding of polymeric materials. The high flame retardant potential of triazenes can be attributed to their ability to generate various types of radicals during their thermal decomposition. According to thermogravimetric analysis/Fourier transform infrared spectroscopy/MS analysis, triazene units are homolytically cleaved into various aminyl and resonance-stabilized aryl radicals, as well as different CH fragments, with simultaneous evolution of elemental nitrogen. Furthermore, the potential of thirteen aliphatic, aromatic, thiuram and heterocyclic substituted organic disulfide derivatives of the general formula R-S-S-R′ as a new group of halogen-free flame retardants for polypropylene films has been investigated. According to the DIN 4102-1 standard ignitibility test, it has been demonstrated for the first time that many of the disulfides alone can effectively provide flame retardancy and self-extinguishing properties to polypropylene films at concentrations as low as 0.5 wt.%. For the disulfide family, the highest FR activity was recorded for 5,5′-dithiobis(2-nitrobenzoic acid).
Very low values for burning length (53 mm) and burning time (10 s) reflect the significantly increased fire retardant performance of this disulfide compared to the other compounds in this series as well as to Flamestab® NOR116. Finally, two new phosphorus-based flame retardants were synthesized: P,P-diphenyl phosphinic hydrazide (PAH) and melamine phenyl phosphonate (MPhP). The DIN 4102-1 test and the more stringent UL94 vertical burning test (UL94 V) were used to assess the formulations' ability to extinguish a flame once ignited. A very strong synergistic effect with azoalkanes was found, i.e. in combination with these radical generators even a UL94 V0 rating could be obtained.
Abstract:
Identification of low-dimensional structures and the main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential-equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and the reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
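To give a concrete flavour of the ridge projection idea, the sketch below applies a step-capped Newton iteration to the simplest special case: projecting a point onto a mode (a zero-dimensional ridge) of a one-dimensional Gaussian kernel density. The data, bandwidth and step cap are illustrative assumptions, and the cap only loosely imitates a trust region; the thesis method handles higher-dimensional ridges with full convergence guarantees.

```python
import math

def kde(x, data, h):
    """Gaussian kernel density estimate and its first two derivatives."""
    f = df = d2f = 0.0
    for xi in data:
        u = (x - xi) / h
        k = math.exp(-0.5 * u * u) / (h * math.sqrt(2 * math.pi))
        f += k
        df += -u / h * k
        d2f += (u * u - 1) / (h * h) * k
    n = len(data)
    return f / n, df / n, d2f / n

def project_to_mode(x, data, h, max_step=0.5, tol=1e-8, iters=100):
    """Damped Newton iteration x <- x - f'(x)/f''(x), with a step cap."""
    for _ in range(iters):
        _, df, d2f = kde(x, data, h)
        if abs(d2f) < 1e-12:
            break
        step = -df / d2f
        step = max(-max_step, min(max_step, step))  # trust-region-style cap
        x += step
        if abs(step) < tol:
            break
    return x

data = [0.0, 0.1, -0.1, 2.9, 3.0, 3.1]   # two clusters near 0 and 3
print(round(project_to_mode(0.4, data, h=0.5), 3))   # near 0 (closer cluster)
print(round(project_to_mode(2.5, data, h=0.5), 3))   # near 3
```

The starting point determines which mode the iteration converges to, mirroring how the projection step of the thesis sends a point to its nearby ridge.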
Abstract:
Iron is one of the most common elements in the earth's crust, and thus its availability and economic viability far exceed those of metals commonly used in catalysis. The toxicity of iron is also minuscule compared to that of metals such as platinum and nickel, making it very desirable as a catalyst. Despite this, prior to the 21st century, the applicability of iron in catalysis was not thoroughly investigated, as it was considered to be inefficient and unselective in desired transformations. In this doctoral thesis, the application of iron catalysis in combination with organosilicon reagents for transformations of carbonyl compounds has been investigated, together with insights into the iron-catalyzed chlorination of silanes and silanols. In the first part of the thesis, the synthetic application of iron(III)-catalyzed chlorination of silanes (Si-H) and the monochlorination of silanes (SiH2) using acetyl chloride as the chlorine source is described. The reactions proceed under ambient conditions, although some compounds need to be protected from excess moisture. In addition, the mechanism and kinetics of the chlorination reaction are briefly addressed. In the second part of the thesis, a versatile methodology for the transformation of carbonyl compounds into three different compound classes by changing the conditions and amounts of reagents is discussed. One-pot reductive benzylation, reductive halogenation and reductive etherification of ketones and aldehydes, using silanes as the reducing agent, halide source or cocatalyst, were investigated. The reaction kinetics and mechanism of the reductive halogenation of acetophenone are also briefly discussed.
Abstract:
The aim of this thesis was to create a process for all multi-site ramp-up (MSRU) projects in the case company, enabling simultaneous ramp-ups early in the market. The research was carried out as a case study in one company, using semi-structured interviews. Some processes are already in use for MSRU cases. Interviews with 20 ramp-up specialists revealed topics to be improved: project team set-up, roles and responsibilities, recommended project organization, communication, product change management practices, competence and know-how transfer practices, and the support model. More R&D support and involvement is needed in MSRU projects. The DCM's role in MSRU projects within the PMT team is very important; the DCM should be the business owner of the project. It is recommended that product programs take care of product and repair training for new products in volume factories. R&D's participation in competence transfers is essential in MSRU projects. Project communication could be shared through a dedicated intranet community; blogging and tweeting could also be considered in the communication plan. If hundreds of change notes are still open in the ramp-up phase, the product should not be approved for volume ramp-up. PMT support is also important, and MSRU projects should be planned, budgeted and executed together. Finally, a new MSRU process to be used in all MSRU projects is presented in this thesis.
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level.
The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in a hardware description language, namely VHDL.
Abstract:
Recent advances in Information and Communication Technology (ICT), especially those related to the Internet of Things (IoT), are facilitating smart regions. Among the many services that a smart region can offer, remote health monitoring is a typical application of the IoT paradigm. It offers the ability to continuously monitor and collect health-related data from a person and transmit the data to a remote entity (for example, a healthcare service provider) for further processing and knowledge extraction. An IoT-based remote health monitoring system can be beneficial in rural areas belonging to the smart region, where people have limited access to regular healthcare services. The same system can be beneficial in urban areas, where hospitals can be overcrowded and where it may take substantial time to access healthcare. However, such a system may generate a large amount of data. In order to realize an efficient IoT-based remote health monitoring system, it is imperative to study the network communication needs of such a system, in particular the bandwidth requirements and the volume of generated data. The thesis studies a commercial product for remote health monitoring in Skellefteå, Sweden. Based on the results obtained via the commercial product, the thesis identifies the key network-related requirements of a typical remote health monitoring system in terms of real-time event updates, bandwidth requirements and data generation. Furthermore, the thesis proposes an architecture called IReHMo - an IoT-based remote health monitoring architecture. This architecture allows users to incorporate several types of IoT devices to extend the sensing capabilities of the system. Using IReHMo, several IoT communication protocols, such as HTTP, MQTT and CoAP, have been evaluated and compared against each other. The results showed that CoAP is the most efficient protocol for transmitting small healthcare data payloads to remote servers.
The combination of IReHMo and CoAP significantly reduced the required bandwidth as well as the volume of generated data (by up to 56 percent) compared to the commercial product. Finally, the thesis conducted a scalability analysis to determine the feasibility of deploying the combination of IReHMo and CoAP at large scale in regions of northern Sweden.
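Why per-message protocol overhead matters for such a system can be shown with a back-of-the-envelope calculation. The payload and overhead figures below are rough illustrative assumptions (CoAP's base header is only a few bytes over UDP, while HTTP carries verbose headers over TCP); they are not the measurements reported in the thesis.

```python
# Back-of-the-envelope daily data volume for one sensor reporting
# periodically over HTTP vs CoAP. Overhead values are illustrative
# assumptions, not measurements from the thesis.

def daily_volume_bytes(payload, overhead, interval_s):
    """Total bytes per day for one sensor sending every interval_s seconds."""
    messages_per_day = 86400 // interval_s
    return messages_per_day * (payload + overhead)

PAYLOAD = 40          # bytes of sensor data per reading (assumed)
HTTP_OVERHEAD = 300   # headers plus amortized TCP handshake (assumed)
COAP_OVERHEAD = 10    # small binary header plus options, over UDP (assumed)

http = daily_volume_bytes(PAYLOAD, HTTP_OVERHEAD, interval_s=60)
coap = daily_volume_bytes(PAYLOAD, COAP_OVERHEAD, interval_s=60)
saving = 1 - coap / http
print(f"HTTP: {http} B/day, CoAP: {coap} B/day, saving: {saving:.0%}")
# HTTP: 489600 B/day, CoAP: 72000 B/day, saving: 85%
```

With small payloads the fixed per-message overhead dominates, which is why a compact protocol such as CoAP wins for this traffic pattern.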
Abstract:
Perfluorinated alkyl compounds (PFAS compounds) are synthetic organic compounds containing a fluorinated carbon chain. The strong bonds between carbon and fluorine have become a problem at wastewater treatment plants, since the compounds do not degrade with the water treatment methods currently in use at the plants, and they accumulate in nature via wastewater. This bachelor's thesis compares treatment methods for waters containing these compounds in order to find the most suitable method. The costs of the methods and their suitability as treatment-plant-scale processes have not been assessed. In addition, the thesis compiles analytical methods suitable for analysing wastewaters containing these compounds. The suitable treatment and analysis methods are presented on the basis of recent scientific articles. Possible separation methods include membrane separation and sorption. The most suitable membranes are nanofiltration and reverse osmosis membranes, which can separate particles as small as 0.0001 μm. PFAS compounds can also be separated by sorption onto, for example, activated carbon. The structure of the compounds can be broken down by modern oxidation methods and by incineration together with the sludge. Oxidation with permanganate did not give good results, but with photochemical oxidation and low-temperature (non-thermal) plasma technology (NTP) the structure of the compounds was almost completely degraded. Photochemical oxidation was particularly successful for perfluorocarboxylic acids, whose structure degraded in as little as three hours. The most commonly used analytical method is the combination of liquid chromatography and mass spectrometry (LC-MS/MS), and the matrix effect is typically minimized by solid-phase extraction (SPE). Of the treatment methods presented in the thesis, the most suitable is the NTP method, because according to the studies it degraded the structure of the compounds in a shorter time than the other methods and is best suited to all PFAS compounds. The NTP method requires no catalyst or additional chemicals. The strong oxidant consists of unstable hydroxyl radicals, which are generated via corona discharge. The corona discharge also produces ozone, and in addition free oxygen can enhance the oxidation. Controlling the degradation products formed in the method requires further research. One possible control measure could be, for example, precipitation of the fluoride ions released during oxidation. The toxicity of the degradation products formed could be monitored with a biosensor.
Abstract:
The increasing performance of computers has made it possible to solve algorithmically problems for which manual and possibly inaccurate methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem, the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure at different areas. In the second problem, the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted at serial operation or a small number of CPU cores, whereas one, together with its further developments, is suitable also for parallel machines such as GPUs.
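The geometric core of the fetch length problem can be sketched as follows: cast a ray from a study point in a given direction and take the distance to the nearest intersected shoreline segment. The toy square shoreline and function names below are illustrative; the thesis algorithms avoid this brute-force scan over all segments, which would be far too slow for millions of vertices.

```python
import math

def ray_segment_distance(px, py, angle, ax, ay, bx, by):
    """Distance from (px, py) along `angle` to segment (a, b), or None."""
    dx, dy = math.cos(angle), math.sin(angle)
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:            # ray parallel to segment
        return None
    t = ((ax - px) * ey - (ay - py) * ex) / denom   # distance along ray
    s = ((ax - px) * dy - (ay - py) * dx) / denom   # position along segment
    if t >= 0 and 0 <= s <= 1:
        return t
    return None

def fetch_length(px, py, angle, segments):
    """Nearest shoreline hit over all segments, or None for open water."""
    hits = [d for seg in segments
            if (d := ray_segment_distance(px, py, angle, *seg)) is not None]
    return min(hits) if hits else None

# Toy shoreline: a unit square around the study point (0.2, 0.0).
square = [(1, -1, 1, 1), (1, 1, -1, 1), (-1, 1, -1, -1), (-1, -1, 1, -1)]
print(fetch_length(0.2, 0.0, 0.0, square))       # 0.8 (east, to x = 1)
print(fetch_length(0.2, 0.0, math.pi, square))   # 1.2 (west, to x = -1)
```

Repeating this for millions of study points and many directions is exactly what makes the exact and approximate algorithms of the thesis necessary.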
Abstract:
The production of biodiesel through transesterification has created a surplus of glycerol on the international market. In a few years, glycerol has become an inexpensive and abundant raw material, subject to numerous plausible valorisation strategies. Glycerol hydrochlorination stands out as an economically attractive route for the production of biobased epichlorohydrin, an important raw material for the manufacturing of epoxy resins and plasticizers. Glycerol hydrochlorination using gaseous hydrogen chloride (HCl) was studied from a reaction engineering viewpoint. Firstly, a more general and rigorous kinetic model was derived based on a consistent reaction mechanism proposed in the literature. The model was validated with experimental data reported in the literature as well as with new data of our own. Semi-batch experiments were conducted in which the influence of the stirring speed, HCl partial pressure, catalyst concentration and temperature was thoroughly analysed and discussed. Acetic acid was used as a homogeneous catalyst for the experiments. For the first time, it was demonstrated that the liquid-phase volume undergoes a significant increase due to the accumulation of HCl in the liquid phase. Novel and relevant features concerning hydrochlorination kinetics, HCl solubility and mass transfer were investigated. An extended reaction mechanism was proposed and a new kinetic model was derived. The model was tested against the experimental data by means of regression analysis, in which kinetic and mass transfer parameters were successfully estimated. A dimensionless number, called the Catalyst Modulus, was proposed as a tool for corroborating the kinetic model. Reactive flash distillation experiments were conducted to check the commonly accepted hypothesis that the removal of water should enhance the glycerol hydrochlorination kinetics. The performance of the reactive flash distillation experiments was compared to the semi-batch data previously obtained.
An unforeseen effect was observed once the water was allowed to be stripped out from the liquid phase, revealing a strong correlation between the HCl liquid uptake and the presence of water in the system. Water was also revealed to play an important role in the HCl dissociation: as water was removed, the dissociation of HCl was diminished, which had a retarding effect on the reaction kinetics. In order to obtain further insight into the influence of water on the hydrochlorination reaction, extra semi-batch experiments were conducted in which initial amounts of water and the desired product were added. This study revealed the possibility of using the desired product as an ideal “solvent” for the glycerol hydrochlorination process. A co-current bubble column was used to investigate the glycerol hydrochlorination process under continuous operation. The influence of liquid flow rate, gas flow rate, temperature and catalyst concentration on the glycerol conversion and product distribution was studied. The fluid dynamics of the system showed a remarkable behaviour, which was carefully investigated and described. High-speed camera images and residence time distribution experiments were used to collect relevant information about the flow conditions inside the tube. A model based on the axial dispersion concept was proposed and compared with the experimental data. The kinetic and solubility parameters estimated from the semi-batch experiments were successfully used in the description of mass transfer and fluid dynamics of the bubble column reactor. In light of the results brought by the present work, the glycerol hydrochlorination reaction mechanism has finally been clarified. It has been demonstrated that reactive distillation technology may cause drawbacks to the glycerol hydrochlorination reaction rate under certain conditions.
Furthermore, the continuous reactor technology showed a high selectivity towards monochlorohydrins, whilst the semi-batch technology was demonstrated to be more efficient towards the production of dichlorohydrins. Based on the novel and revealing discoveries brought by the present work, many insightful suggestions are made towards the improvement of the production of α,γ-dichlorohydrin on an industrial scale.
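The regression step used to estimate kinetic parameters can be illustrated in its simplest form: for a pseudo-first-order decay C(t) = C0·exp(-k·t), the rate constant k follows from a linear least-squares fit of ln C against t. The data below are synthetic and the model is a deliberate simplification; the thesis fits a far richer kinetic and mass transfer model to real semi-batch data.

```python
import math

def fit_rate_constant(times, concentrations):
    """Least-squares slope of ln(C) vs t, negated to give k > 0."""
    n = len(times)
    ys = [math.log(c) for c in concentrations]
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
    den = sum((t - t_mean) ** 2 for t in times)
    return -num / den

# Synthetic decay with k = 0.12 1/min and C0 = 2.0 mol/L.
ts = [0, 5, 10, 20, 40]
cs = [2.0 * math.exp(-0.12 * t) for t in ts]
print(round(fit_rate_constant(ts, cs), 3))   # 0.12
```

With noisy experimental data the same fit yields an estimate of k together with residuals, which is what the regression analysis of the thesis evaluates when discriminating between candidate kinetic models.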