945 results for Runs
Abstract:
Like other photosynthetic microorganisms, the cyanobacterium Arthrospira platensis can be used to produce pigments, single-cell proteins, fatty acids (which can be used for bioenergy), and food and feed supplements, as well as for the biofixation of CO2. Cultivation in a specifically designed tubular photobioreactor is suitable for photosynthetic biomass production because the cultivation area can be reduced by distributing the microbial cells vertically, thus avoiding loss of ammonia and CO2. The aim of this study was to investigate the influence of light intensity and dilution rate on the photosynthetic efficiency and CO2 assimilation efficiency of A. platensis cultured in a tubular photobioreactor in a continuous process. Urea was used as the nitrogen source, and CO2 served both as the carbon source and for pH control. Steady-state conditions were achieved in most of the runs, indicating that continuous cultivation of this cyanobacterium in a tubular photobioreactor could be an interesting alternative for the large-scale fixation of CO2 to mitigate the greenhouse effect while producing biomass with high protein content.
Abstract:
For fixed positive integers r, k and l with 1 <= l < r and an r-uniform hypergraph H, let kappa(H, k, l) denote the number of k-colorings of the set of hyperedges of H for which any two hyperedges in the same color class intersect in at least l elements. Consider the function KC(n, r, k, l) = max over H in H_n of kappa(H, k, l), where the maximum runs over the family H_n of all r-uniform hypergraphs on n vertices. In this paper, we determine the asymptotic behavior of the function KC(n, r, k, l) for every fixed r, k and l and describe the extremal hypergraphs. This variant of a problem of Erdos and Rothschild, who considered edge colorings of graphs without a monochromatic triangle, is related to the Erdos-Ko-Rado Theorem (Erdos et al., 1961 [8]) on intersecting systems of sets.
Abstract:
A bare graphite-epoxy composite was evaluated as an electrode material for the determination of atenolol in natural water samples and pharmaceutical formulations spiked with the analyte. Using a DPV procedure, a linear response was observed in the 4.45-84.7 µmol L-1 range with a LOD of 2.23 µmol L-1, without the need for surface renewal between successive runs, and with recoveries between 92.5 and 107.5% for pharmaceutical formulations. The results obtained from the proposed procedure agreed with HPLC results at a 95% confidence level. For the determination of atenolol in water samples, recoveries between 96.1 and 102.6% were found.
Abstract:
A bare graphite-polyurethane composite was evaluated for the determination of tetracycline (TC) in natural water samples. Using differential pulse voltammetry (DPV), a linear response was observed in the range of 4.00-40.0 µmol L-1 with a limit of detection of 2.80 µmol L-1, without the need for surface renewal between successive runs. For the tetracycline determination in water samples, recoveries between 92.6 and 100% were found. The results for TC determination in water samples after a pre-concentration step agreed with the spiked values at a 95% confidence level according to Student's t-test.
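Both voltammetric abstracts above rest on the same calibration workflow: fit a straight line to current versus concentration and estimate the limit of detection as roughly 3 times the noise divided by the slope. A minimal sketch of that calculation (the numbers below are illustrative, not the authors' data):

```python
import numpy as np

# Hypothetical calibration data: concentration (µmol/L) vs. peak current (µA)
conc = np.array([4.0, 10.0, 20.0, 30.0, 40.0])
current = np.array([0.41, 1.02, 2.05, 2.98, 4.01])

# Least-squares line: current = slope * conc + intercept
slope, intercept = np.polyfit(conc, current, 1)

# Residual standard deviation as a stand-in for blank noise
residuals = current - (slope * conc + intercept)
s = residuals.std(ddof=2)

# Common LOD estimate: 3 * noise / sensitivity
lod = 3 * s / slope
print(f"slope = {slope:.4f} µA L/µmol, LOD ≈ {lod:.2f} µmol/L")
```

The same slope also gives the sensitivity quoted in such papers; only the noise estimate (blank replicates versus fit residuals) varies between conventions.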
Abstract:
Semi-supervised learning is a classification paradigm in which only a few labeled instances are available for the training process. To overcome this small amount of initial label information, the information provided by the unlabeled instances is also considered. In this paper, we propose a nature-inspired semi-supervised learning technique based on attraction forces. Instances are represented as points in a k-dimensional space, and the movement of data points is modeled as a dynamical system. As the system runs, data items with the same label cooperate with each other, and data items with different labels compete with one another to attract unlabeled points by applying a specific force function. In this way, all unlabeled data items can be classified when the system reaches its stable state. Stability analysis for the proposed dynamical system is performed, and some heuristics are proposed for parameter setting. Simulation results show that the proposed technique achieves good classification results on artificial data sets and is comparable to well-known semi-supervised techniques on benchmark data sets.
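The abstract does not give the force function, so the general idea can only be sketched: labeled points attract unlabeled ones, and each unlabeled point takes the label of the class pulling hardest once the dynamics settle. A toy version with a made-up (softened inverse-square) force, not the paper's actual cooperation/competition scheme:

```python
import numpy as np

def force_label(labeled_pts, labels, unlabeled_pts, steps=50, dt=0.05):
    """Toy attraction-force semi-supervised classifier.

    Labeled anchors attract each unlabeled point with a softened
    inverse-square force; after the dynamics settle, each point takes
    the label of its nearest anchor. Illustrative sketch only.
    """
    pts = np.asarray(unlabeled_pts, dtype=float).copy()
    anchors = np.asarray(labeled_pts, dtype=float)
    for _ in range(steps):
        for i in range(len(pts)):
            diff = anchors - pts[i]                    # vectors toward anchors
            dist = np.linalg.norm(diff, axis=1) + 1e-9
            # softened inverse-square attraction keeps the integration stable
            force = diff / dist[:, None] / (1.0 + dist[:, None] ** 2)
            pts[i] += dt * force.sum(axis=0)
    nearest = [np.argmin(np.linalg.norm(anchors - p, axis=1)) for p in pts]
    return np.array([labels[j] for j in nearest])

labeled = np.array([[0.0, 0.0], [10.0, 10.0]])
out = force_label(labeled, [0, 1], np.array([[1.0, 1.0], [9.0, 9.0]]))
```

With two well-separated anchors, each unlabeled point drifts toward its nearer class and inherits that label.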
Abstract:
This Ph.D. thesis was carried out in the framework of a long-term, large project devoted to describing the main photometric, chemical, evolutionary and integrated properties of a representative sample of Large and Small Magellanic Cloud (LMC and SMC, respectively) clusters. The globular cluster system of these two irregular galaxies provides a rich resource for investigating stellar and chemical evolution and for obtaining a detailed view of the star formation history and chemical enrichment of the Clouds. The results discussed here are based on the analysis of high-resolution photometric and spectroscopic datasets obtained with the latest generation of imagers and spectrographs. The principal aims of this project are summarized as follows:
• The study of the AGB and RGB sequences in a sample of MC clusters, through the analysis of a wide near-infrared photometric database including 33 Magellanic globulars, obtained in three observing runs with the near-infrared camera SOFI@NTT (ESO, La Silla).
• The study of the chemical properties of a sample of MC clusters, using optical and near-infrared high-resolution spectra. Three observing runs were allocated to our group to observe 9 LMC clusters (with ages between 100 Myr and 13 Gyr) with the optical high-resolution spectrograph FLAMES@VLT (ESO, Paranal) and 4 very young (<30 Myr) clusters (3 in the LMC and 1 in the SMC) with the near-infrared high-resolution spectrograph CRIRES@VLT.
• The study of the photometric properties of the main evolutionary sequences in optical Color-Magnitude Diagrams (CMDs) obtained from HST archive data, with the final aim of dating several clusters via comparison between the observed CMDs and theoretical isochrones. Determining the age of a stellar population requires an accurate measurement of the Main Sequence (MS) Turn-Off (TO) luminosity and knowledge of the distance modulus, reddening and overall metallicity.
For this purpose, we limited the age study to the clusters already observed with high-resolution spectroscopy, in order to date only clusters with accurate estimates of the overall metallicity.
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetime compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are increasingly programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve it in reasonable time.
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and optimality gaps by formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.

Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor. Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel-matrix driving circuits and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances.
To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim at decreasing the backlight level while compensating for the luminance reduction, masking the perceived quality degradation with pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.

Thesis overview. The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed-memory architectures with messaging support.
We tackled the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework that solves the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, summarizing the main research contributions discussed throughout this dissertation.
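The backlight-scaling idea described above can be illustrated with a tiny sketch: since perceived luminance is roughly backlight level times pixel value, dimming the backlight by a factor and brightening pixels by its inverse preserves the image except where values clip at full scale. This is a generic illustration of the principle, not the dissertation's hardware-assisted algorithm:

```python
import numpy as np

def backlight_compensate(frame, dim):
    """Dim the backlight to `dim` (0..1) and compensate pixel values.

    Perceived luminance ≈ backlight * pixel, so scaling pixels by
    1/dim keeps the perceived output unchanged except where the
    compensated value clips at 255. Generic sketch of the principle.
    """
    compensated = np.clip(frame.astype(float) / dim, 0, 255)
    return compensated.astype(np.uint8)

frame = np.array([[100, 200, 250]], dtype=np.uint8)
out = backlight_compensate(frame, dim=0.8)
# 100/0.8 = 125, 200/0.8 = 250, 250/0.8 = 312.5 -> clipped to 255
```

The clipped bright pixels are exactly the QoS loss the thesis aims to keep negligible by choosing the dimming factor per frame.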
Abstract:
This paper describes a low-cost system that allows the user to visualize different glasses models in live video. The user can also move the glasses to adjust their position on the face. The system, which runs at 9.5 frames/s on general-purpose hardware, has a homeostatic module that keeps image parameters under control. This is achieved by using a camera with motorized zoom, iris, white balance, etc. This feature can be especially useful in environments with changing illumination and shadows, such as an optical shop. The system also includes a face and eye detection module and a glasses management module.
Abstract:
A new parallel algorithm for simultaneous untangling and smoothing of tetrahedral meshes is proposed in this paper. We provide a detailed analysis of its performance on shared-memory many-core computer architectures. This performance analysis includes the evaluation of execution time, parallel scalability, load balancing, and parallelism bottlenecks. Additionally, we compare the impact of three previously published graph coloring procedures on the performance of our parallel algorithm. We use six benchmark meshes with a wide range of sizes. Using these experimental data sets, we describe the behavior of the parallel algorithm for different data sizes. We demonstrate that this algorithm is highly scalable when it runs on two different high-performance many-core computers with up to 128 processors...
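Graph coloring enters parallel mesh smoothing because two vertices sharing an element must not be relocated at the same time; coloring the vertex-adjacency graph yields independent sets that can be processed concurrently without races. A minimal greedy coloring sketch (illustrative only; the paper compares three specific published coloring procedures, not this one):

```python
def greedy_coloring(adjacency):
    """Assign each vertex the smallest color unused by its neighbors.

    `adjacency` maps vertex -> iterable of neighbor vertices. Vertices
    sharing a color form an independent set, so a mesh smoother may
    relocate all same-colored vertices in parallel.
    """
    colors = {}
    for v in sorted(adjacency):
        used = {colors[n] for n in adjacency[v] if n in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Tiny example: a path 0-1-2 needs only two colors
adj = {0: [1], 1: [0, 2], 2: [1]}
coloring = greedy_coloring(adj)
print(coloring)  # → {0: 0, 1: 1, 2: 0}
```

Fewer colors mean fewer sequential phases, which is why the choice of coloring procedure affects the parallel performance measured in the paper.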
Abstract:
The classical optimal (in the Frobenius sense) diagonal preconditioner for large sparse linear systems Ax = b is generalized and improved. The new proposed approximate inverse preconditioner N is based on the minimization of the Frobenius norm of the residual matrix AM − I, where M runs over a certain linear subspace of n × n real matrices defined by a prescribed sparsity pattern. The number of nonzero entries of the n × n preconditioning matrix N is less than or equal to 2n, and n of them are selected as the optimal positions in each of the n columns of matrix N. All theoretical results are justified in detail…
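For the classical diagonal case that this paper generalizes, minimizing ||AM − I||_F over diagonal M decouples column by column: the optimal j-th entry is m_j = A[j, j] / ||A[:, j]||². A quick numerical check of that formula (illustrative only; the paper's preconditioner N uses a richer sparsity pattern):

```python
import numpy as np

def optimal_diagonal_preconditioner(A):
    """Diagonal M minimizing ||A M - I||_F.

    Column by column, min over m_j of ||m_j * A[:, j] - e_j||^2
    gives m_j = A[j, j] / ||A[:, j]||^2.
    """
    cols_sq = (A ** 2).sum(axis=0)            # ||A[:, j]||^2 for each column
    return np.diag(np.diag(A) / cols_sq)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
M = optimal_diagonal_preconditioner(A)
I = np.eye(A.shape[0])
res_opt = np.linalg.norm(A @ M - I)
# The plain Jacobi scaling diag(1/a_jj) gives a residual at least as large
res_jacobi = np.linalg.norm(A @ np.diag(1 / np.diag(A)) - I)
```

On this 2×2 example the Frobenius-optimal scaling beats Jacobi scaling, as the columnwise derivation guarantees it must for any choice of diagonal entries.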
Abstract:
Salt deposits characterize the subsurface of Tuzla (BiH) and have made it famous since ancient times. Archaeological discoveries demonstrate the presence of a Neolithic pile-dwelling settlement related to the existence of saltwater springs that contributed to making most of the area a swampy ground. Since Roman times the town has been reported as "the City of Salt Deposits and Springs"; "tuz" is the Turkish word for salt, as the Ottomans renamed the settlement in the 15th century following their conquest of medieval Bosnia (Donia and Fine, 1994). Natural brine springs were located everywhere, and salt was evaporated by means of hot charcoals since pre-Roman times. This ancient use of salt was a small-scale exploitation compared to the massive salt production carried out during the 20th century by means of classical mining methods and, especially, wild brine pumping. In the past, salt extraction was practised by tapping natural brine springs, while the modern technique consists of about 100 boreholes with pumps tapped into the natural underground brine runs, at an average depth of 400-500 m. The mining operations changed the hydrogeological conditions, enabling the downward flow of fresh water and causing additional salt dissolution. This process induced severe ground subsidence during the last 60 years, reaching up to 10 meters of sinking in the most affected area. Stress and strain of the overlying rocks induced the formation of numerous fractures over a conspicuous area (3 km²). Consequently, serious damage occurred to buildings and infrastructure such as the water supply system, sewage networks and power lines. Downtown urban life was compromised by the destruction of more than 2000 buildings that collapsed or needed to be demolished, causing the resettlement of about 15000 inhabitants (Tatić, 1979).
Recently, salt extraction activities have been strongly reduced, but the underground water system is returning to its natural conditions, threatening to flood the most collapsed area. During the last 60 years the local government developed a monitoring system for the phenomenon, collecting data on geodetic measurements, the amount of brine pumped, piezometry, lithostratigraphy, the extension of the salt body and geotechnical parameters. A database was created within a scientific cooperation between the municipality of Tuzla and the city of Rotterdam (D.O.O. Mining Institute Tuzla, 2000). The scientific investigation presented in this dissertation has been financially supported by a cooperation project between the Municipality of Tuzla, the University of Bologna (CIRSA) and the Province of Ravenna. The University of Tuzla (RGGF) gave important scientific support, in particular on the geological and hydrogeological features. Subsidence damage resulting from evaporite dissolution generates substantial losses throughout the world, but the causes are well understood in only a few areas (Gutierrez et al., 2008). The subject of this study is the collapse phenomenon occurring in the Tuzla area, with the aim of identifying and quantifying the several factors involved in the system and their correlations. The Tuzla subsidence phenomenon can be defined as a geohazard, which represents the consequence of an adverse combination of geological processes and ground conditions precipitated by human activity with the potential to cause harm (Rosenbaum and Culshaw, 2003). Where a hazard induces a risk to a vulnerable element, a risk management process is required. The single factors involved in the subsidence of Tuzla can be considered as hazards. The final objective of this dissertation is a preliminary risk assessment procedure and guidelines, developed in order to quantify the vulnerability of buildings in relation to the overall geohazard that affects the town.
The available historical database, never fully processed before, has been analyzed by means of geographic information systems and mathematical interpolators (PART I). Modern geomatic applications have been implemented to investigate the most relevant hazards in depth (PART II). In order to monitor and quantify the actual subsidence rates, geodetic GPS technologies have been implemented and four survey campaigns have been carried out, one per year. The subsidence-related fracture system has been identified by means of field surveys and mathematical interpretation of the sinking surface, called curvature analysis. The comparison of mapped and predicted fractures led to a better comprehension of the problem. Results confirmed the reliability of fracture identification using curvature analysis applied to sinking data instead of topographic or seismic data. The evolution of urban changes has been reconstructed by analyzing topographic maps and satellite imagery, identifying the most damaged areas. This part of the investigation was very important for the quantification of building vulnerability.
Development of biosensors: electrode surface modification and enzyme immobilization systems
Abstract:
An amperometric glucose biosensor was developed using an anionic clay matrix (LDH) as enzyme support. The enzyme glucose oxidase (GOx) was immobilized on a Ni/Al-NO3 layered double hydroxide (LDH) during electrosynthesis, followed by crosslinking with glutaraldehyde (GA) vapours or with GA and bovine serum albumin (GA-BSA) to avoid enzyme release. The electrochemical reaction was carried out potentiostatically, at -0.9 V vs. SCE, using a rotating-disc Pt electrode to ensure homogeneity of the electrodeposition suspension, which contained GOx, Ni(NO3)2 and Al(NO3)3 in 0.3 M KNO3. The mechanism responsible for the LDH electrodeposition involves precipitation of the LDH due to the increase of pH at the electrode surface following the cathodic reduction of nitrates. The Pt surface modified with the Ni/Al-NO3 LDH shows much reduced noise, giving rise to a better signal-to-noise ratio for the currents relative to H2O2 oxidation, and a linear range for H2O2 determination wider than that observed for bare Pt electrodes. We assessed the performance of the biosensor in terms of sensitivity to glucose, calculated from the slope of the linear part of the calibration curve for enzymatically produced H2O2; the sensitivity depended on parameters related to the electrodeposition as well as on the working conditions. In order to optimise the glucose biosensor performance with a reduced number of experimental runs, we applied an experimental design. A first screening was performed considering the following variables: deposition time (30-120 s), enzyme concentration (0.5-3.0 mg/mL), Ni/Al molar ratio (3:1 or 2:1) of the electrodeposition solution at a total metal concentration of 0.03 M, and pH of the working buffer solution (5.5-7.0). On the basis of the results of this screening, a full factorial design was carried out, taking into account only the enzyme concentration and the Ni/Al molar ratio of the electrosynthesis solution.
The full factorial design was used to study linear interactions between factors and their quadratic effects, and the optimal setup was evaluated from the isoresponse curves. The significant factors were the enzyme concentration (linear and quadratic terms) and the interaction between enzyme concentration and Ni/Al molar ratio. Since the major obstacle to the application of amperometric glucose biosensors is the interference signal resulting from other electro-oxidizable species present in real matrices, such as ascorbate (AA), the use of different permselective membranes on the Pt-LDH-GOx modified electrode was investigated with the aim of improving biosensor selectivity and stability. Conventional membranes obtained using Nafion, glutaraldehyde (GA) vapours and GA-BSA were tested together with more innovative materials such as palladium hexacyanoferrate (PdHCF) and titania hydrogels. Particular attention was devoted to hydrogels, because they possess some attractive features which are generally considered to favour the biocompatibility of biosensor materials and, consequently, the functional stability of the enzyme. The Pt-LDH-GOx-PdHCF hydrogel biosensor presented anti-interferant ability sufficient for accurate glucose analysis in blood. To further improve biosensor selectivity, protective membranes containing horseradish peroxidase (HRP) were also investigated, with the aim of oxidising the interferants before they reach the electrode surface. In this case glucose determination was also accomplished in real matrices with high AA content. Furthermore, an LDH containing nickel in the oxidised state was applied not only as a support for the enzyme but also as an anti-interferant system. The results are very promising and could be the starting point for further applications in the field of amperometric biosensors; the study could be extended to other oxidase enzymes.
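A two-factor full factorial design with linear, quadratic and interaction terms, like the one described above, can be sketched generically: enumerate the factor-level combinations, then fit a quadratic response surface by least squares. The factor levels and responses below are synthetic placeholders, not the authors' measurements:

```python
import itertools
import numpy as np

# Hypothetical 3-level full factorial in two factors:
# enzyme concentration (mg/mL) and Ni/Al molar ratio
enzyme = [0.5, 1.75, 3.0]
ratio = [2.0, 2.5, 3.0]
design = list(itertools.product(enzyme, ratio))   # 9 runs

# Synthetic responses from a known quadratic model, just to exercise the fit
rng = np.random.default_rng(0)
y = np.array([5 + 2 * e - 0.4 * e**2 + 0.3 * e * r + rng.normal(0, 0.01)
              for e, r in design])

# Quadratic response surface: y ~ 1 + e + r + e^2 + r^2 + e*r
X = np.array([[1, e, r, e**2, r**2, e * r] for e, r in design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Significance of each term would then be judged from the coefficient estimates and their standard errors, and the fitted surface contoured to obtain isoresponse curves like those used in the abstract.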
Abstract:
The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities with repeated cross-sections. Second, it analyses households' financial difficulties: it defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on households' net wealth. The review stresses the fact that a large part of the literature explains households' debt holdings as a function, among other things, of net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households. Estimation refers to a pooling of SHIW cross-sections. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle the presence of non-normality and heteroskedasticity in the error term, which yield inconsistent tobit estimators, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the expected patterns of interdependence suggested by theoretical considerations.
Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems that must be dealt with to obtain consistent estimators. Specific attention is given to the initial-conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of using dynamic panel data models lies in the fact that they allow one to account simultaneously for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using information on net wealth as provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulty are identified as those holding amounts of net wealth lower than the value corresponding to the first quartile of the net wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random-effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model. Results obtained from all estimators accept the null hypothesis of true state dependence and show that, in line with the literature, the less sophisticated models, namely the pooled and exogenous-initial-conditions models, over-estimate such persistence.
Abstract:
In experiments on laser-cooled $Ca^+$ ions stored in a linear Paul trap, the lifetime of the metastable $3D_{5/2}$ level was determined to be 1100(18) ms by observing quantum jumps of single ions. Systematic errors due to quenching collisions or Stark mixing caused by the trapping field lie below the achieved accuracy. Deviations from earlier measurements could be explained by a previously neglected dependence of the lifetime on the power of the repumping laser. The final result is in good agreement with recent theoretical values. In further measurements on ten ions, some measurement series showed a clear reduction of the lifetime compared to a single ion; more coincident decays of two and three ions were observed than would be expected for independent particles. In an ion crystal, a spatial separation of atomic states was achieved: part of the ions of a crystal of a few hundred ions was pumped into the metastable state, which is completely decoupled from the cooling lasers. These ions continue to be cooled sympathetically, and the crystal does not melt. The light pressure exerted by the cooling lasers sorts the ions by atomic state, because the laser-cooled ions experience a recoil while the others do not. For future experiments, improvements to the experimental setup were initiated: methods and components for improved frequency stabilization of the diode lasers were developed.
Abstract:
This work deals with the synthesis and characterization of porous silica gels and their use as supports in the heterogeneous metallocene-catalyzed polymerization of ethylene. The focus was on optimizing this process by tailoring the support properties under otherwise identical polymerization conditions, and on investigating the heterogeneous polymerization process itself. The catalyst system used (methylaluminoxane with dicyclopentadienylzirconium dichloride) exhibits very high activity and, in the heterogeneous process, remains in the product. The mechanism proceeds through several phases, with particular attention paid to the fragmentation of the support particles. Two synthesis concepts were pursued for preparing the supports. In the first part of the work, monodisperse non-porous silica nanoparticles (Monosphere) were assembled into agglomerate supports by a spray-drying process. The stability of the agglomerates was varied by adding monodisperse silica binder particles during preparation. Both the pore-structural and morphological properties of the agglomerate products were examined and correlated with the physico-chemical properties of the nanoparticles. In a second approach, spherical highly porous silica gels with graded porosity at constant specific surface area were prepared and tested as supports in polyethylene synthesis.