13 results for Condition-based maintenance

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 30.00%

Abstract:

Asset Management (AM) is a set of procedures operable at the strategic, tactical and operational level for the management of a physical asset's performance, associated risks and costs within its whole life-cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique to rehabilitate with is rather more abstruse. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise per se. One is confronted with a typical Hamlet-esque dilemma: 'to repair or not to repair'; or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning. 
The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investment in replacing an asset and operational expenditure on maintaining it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison between maintenance and replacement expenditures. The cost of maintaining the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast the life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of 'asset lifetime': service life, economic life and physical life. The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include to estimate maintenance costs for the existing assets, the more precise the estimation of the expected service life will be. 
The ability to include social costs makes it possible to compute the asset life based not only on its physical characterisation, but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include in decision-making factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration and the asset's depreciation rate, without looking at age as the sole criterion for making replacement decisions.
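The replacement-timing logic described in the abstract, comparing the rising operating and maintenance costs of the existing pipe with the annuity of an equivalent new one, can be sketched numerically. All cost figures and rates below are hypothetical placeholders, not values from the thesis:

```python
def annuity(investment, rate, lifetime_years):
    """Equivalent annual cost of a capital investment (standard annuity formula)."""
    return investment * rate / (1 - (1 + rate) ** -lifetime_years)

def optimal_replacement_year(om_cost_now, om_growth, investment, rate, lifetime_years,
                             horizon=100):
    """First year in which the rising O&M cost of the existing pipe exceeds the
    annuity of an equivalent new one; all inputs are illustrative placeholders."""
    new_pipe_annuity = annuity(investment, rate, lifetime_years)
    for year in range(horizon):
        om_cost = om_cost_now * (1 + om_growth) ** year  # O&M rises as the pipe deteriorates
        if om_cost > new_pipe_annuity:
            return year
    return None

# Hypothetical pipe: O&M now 1,500/yr growing 6%/yr; a new pipe costs 50,000,
# financed at 4% over an 80-year lifetime -> replacement indicated in year 6
year = optimal_replacement_year(1500, 0.06, 50000, 0.04, 80)
```

In the thesis's terms, the crossover year is where the rising maintenance cost curve meets the annualised replacement cost; before it, maintaining is cheaper, after it, replacing is.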

Relevance: 30.00%

Abstract:

This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent committed-choice constraint logic programming language consisting of guarded rules which transform multi-sets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06], and it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, being Turing equivalent [SSD05a]. Compositionality is the first CHR aspect to be considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics for CHR which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which permits us to trace the application of propagation rules and consequently to avoid trivial non-termination [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one uses a history to avoid trivial non-termination too. Program transformation is the second CHR aspect to be considered, with particular regard to the unfolding technique. This technique is an appealing approach which allows us to optimize a given program, and in more detail to improve run-time efficiency or space consumption. Essentially it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answer [Frü98], between the original program and the transformed ones. The unfolding technique is one of the basic operations used by most program transformation systems. 
It consists of replacing a procedure call by its definition. In CHR every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of the rule represents the definition of the call. While there is a large body of literature on the transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, confluence and termination maintenance between the original and transformed programs are shown. This thesis is organized in the following manner. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages, with particular attention to CHR and related languages. Then, Section 1.2 introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics. Then, Section 2.2 presents the compositionality results. Afterwards, Section 2.3 expounds the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation needs a particular annotated syntax, which is introduced in Section 3.1, and its related modified operational semantics is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness. 
Then, in Section 3.4 the problems related to the replacement of a rule by its unfolded version are discussed; this in turn gives a correctness condition which holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related work and directions for future work.
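The multiset rewriting at the heart of CHR can be illustrated with the classic gcd solver, in which two committed-choice rules repeatedly simplify a store of gcd/1 constraints until it is exhausted. The Python below is a toy encoding of those two rules for illustration only, not the formal semantics studied in the thesis:

```python
def chr_gcd(store):
    r"""Toy committed-choice rewriting of a multiset store of gcd/1 constraints,
    mimicking the classic CHR program:
        gcd(0)          <=> true                    % remove a zero
        gcd(N) \ gcd(M) <=> M >= N | gcd(M - N)     % simplify the larger value
    Rules are applied repeatedly until a single constraint remains."""
    store = sorted(store)
    while len(store) > 1:
        n, m = store[0], store[1]
        if n == 0:
            store.pop(0)              # gcd(0) <=> true
        else:
            store[1] = m - n          # the guard M >= N holds because store is sorted
            store = sorted(store)
    return store[0]

result = chr_gcd([12, 18, 30])        # single remaining constraint: gcd(6)
```

Note the committed-choice flavour: once a rule fires on a pair of constraints, the rewrite is never undone, exactly as in CHR's operational semantics.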

Relevance: 30.00%

Abstract:

Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow using satellite-based passive microwave spectrometers, is currently an unsolved problem. The challenge results from the large variability of microwave emissivity spectra for snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work focuses on the investigation of a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the infrared spectroradiometer MODIS (on Aqua) have been co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products have been used as truth to calibrate and validate all the proposed algorithms. The methodological approach followed can be summarised in two steps. In the first step, an empirical search for a threshold aimed at discriminating the no-snow case was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce adequate results, a more statistically sound approach was attempted. Two different techniques, which allow computing the probability of snowfall above and below a Brightness Temperature (BT) threshold, have been applied to the available data. The first technique is based upon a logistic distribution to represent the probability of snow given the predictors. The second technique, termed the Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique which does not require any hypothesis on the shape of the probabilistic model (such as, for instance, the logistic one) and only requires the estimation of the BT thresholds. 
The results obtained show that both proposed methods are able to discriminate snowing and non-snowing conditions over the polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
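A logistic model of the first kind mentioned above maps a linear combination of brightness temperatures to a snow probability. The coefficients below are made up purely to illustrate the functional form; the thesis's fitted predictors and coefficients are not reproduced here:

```python
import math

def snow_probability(bts, intercept, coeffs):
    """Logistic model: P(snow | BTs) = 1 / (1 + exp(-(b0 + sum(bi * BTi))))."""
    z = intercept + sum(b * t for b, t in zip(coeffs, bts))
    return 1.0 / (1.0 + math.exp(-z))

def classify_pixel(bts, intercept, coeffs, threshold=0.5):
    """Binary snow / no-snow decision at a fixed probability threshold."""
    return snow_probability(bts, intercept, coeffs) > threshold

# Hypothetical two-channel example with invented coefficients:
p = snow_probability([245.0, 230.0], intercept=40.0, coeffs=[-0.1, -0.07])
```

The BMBP alternative replaces this parametric link function with probabilities estimated directly from counts above and below the BT thresholds.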

Relevance: 30.00%

Abstract:

The development of safe, high-energy and high-power electrochemical energy-conversion systems can be a response to the worldwide demand for clean, low-fuel-consumption transport. This thesis work, starting from basic studies on ionic liquid (IL) electrolytes and carbon electrodes and concluding with tests on large-size IL-based supercapacitor prototypes, demonstrated that the IL-based asymmetric configuration (AEDLC) is a powerful strategy for developing safe, high-energy supercapacitors that might compete with lithium-ion batteries in power-assist hybrid electric vehicles (HEVs). The increase of specific energy in EDLCs was achieved following three routes: i) the use of hydrophobic ionic liquids (ILs) as electrolytes; ii) the design and preparation of carbon electrode materials of tailored morphology and surface chemistry to feature a high capacitance response in IL; and iii) the asymmetric double-layer carbon supercapacitor configuration (AEDLC), which consists of assembling the supercapacitor with different carbon loadings at the two electrodes in order to exploit the wide electrochemical stability window (ESW) of the IL and to reach a high maximum cell voltage (Vmax). Among the various ILs investigated, N-methoxyethyl-N-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide (PYR1(2O1)TFSI) was selected because of its hydrophobicity and high thermal stability up to 350 °C, together with good conductivity and a wide ESW, exploitable over a wide temperature range, even below 0 °C. Because of these exceptional properties, PYR1(2O1)TFSI was used throughout the study to develop large-size IL-based carbon supercapacitor prototypes. This work also highlights that the use of ILs determines chemical-physical properties at the electrode/electrolyte interface different from those formed by conventional electrolytes. 
Indeed, the absence of solvent in ILs means that the interface properties are not mediated by a solvent; thus, the dielectric constant and double-layer thickness strictly depend on the chemistry of the IL ions. The study of carbon electrode materials evidences several factors that have to be taken into account when designing high-performing carbon electrodes in IL. The heat treatment in inert atmosphere of the activated carbon AC, which gave the ACT carbon featuring ca. 100 F/g in IL, demonstrated the importance of surface chemistry in the capacitive response of carbons in hydrophobic ILs. The tailored mesoporosity of the xerogel carbons is a key parameter for achieving a high capacitance response. The CO2-treated xerogel carbon X3a featured a high specific capacitance of 120 F/g in PYR14TFSI; however, because it exhibits a high pore volume, an excess of IL is required to fill the pores with respect to that necessary for the charge-discharge process. Further advances were achieved with electrodes based on the disordered template carbon DTC7, with a pore size distribution centred at 2.7 nm, which featured a notably high specific capacitance of 140 F/g in PYR14TFSI and a moderate pore volume (V>1.5 nm) of 0.70 cm³/g. This thesis work demonstrated that by means of the asymmetric configuration (AEDLC) it was possible to reach a high cell voltage, up to 3.9 V. Indeed, IL-based AEDLCs with the X3a or ACT carbon electrodes exhibited specific energy and power of ca. 30 Wh/kg and 10 kW/kg, respectively. The DTC7 carbon electrodes, featuring a capacitance response 20%-40% higher than that of X3a and ACT, respectively, enabled the development of a PYR14TFSI-based AEDLC with specific energy and power of 47 Wh/kg and 13 kW/kg at 60 °C with a Vmax of 3.9 V. Given the availability of the ACT carbon (obtained from a commercial material), the PYR1(2O1)TFSI-based AEDLCs assembled with ACT carbon electrodes were selected within the EU ILHYPOS project for the development of large-size prototypes. 
This study demonstrated that the PYR1(2O1)TFSI-based AEDLC can operate between -30 °C and +60 °C, and its cycling stability was proved at 60 °C up to 27,000 cycles with a high Vmax of up to 3.8 V. This AEDLC was further investigated following the USABC and DOE FreedomCAR reference protocols for HEVs, to evaluate its dynamic pulse-power and energy features. It was demonstrated that with a Vmax of 3.7 V the challenging energy and power targets stated by the DOE for power-assist HEVs are satisfied at T > 30 °C, and the standards for the 12V-TSS, 42V-FSS and TPA 2s-pulse applications at T > 0 °C, provided the ratio w_module/w_SC = 2 is accomplished, which, however, is a very demanding condition. Finally, suggestions for further advances in IL-based AEDLC performance were found. In particular, given that the main contribution to the ESR is the electrode charging resistance, which in turn is affected by the ionic resistance in the pores, itself modulated by the pore length, the pore geometry is a key parameter in carbon design: not only does it define the carbon surface, but it can also differentially "amplify" the effect of IL conductivity on the electrode charging-discharging process and, thus, on the supercapacitor time constant.
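The headline figures above (tens of Wh/kg at a Vmax near 3.9 V) follow from the standard capacitor relations E = ½CV² for energy and P = V²/(4·ESR) for matched-load power. A back-of-the-envelope sketch with purely illustrative cell parameters, not the prototypes' actual values:

```python
def specific_energy_wh_kg(capacitance_f, v_max, mass_kg):
    """E = 1/2 * C * V^2, converted from joules to Wh and divided by the mass."""
    return 0.5 * capacitance_f * v_max ** 2 / 3600.0 / mass_kg

def specific_power_w_kg(v_max, esr_ohm, mass_kg):
    """Matched-load peak power P = V^2 / (4 * ESR), per kilogram."""
    return v_max ** 2 / (4.0 * esr_ohm) / mass_kg

# Illustrative cell: 100 F, Vmax = 3.9 V, 10 mOhm ESR, 50 g total cell mass
e = specific_energy_wh_kg(100.0, 3.9, 0.05)
p = specific_power_w_kg(3.9, 0.010, 0.05)
```

The quadratic dependence on Vmax is why widening the IL's electrochemical stability window (and hence the cell voltage) pays off so strongly in specific energy.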

Relevance: 30.00%

Abstract:

The aim of this thesis was to describe the development of motion analysis protocols for applications on the upper and lower limbs, using inertial sensor-based systems. Inertial sensor-based systems are relatively recent. Knowledge and development of methods and algorithms for the use of such systems for clinical purposes are therefore limited compared with stereophotogrammetry. However, their advantages in terms of low cost, portability and small size are a valid reason to follow this direction. When developing motion analysis protocols based on inertial sensors, attention must be given to several aspects, such as the accuracy of inertial sensor-based systems and their reliability. The need to develop specific algorithms, methods and software for using these systems in specific applications is as important as the development of the motion analysis protocols based on them. For this reason, the goal of the 3-year research project described in this thesis was achieved, first of all, by trying to correctly design the protocols based on inertial sensors, exploring and developing the features suitable for each protocol's specific application. The use of optoelectronic systems was necessary because they provide a gold-standard, accurate measurement, which was used as a reference for the validation of the protocols based on inertial sensors. The protocols described in this thesis can be particularly helpful for rehabilitation centers in which the high cost of instrumentation or the limited working areas do not allow the use of stereophotogrammetry. Moreover, many applications requiring upper and lower limb motion analysis to be performed outside the laboratory will benefit from these protocols, for example gait analysis performed along corridors. Outdoors, steady-state walking conditions or the behavior of prosthetic devices when encountering slopes or obstacles during walking can also be assessed. 
The application of inertial sensors to lower limb amputees presents conditions which are challenging for magnetometer-based systems, due to the ferromagnetic materials commonly adopted in the construction of hydraulic components and motors. The INAIL Prostheses Centre stimulated and, together with Xsens Technologies B.V., supported the development of additional methods for improving the accuracy of the MTx sensors in measuring the 3D kinematics of lower limb prostheses, with the results provided in this thesis. In the author's opinion, this thesis and the motion analysis protocols based on inertial sensors described here are a demonstration of how a close collaboration between industry, clinical centers and research laboratories can improve knowledge and exchange know-how, with the common goal of developing new application-oriented systems.
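The basic sensor-fusion idea behind inertial orientation estimation can be conveyed with a single-axis complementary filter: gyroscope integration is accurate in the short term but drifts, while the accelerometer's gravity reading is noisy but drift-free. This is a generic textbook sketch with an assumed blending coefficient, not the thesis's (magnetometer-related) correction method:

```python
import math

def complementary_filter(gyro_rates, accel_samples, dt, alpha=0.98):
    """Single-axis tilt estimation: blend integrated gyro rate (weight alpha)
    with the inclination inferred from the accelerometer (weight 1 - alpha).
    alpha = 0.98 is an assumed, illustrative value."""
    angle = 0.0
    history = []
    for omega, (ax, az) in zip(gyro_rates, accel_samples):
        accel_angle = math.atan2(ax, az)  # tilt inferred from the gravity vector
        angle = alpha * (angle + omega * dt) + (1 - alpha) * accel_angle
        history.append(angle)
    return history

# Static example: zero angular rate, accelerometer seeing a constant 0.5 rad tilt
tilt = complementary_filter([0.0] * 300,
                            [(math.sin(0.5), math.cos(0.5))] * 300, dt=0.01)
```

With zero gyro input, the estimate converges to the accelerometer's 0.5 rad tilt, illustrating how the drift-free channel anchors the long-term estimate.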

Relevance: 30.00%

Abstract:

The Gaia space mission is a major project for the European astronomical community. Challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Due to both the different observing sites involved and the huge number of frames expected (≃100,000), it is essential to maintain maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the "instrument familiarization plan") and to devise methods to keep under control, and where possible correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves. 
In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of SPSS imaging data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
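The aperture photometry step of such a pipeline can be sketched in a few lines: sum the counts inside a circular aperture and subtract a sky level estimated in a surrounding annulus. This is a minimal illustration of the technique, not the thesis's actual pipeline, which adds centroiding, bad-pixel handling and error propagation:

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_aper, r_in, r_out):
    """Fixed-aperture photometry: background-subtracted flux of a source at
    (x0, y0), with the per-pixel sky level taken as the median of an annulus."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    sky = np.median(image[(r >= r_in) & (r < r_out)])  # per-pixel background
    aperture = r < r_aper
    return image[aperture].sum() - sky * aperture.sum()

# Synthetic frame: flat sky of 10 counts plus a 100-count point source
frame = np.full((50, 50), 10.0)
frame[20, 20] += 100.0
flux = aperture_photometry(frame, 20, 20, r_aper=3, r_in=6, r_out=10)
```

Relative (differential) photometry for light curves then divides such fluxes by those of comparison stars measured in the same frame, cancelling transparency variations.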

Relevance: 30.00%

Abstract:

The aim of this thesis is to apply multilevel regression models in the context of household surveys. The hierarchical structure in this type of data is characterized by many small groups. In recent years, comparative and multilevel analyses in the field of perceived health have grown in number. The purpose of this thesis is to develop a multilevel analysis with three levels of hierarchy for the Physical Component Summary outcome, in order to: evaluate the magnitude of the within- and between-group variance at each level (individual, household and municipality); explore which covariates affect perceived physical health at each level; compare the model-based and design-based approaches in order to establish the informativeness of the sampling design; and estimate a quantile regression for hierarchical data. The target population is Italian residents aged 18 years and older. Our study shows a high degree of homogeneity among level-1 units belonging to the same group, with an intraclass correlation of 27% in a level-2 null model. Almost all the variance is explained by level-1 covariates. In fact, in our model the explanatory variables having the most impact on the outcome are disability, inability to work, age and chronic diseases (18 pathologies). An additional analysis was performed using a novel procedure, the Linear Quantile Mixed Model, here applied as a "Multilevel Linear Quantile Regression" estimate. This gives us the possibility of describing the conditional distribution of the response more generally, through the estimation of its quantiles, while accounting for the dependence among the observations. This represented a great advantage of our models with respect to classic multilevel regression. The median regression with random effects proves to be more efficient than the mean regression in representing the central tendency of the outcome. A more detailed analysis of the conditional distribution of the response at other quantiles highlighted a differential effect of some covariates along the distribution.
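The intraclass correlation quoted above (27% in a level-2 null model) is the between-group share of the total variance, ICC = σ²_between / (σ²_between + σ²_within). A minimal method-of-moments sketch on synthetic, balanced household data (all numbers below are illustrative, not the survey's):

```python
import random

def intraclass_correlation(groups):
    """One-way ANOVA (method-of-moments) ICC estimate for balanced groups:
    the share of total variance attributable to between-group differences."""
    k, n = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    var_between = max((msb - msw) / n, 0.0)
    return var_between / (var_between + msw)

# Synthetic survey: 200 households of 4 members; household effect sd = 0.6,
# individual residual sd = 1.0, so the population ICC is 0.36 / 1.36, about 0.26
random.seed(0)
households = []
for _ in range(200):
    u = random.gauss(0.0, 0.6)
    households.append([50.0 + u + random.gauss(0.0, 1.0) for _ in range(4)])
icc = intraclass_correlation(households)
```

In practice the thesis's null-model ICC would come from the estimated variance components of a fitted mixed model rather than this closed-form balanced-design estimator.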

Relevance: 30.00%

Abstract:

Inflammatory Bowel Diseases (IBD) are chronic relapsing intestinal diseases whose etiopathogenesis remains uncertain. Several groups have attempted to study the role of the factors involved, such as genetic susceptibility, environmental factors (smoke, diet, sex), immunological factors, as well as the microbiome. None of the available treatments satisfies several criteria at the same time, such as safety, long-term remission, histopathological healing and specificity. We used two different approaches for the development of new therapeutic treatments for IBD. The first focuses on understanding the potential role of functional foods and nutraceutical nutrients in the treatment of IBD. To do so, we investigated the role of Curcuma longa in the treatment of chemically induced colitis in a mouse model. Since Curcuma longa has been investigated for its anti-inflammatory role related to the TNFα pathway, and investigators have reported a few cases of patients with ulcerative colitis treated with this herb, we harbored the hypothesis of a role for Curcuma longa in the treatment of IBD, and we also decided to assess its role in intestinal motility. The second part is based on an immunological approach to develop new drugs to induce suppression in Crohn's disease or to induce mucosal immunity, as in colorectal tumors. The main idea behind this approach is that we could manipulate relevant cell-cell interactions using synthetic peptides. We demonstrated the role of the unique interaction between molecules expressed on intestinal epithelial cells, such as CD1d and CEACAM5, and on CD8+ T cells. In normal conditions this interaction has a role in the expansion of suppressor CD8+ T cells. Here, we characterized this interaction, defined the epitopes involved in the binding, and attempted to develop synthetic peptides from the N domain of CEACAM5 in order to manipulate it.

Relevance: 30.00%

Abstract:

In the last few years the resolution of numerical weather prediction (NWP) models has become higher and higher with the progress of technology and knowledge. As a consequence, a great number of initial data has become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and operational application is becoming more frequent. The fact that many NWP centres have recently put convection-permitting forecast models into operation, many of which assimilate radar data, emphasizes the need for an approach to providing quality information, which is needed in order to avoid radar errors degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma scale; this scale can be modelled only with the highest-resolution NWP models, such as the COSMO-2 model. One of the problems of modelling extreme convective events concerns the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated into a high-resolution model is about 10 km, a value too coarse for a correct representation of convective initial conditions. Assimilation of radar data, with its kilometre-scale resolution every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work shows some preliminary experiments on the coupling of a high-resolution meteorological model with a hydrological one.
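The core idea of latent heat nudging is to rescale the model's vertical latent-heating profile by the ratio of observed (radar-derived) to modelled precipitation rate, with the ratio clamped to avoid extreme increments. The sketch below is conceptual only; operational LHN schemes add temporal filtering, searches for nearby model profiles and, as this thesis argues, quality weights for the radar data. The clamping limits are assumed values:

```python
def latent_heat_nudging(heating_profile, rr_model, rr_obs, f_min=0.5, f_max=2.0):
    """Rescale a vertical latent-heating profile by the observed/modelled
    precipitation-rate ratio, clamped to [f_min, f_max] (illustrative limits)."""
    if rr_model <= 0.0:
        return list(heating_profile)  # no modelled rain here: nothing to rescale
    scale = min(max(rr_obs / rr_model, f_min), f_max)
    return [h * scale for h in heating_profile]
```

A per-pixel radar quality index would naturally enter here as a weight on the increment, which is exactly where the proposed quality description plugs into the scheme.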

Relevance: 30.00%

Abstract:

Throughout the alpine domain, shallow landslides represent a serious geologic hazard, often causing severe damage to infrastructure, private property and natural resources and, in the most catastrophic events, threatening human lives. Landslides are a major factor of landscape evolution in mountainous and hilly regions and represent a critical issue for mountainous land management, since they cause the loss of pastoral lands. In several alpine contexts, the distribution of shallow landsliding is strictly connected to the presence and condition of vegetation on the slopes. With the aid of high-resolution satellite images, it is possible to automatically divide the mountainous territory into land cover classes, which contribute to slope stability with different magnitudes. The aim of this research is to combine EO (Earth Observation) land cover maps with ground-based measurements of the land cover properties. In order to achieve this goal, a new procedure has been developed to automatically detect grass mantle degradation patterns from satellite images. Moreover, innovative surveying techniques and instruments are tested to measure in situ the shear strength of the grass mantle and the geomechanical and geotechnical properties of these alpine soils. The shallow landsliding distribution is assessed with the aid of physically based models, which use the EO-based map to distribute the resistance parameters across the landscape.
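Physically based shallow-landslide models of the kind mentioned above commonly rest on the infinite-slope stability equation, in which vegetation enters as an additional root-cohesion term, which is precisely the resistance parameter an EO land cover map can distribute across the landscape. A minimal sketch with illustrative parameter values (not the thesis's calibrated model):

```python
import math

def factor_of_safety(slope_deg, soil_depth, gamma_soil, cohesion, root_cohesion,
                     phi_deg, pore_pressure):
    """Infinite-slope factor of safety, resisting over driving stresses:
    FS = (c_soil + c_root + (gamma*z*cos^2(b) - u) * tan(phi))
         / (gamma * z * sin(b) * cos(b));  FS < 1 indicates instability.
    Units: kPa, kN/m^3, m, degrees."""
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal = gamma_soil * soil_depth * math.cos(b) ** 2 - pore_pressure
    resisting = cohesion + root_cohesion + normal * math.tan(phi)
    driving = gamma_soil * soil_depth * math.sin(b) * math.cos(b)
    return resisting / driving

# Illustrative 35-degree slope with 1 m of soil: an intact grass mantle adds
# root cohesion (here an assumed 5 kPa); a degraded mantle contributes none
fs_intact = factor_of_safety(35, 1.0, 18.0, 2.0, 5.0, 33, 3.0)
fs_degraded = factor_of_safety(35, 1.0, 18.0, 2.0, 0.0, 33, 3.0)
```

With these invented numbers the degraded cell drops below FS = 1 while the intact one stays stable, which is the mechanism linking grass mantle degradation maps to landslide susceptibility.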

Relevance: 30.00%

Abstract:

Over the last 60 years, computers and software have favoured incredible advancements in every field. Nowadays, however, these systems are so complicated that it is difficult, if not impossible, to understand whether they meet some requirement or are able to show some desired behaviour or property. This dissertation introduces a Just-In-Time (JIT) a posteriori approach to performing the conformance check, in order to identify any deviation from the desired behaviour as soon as possible and possibly apply some corrections. The declarative framework that implements our approach, entirely developed on the promising open-source forward-chaining Production Rule System (PRS) named Drools, consists of three components:
1. a monitoring module based on a novel, efficient implementation of Event Calculus (EC);
2. a general-purpose hybrid reasoning module (the first of its genre) merging temporal, semantic, fuzzy and rule-based reasoning;
3. a logic formalism based on the concept of expectations, introducing Event-Condition-Expectation rules (ECE-rules) to assess the global conformance of a system.
The framework is also accompanied by an optional module that provides Probabilistic Inductive Logic Programming (PILP). By shifting the conformance check from after execution to just in time, this approach combines the advantages of many a posteriori and a priori methods proposed in the literature. Quite remarkably, if the corrective actions are explicitly given, the reactive nature of this methodology makes it possible to reconcile any deviation from the desired behaviour as soon as it is detected. In conclusion, the proposed methodology brings some advancements towards solving the problem of conformance checking, helping to fill the gap between humans and increasingly complex technology.
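The Event Calculus at the heart of the monitoring module can be illustrated with its basic query: a fluent holds at time t if some earlier event initiated it and no intervening event terminated it afterwards. The Python below is a toy sketch of that idea only; the thesis's actual implementation runs on Drools, and the fluent and event names here are invented:

```python
def holds_at(fluent, t, events):
    """Minimal Event Calculus query over a sorted event narrative.
    events: sequence of (time, action, fluent) with action in
    {'initiates', 'terminates'}; only events strictly before t count."""
    state = False
    for time, action, f in sorted(events):
        if time >= t or f != fluent:
            continue
        state = (action == 'initiates')  # the latest relevant event wins
    return state

# Toy trace: a hypothetical 'valid_session' fluent initiated at t=1, terminated at t=5
trace = [(1, 'initiates', 'valid_session'), (5, 'terminates', 'valid_session')]
```

An ECE-rule-style check would then compare such derived fluent states against declared expectations and flag any deviation the moment the violating event arrives.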

Relevance: 30.00%

Abstract:

In the present thesis, a new diagnosis methodology based on advanced use of time-frequency analysis techniques is presented. More precisely, a new fault index that allows tracking individual fault components in a single frequency band is defined. In more detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each single fault frequency component is shifted into a prefixed single frequency band. Then, the discrete Wavelet Transform is applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the developed method over existing diagnosis techniques are the following:
- capability of monitoring the fault evolution continuously over time, under any transient operating condition;
- no requirement for speed/slip measurement or estimation;
- higher accuracy in filtering frequency components around the fundamental in the case of rotor faults;
- reduction in the likelihood of false indications by avoiding confusion with other fault harmonics (the contribution of the most relevant fault frequency components under speed-varying conditions is confined to a single frequency band);
- low memory requirement due to the low sampling frequency;
- reduction in the latency of time processing (no requirement for repeated sampling operations).
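The energy-based fault index can be sketched with a single-level Haar wavelet decomposition: split the (already frequency-shifted) signal into approximation and detail coefficients, then take the energy of the coefficients in the band of interest as a scalar severity indicator. A toy sketch with a hand-rolled Haar step; the thesis's actual filter bank, decomposition depth and index definition may differ:

```python
import math

def haar_step(signal):
    """One level of the Haar discrete wavelet transform: approximation and
    detail coefficients computed from adjacent sample pairs."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2.0)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2.0)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def fault_index(signal):
    """Energy of the detail coefficients, used as a scalar severity indicator
    for the frequency band isolated by the preceding frequency sliding."""
    _, detail = haar_step(signal)
    return sum(d * d for d in detail)

smooth = fault_index([1.0, 1.0, 1.0, 1.0])         # no high-frequency content
alternating = fault_index([1.0, -1.0, 1.0, -1.0])  # strong high-frequency content
```

A growing index over successive signal windows would then track the fault evolution continuously, including under transient operating conditions.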

Relevance: 30.00%

Abstract:

The present thesis is focused on the study of innovative Si-based materials for third-generation photovoltaics. In particular, silicon oxynitride (SiOxNy) thin films and Silicon-Rich Carbide (SRC)/Si multilayers have been characterized in view of their application in photovoltaics. SiOxNy is a promising material for applications in thin-film solar cells as well as in wafer-based silicon solar cells, such as silicon heterojunction solar cells. However, many issues relevant to the material properties have not been studied yet, such as the role of the deposition conditions and precursor gas concentrations in the optical and electronic properties of the films, and the composition and structure of the nanocrystals. The results presented in the thesis aim to clarify the effects of annealing and of oxygen incorporation within nc-SiOxNy films on their properties, in view of photovoltaic applications. Silicon nanocrystals (Si NCs) embedded in a dielectric matrix were proposed as absorbers in all-Si multi-junction solar cells due to the quantum confinement capability of Si NCs, which allows a better match to the solar spectrum thanks to the size-induced tunability of the band gap. Despite the efficient solar radiation absorption capability of this structure, its charge collection and transport properties have still to be fully demonstrated. The results presented in the thesis aim at understanding the transport mechanisms at the macroscopic and microscopic scale. Experimental results on SiOxNy thin films and SRC/Si multilayers have been obtained at the macroscopic and microscopic level using different characterization techniques, such as Atomic Force Microscopy, reflection and transmission measurements, High-Resolution Transmission Electron Microscopy, Energy-Dispersive X-ray spectroscopy and Fourier Transform Infrared Spectroscopy. 
The deep knowledge and improved understanding of the basic physical properties of these quite complex, multi-phase and multi-component systems, made of nanocrystals and amorphous phases, will contribute to improving the efficiency of Si-based solar cells.
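The size-induced band-gap tunability mentioned above is often approximated, in an effective-mass picture, as the bulk gap plus a confinement term that grows as the nanocrystal shrinks. The functional form E_g(d) = E_bulk + C/d^n is standard, but the constants below are a rough, literature-style parametrisation assumed purely for illustration, not values from this thesis:

```python
def nc_band_gap(diameter_nm, e_bulk=1.12, c=3.7, n=1.4):
    """Band gap (eV) of a Si nanocrystal of given diameter (nm):
    E_g(d) = E_bulk + C / d**n, with c and n assumed illustrative constants
    (only e_bulk = 1.12 eV, bulk silicon, is a standard figure)."""
    return e_bulk + c / diameter_nm ** n

# Shrinking the crystal widens the gap, the origin of the size-induced
# tunability of the absorption edge in all-Si multi-junction absorbers
gaps = [nc_band_gap(d) for d in (2.0, 3.0, 5.0)]
```

The practical consequence is that stacking junctions with different nanocrystal sizes lets each layer absorb a different slice of the solar spectrum.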