29 results for Tourist literature analysis

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 80.00%

Abstract:

The main reasons for the attention focused on ceramics as possible structural materials are their wear resistance and their ability to operate with limited oxidation and ablation at temperatures above 2000°C. Hence, this work is devoted to the study of two classes of materials which can satisfy these requirements: silicon carbide (SiC)-based ceramics for wear applications, and borides and carbides of transition metals for ultra-high temperature applications (UHTCs).

SiC-based materials: Silicon carbide is a hard ceramic which finds applications in many industrial sectors, from heat production to automotive engineering and metals processing. In view of new fields of use, SiC-based ceramics were produced with the addition of 10-30 vol% of MoSi2, in order to obtain electroconductive ceramics. MoSi2, indeed, is an intermetallic compound which possesses high-temperature oxidation resistance, low electrical resistivity (21·10⁻⁶ Ω·cm), relatively low density (6.31 g/cm³), a high melting point (2030°C) and high stiffness (440 GPa). The SiC-based ceramics were hot pressed at 1900°C with Al2O3-Y2O3 or Y2O3-AlN as sintering additives. The microstructures of the composites and of the reference materials, SiC and MoSi2, were studied by means of conventional analytical techniques, such as X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy dispersive spectroscopy (SEM-EDS). The composites showed a homogeneous microstructure, with good dispersion of the secondary phases and low residual porosity. The following thermo-mechanical properties of the SiC-based materials were measured: Vickers hardness (HV), Young's modulus (E), fracture toughness (KIc) and room- to high-temperature flexural strength (σ). Compared with the two monolithic SiC and MoSi2 materials, the composites showed higher stiffness and fracture toughness and slightly higher flexural strength. Tribological tests were also performed in two configurations, disc-on-pin and slider-on-cylinder, aiming at studying the wear behaviour of SiC-MoSi2 composites with Al2O3 as the counterface material. The tests pointed out that the addition of MoSi2 was detrimental, owing to a lower hardness in comparison with the pure SiC matrix. On the other hand, electrical measurements revealed that the addition of 30 vol% of MoSi2 rendered the composite electroconductive, lowering the electrical resistivity by three orders of magnitude.

Ultra-High Temperature Ceramics: Carbides, borides and nitrides of transition metals (Ti, Zr, Hf, Ta, Nb, Mo) possess very high melting points and interesting engineering properties, such as high hardness (20-25 GPa), high stiffness (400-500 GPa), flexural strengths which remain unaltered from room temperature to 1500°C, and excellent corrosion resistance in aggressive environments. All these properties make UHTCs potential candidates for the development of manoeuvrable hypersonic flight vehicles with sharp leading edges. To this end, Zr- and Hf-carbide and boride materials were produced with the addition of 5-20 vol% of MoSi2. This secondary phase enabled fully dense composites to be obtained at temperatures lower than 2000°C and without the application of pressure. Besides the conventional microstructural analyses (XRD, SEM-EDS), transmission electron microscopy (TEM) was employed to explore the microstructure on a small length scale and disclose the effective densification mechanisms. A thorough literature analysis revealed that neither detailed TEM work nor reports on densification mechanisms are available for this class of materials, although they are essential to optimize the sintering aids utilized and the processing parameters applied. Microstructural analyses, along with thermodynamic and crystallographic considerations, disclosed the effective role of MoSi2 during sintering of Zr- and Hf-carbides and borides. Among the investigated mechanical properties (HV, E, KIc, σ from room temperature to 1500°C), the high-temperature flexural strength was improved, due to the protective and sealing effect of a silica-based glassy phase, especially for the borides. Nanoindentation tests were also performed on HfC-MoSi2 composites in order to extract the hardness and elastic modulus of the single phases. Finally, arc-jet tests on HfC- and HfB2-based composites confirmed the excellent oxidation behaviour of these materials at temperatures exceeding 2000°C: no cracking or spallation occurred, and the modified layer was only 80-90 μm thick.

Relevance: 30.00%

Abstract:

Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics in order to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers, baropodometric insoles, etc. This thesis focuses on the force platform (FP), and in particular on the quality assessment of FP data. The principal objective of our work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows:

Chapter 1. Description of the physical principles underlying the functioning of an FP: how these principles are used to create force transducers, such as strain gauges and piezoelectric transducers; a description of the two categories of FPs, three- and six-component, of signal acquisition (hardware structure) and of signal calibration; finally, a brief description of the use of FPs in HMA, for balance or gait analysis.

Chapter 2. Description of inverse dynamics, the most common method used in the field of HMA. This method uses the signals measured by an FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly without very invasive techniques; consequently they can only be estimated using indirect techniques, such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis.

Chapter 3. State of the art in FP calibration. The selected literature is divided into sections, each describing: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; and systems and procedures for the construction of an FP. In particular, a calibration system designed by our group, based on the theoretical method proposed by [?], is described in detail. This system was the starting point for the new system presented in this thesis.

Chapter 4. Description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure, for correctly performing the calibration process. The characteristics of the algorithm were optimized by a simulation approach, whose results are presented here. In addition, the different versions of the device are described.

Chapter 5. Experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the centre of pressure of an applied force. The new system can estimate local and global calibration matrices; using these, the non-linearity of the FPs was quantified and locally compensated. Further, a non-linear calibration is proposed, which compensates the non-linear effects in FP behaviour due to the bending of its upper plate. The experimental results are presented.

Chapter 6. Influence of FP calibration on the estimation of kinetic quantities with the inverse dynamics approach.

Chapter 7. The conclusions of this thesis: the need for FP calibration and the consequent enhancement in kinetic data quality.

Appendix: Calibration of the load cell (LC) used in the presented system. Different calibration set-ups of a 3D force transducer are presented and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
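As a minimal numerical sketch of the global calibration idea (hypothetical data shapes and simulated readings, not the thesis implementation): given a set of reference loads applied by a calibration device and the corresponding raw platform outputs, a global calibration matrix can be estimated in a least-squares sense.

```python
import numpy as np

# Minimal sketch (hypothetical data): estimate a global 6x6 calibration
# matrix C such that  L_ref ≈ C @ L_meas, where each column holds one
# loading condition [Fx, Fy, Fz, Mx, My, Mz].
rng = np.random.default_rng(0)

n_loads = 200                          # number of applied reference loads
L_ref = rng.normal(size=(6, n_loads))  # reference loads (calibration device)
C_true = np.eye(6) + 0.05 * rng.normal(size=(6, 6))  # unknown platform error
L_meas = np.linalg.solve(C_true, L_ref)              # simulated raw FP readings

# Least-squares estimate: solve  L_meas.T @ C.T ≈ L_ref.T.
C_hat, *_ = np.linalg.lstsq(L_meas.T, L_ref.T, rcond=None)
C_hat = C_hat.T

# After calibration, corrected outputs C_hat @ L_meas should match L_ref.
print("max residual:", np.max(np.abs(C_hat @ L_meas - L_ref)))
```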

Relevance: 30.00%

Abstract:

Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting these measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, which defines the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution for overcoming the shortcomings of Fourier analysis. The fundamental idea behind this innovative analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp spikes. Wavelets are used in essentially two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. Both types of application of wavelets in the geophysical field are the object of study of this work. We first use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic model; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, again starting with synthetic tests to evaluate its sensitivity and capabilities, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
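As a small illustration of the parameterization idea (a sketch using the PyWavelets library on a synthetic map, not the thesis code), a 2D velocity-perturbation model can be represented by a few large wavelet coefficients, capturing both a smooth background and a sharp local anomaly:

```python
import numpy as np
import pywt  # PyWavelets

# Illustrative sketch: represent a 2-D velocity-perturbation map in a
# wavelet basis and keep only the largest coefficients, mimicking a
# sparse wavelet parameterization of a tomographic model.
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
# Synthetic model: a smooth background anomaly plus a sharp local one.
model = np.exp(-((x - 20) ** 2 + (y - 40) ** 2) / 200.0) \
      + 0.5 * ((np.abs(x - 45) < 4) & (np.abs(y - 15) < 4))

coeffs = pywt.wavedec2(model, wavelet='db4', level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

# Keep the 5% largest coefficients, zero the rest.
thresh = np.quantile(np.abs(arr), 0.95)
arr[np.abs(arr) < thresh] = 0.0

recon = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format='wavedec2'),
                      wavelet='db4')
print("relative error:",
      np.linalg.norm(recon[:ny, :nx] - model) / np.linalg.norm(model))
```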

Relevance: 30.00%

Abstract:

This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias is perhaps the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, and the resulting approaches, Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) and Heckman's selection model (Heckman, 1979), are widely accepted and used as the best fixes. These solutions to the bias that arises, in particular, from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows the presence of selection bias to be measured and tested in an automatic and multivariate way. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given this measure of selection bias, we propose a multivariate test of imbalance to check whether the detected bias is significant, using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and preserving the multivariate nature of the data. Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, with the multivariate test of imbalance as a stopping rule in choosing the best cluster solution set. The method is non-parametric: it does not call for modelling the data on the basis of some underlying theory or assumption about the selection process, but instead exploits the existing variability within the data, letting the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the consideration that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the evaluation methods literature; attention is focused on Rubin's Potential Outcome Approach, matching methods, and briefly on Heckman's selection model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to test balance correctly. The third part contains the original contribution proposed, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss the results, draw conclusions and outline future perspectives.
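As a minimal sketch of the underlying idea (simulated data; the thesis uses a partial dependence analysis of the X-space rather than this plain chi-square computation), the dependence between categorical covariates X and a treatment indicator T can be summarized by the total inertia of their contingency table and tested for significance:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Hedged sketch: quantify imbalance between a pre-treatment categorical
# covariate X and a treatment indicator T as the total inertia
# (chi-square / n) of their contingency table. Zero inertia means T
# carries no information on X (perfect balance).
rng = np.random.default_rng(1)
n = 1000
x = rng.choice(["low", "mid", "high"], size=n)
# Biased assignment: treatment probability depends on x.
p_t = np.where(x == "high", 0.7, 0.4)
t = rng.random(n) < p_t

table = pd.crosstab(x, t)
chi2, p_value, dof, _ = chi2_contingency(table, correction=False)
inertia = chi2 / n  # total inertia of the X-T table

print(f"inertia = {inertia:.4f}, chi2 = {chi2:.1f}, p = {p_value:.3g}")
# A significant p-value flags imbalance that must be removed before
# estimating causal effects.
```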

Relevance: 30.00%

Abstract:

Clusters have increasingly become an essential part of policy discourses at all levels (EU, national, regional) dealing with regional development, competitiveness, innovation, entrepreneurship and SMEs. These impressive efforts in promoting the concept of clusters in the policy-making arena have been accompanied by much less academic and scientific research investigating the actual economic performance of firms in clusters or the design and execution of cluster policies, and going beyond singular case studies to a more methodologically integrated and comparative approach to the study of clusters and their real-world impact. The theoretical background is far from consolidated, and there is a variety of methodologies and approaches for studying and interpreting this phenomenon, with at the same time little comparability among studies of actual cluster performance. The conceptual framework of clustering suggests that clusters affect performance, but theory makes little prediction as to the ultimate distribution of the value being created by clusters. This thesis takes the case of Eastern European countries for two reasons. One is that clusters, as coopetitive environments, are a new phenomenon there, as the previous centrally planned system did not allow for such types of firm organization. The other is that, as new EU member states, they have been subject to the increased popularization of the cluster policy approach by the European Commission, especially in the framework of the National Reform Programmes related to the Lisbon objectives. The originality of the work lies in the fact that, starting from an overview of theoretical contributions on clustering, it offers a comparative empirical study of clusters in transition countries; there have been very few attempts in the literature to examine cluster performance in a comparative cross-country perspective. To this it adds an analysis of cluster policies and their implementation, or lack thereof, as a way of analysing how the cluster concept has been introduced to transition economies. Our findings show that the implementation of cluster policies does vary across countries, with some countries having embraced it more than others. The specific modes of implementation, however, are very similar, based mostly on soft measures such as funding for cluster initiatives, usually directed towards the creation of cluster management structures or cluster facilitators. They are essentially founded on the common assumption that the added value of clusters lies in the creation of linkages among firms, human capital, skills and knowledge at the local level, most often perceived as the regional level. Oftentimes geographical proximity is not a necessary element in the application process, and cluster applications are very similar to network memberships. Cluster mapping is rarely a factor in the selection of cluster initiatives for funding, and the related question of critical mass and expected outcomes is not considered. In fact, monitoring and evaluation are not elements of the cluster policy cycle that have received much attention. Bulgaria and the Czech Republic are the countries that have implemented cluster policies most decisively, Hungary and Poland have made significant efforts, while Slovakia and Romania have used cluster initiatives only sporadically and unsystematically.
When examining whether firms located within regional clusters do in fact perform better and are more efficient than similar firms outside clusters, we find positive results across countries and across sectors. The only country with a negative impact from being located in a cluster is the Czech Republic.

Relevance: 30.00%

Abstract:

PROBLEM: In the last few years, farm tourism, or agritourism as it is also known, has enjoyed increasing success because of its generally acknowledged role as a promoter of the economic and social development of rural areas. As a consequence, a plethora of studies have been dedicated to this tourist sector, focusing on a variety of issues. Nevertheless, despite the difficulties many farmers have in orienting their business towards potential customers, the contribution of the marketing literature has been moderate.

PURPOSE: This dissertation builds upon studies which advocate the need for farm tourism to innovate in line with the increasingly demanding needs of customers. Hence, the purpose of this dissertation is to critically evaluate the level of professionalism reached in the farm tourism market from a marketing perspective.

METHODOLOGY: This dissertation takes a cross-country perspective, studying the marketing of farm tourism in Germany and Italy. The marketing channels of this tourist sector are examined from both the supply and the demand side by means of five exploratory studies. Data collection was conducted in the 2006-2009 timeframe in manifold ways (online surveys, catalogues of industry associations, face-to-face interviews, etc.) according to the research purpose of each study. The data were analyzed using multivariate statistical analysis.

FINDINGS: A comprehensive literature review provides the state of the art of the main differences and similarities of farm tourism in the two countries of study. The main findings in the empirical chapters provide insights on many aspects of agritourism, including how the expectations of farm operators and customers differ, which development scenarios of farm tourism are more likely to meet individuals' needs, how new technologies can impact the demand for farm tourism, etc.

ORIGINALITY/VALUE: The value of this study lies in the investigation of the process by which farmers' participation in the development of this sector intersects with consumer consumption patterns. Focusing on this process should allow farm operators and others, including related businesses, to allocate resources more efficiently.

Relevance: 30.00%

Abstract:

Background. One of the phenomena observed in human aging is the progressive increase of a systemic inflammatory state, a condition referred to as "inflammaging", negatively correlated with longevity. A prominent mediator of inflammation is the transcription factor NF-kB, which acts as a key transcriptional regulator of many genes coding for pro-inflammatory cytokines. Many different signaling pathways, activated by very diverse stimuli, converge on NF-kB, resulting in a regulatory network characterized by high complexity. NF-kB signaling has been proposed to be responsible for inflammaging. The scope of this analysis is to provide a wider, systemic picture of this intricate signaling and interaction network: the NF-kB pathway interactome.

Methods. The study was carried out following a workflow for gathering information from the literature as well as from several pathway and protein-interaction databases, and for integrating and analyzing the existing data and the reconstructed representations using the available computational tools. Substantial manual intervention was necessary to integrate data from multiple sources into mathematically analyzable networks. The reconstruction of the NF-kB interactome pursued with this approach provides a starting point for a general view of the architecture, and for a deeper analysis and understanding, of this complex regulatory system.

Results. A "core" and a "wider" NF-kB pathway interactome, consisting of 140 and 3146 proteins respectively, were reconstructed and analyzed through a mathematical, graph-theoretical approach. Among other interesting features, the topological characterization of the interactomes shows that a relevant number of interacting proteins are in turn products of genes that are themselves controlled and regulated in their expression by NF-kB transcription factors. These feedback loops, not always well known, deserve deeper investigation, since they may have a role in tuning the response and output consequent to NF-kB pathway initiation, and in regulating the intensity of the response, or its homeostasis and balance, in order to make the functioning of such a critical system more robust and reliable. This integrated view sheds light on the functional structure and on some of the crucial nodes of the NF-kB transcription factor interactome.

Conclusion. Framing the structure and dynamics of the NF-kB interactome within a wider, systemic picture would be a significant step toward a better understanding of how NF-kB globally regulates diverse gene programs and phenotypes. This study represents a step towards a more complete and integrated view of the NF-kB signaling system.
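As a toy illustration of the graph-theoretical viewpoint (hypothetical edges and node names, not the reconstructed interactome), feedback loops through an NF-kB transcription factor can be enumerated as directed cycles with networkx:

```python
import networkx as nx

# Toy sketch (made-up edges): model the pathway as a directed graph and
# list the feedback loops passing through an NF-kB transcription factor
# node, here "RELA".
G = nx.DiGraph()
G.add_edges_from([
    ("TNF", "TNFR1"), ("TNFR1", "IKK"), ("IKK", "IKBA"),
    ("IKBA", "RELA"), ("RELA", "NFKBIA"), ("NFKBIA", "IKBA"),  # negative feedback
    ("RELA", "TNF"),                                           # positive feedback
])

# Enumerate simple cycles and keep those through the transcription factor.
feedback_loops = [c for c in nx.simple_cycles(G) if "RELA" in c]
for loop in feedback_loops:
    print(" -> ".join(loop + [loop[0]]))
```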

Relevance: 30.00%

Abstract:

The project was developed in three parts: the analysis of p63 isoforms in breast tumours; the study of intra-tumour heterogeneity in metaplastic breast carcinoma; and the analysis of oncocytic breast carcinoma. p63 is a sequence-specific DNA-binding factor, a homologue of the tumour suppressor and transcription factor p53. The human p63 gene is composed of 15 exons, and transcription can occur from two distinct promoters: the transactivating isoforms (TAp63) are generated by a promoter upstream of exon 1, while an alternative promoter located in intron 3 leads to the expression of N-terminally truncated isoforms (ΔNp63). It has been demonstrated that anti-p63 antibodies decorate the majority of squamous cell carcinomas of different organs; moreover, breast tumours with myoepithelial differentiation show nuclear p63 expression. Two new isoforms have been described with the same sequences as TAp63 and ΔNp63 but lacking exon 4: d4TAp63 and ΔNp73L, respectively. The purpose of the study was to investigate the molecular expression of N-terminal p63 isoforms in benign and malignant breast tissues. In the present study, 40 specimens from normal breast, benign lesions, DIN/DCIS and invasive carcinomas were analyzed by immunohistochemistry and RT-PCR (reverse transcriptase PCR) in order to disclose the patterns of p63 expression. We observed that the full-length isoforms can be detected in both non-neoplastic and neoplastic lesions, while the short isoforms are present only in the neoplastic cells of invasive carcinomas. Metaplastic carcinomas of the breast are a heterogeneous group of neoplasms which exhibit varied patterns of metaplasia and differentiation. The existence of non-modal populations harbouring distinct genetic aberrations may explain the phenotypic diversity observed within a given tumour. Intra-tumour morphological heterogeneity is not uncommon in breast cancer and can often be appreciated in metaplastic breast carcinomas. The aim of this study was to determine the existence of intra-tumour genetic heterogeneity in metaplastic breast cancers, and whether areas with distinct morphological features in a given tumour might be underpinned by distinct patterns of genetic aberrations. 47 cases of metaplastic breast carcinoma were retrieved; of these, 9 had areas of sufficient dimensions to be independently microdissected. Our results indicate that at least some types of metaplastic breast cancer are composed of multiple non-modal populations of clonally related cells harbouring distinct genetic aberrations. Oncocytic tumours represent a distinctive set of lesions with the typical granular cytoplasmic eosinophilia of the neoplastic cells. Only rare examples of breast oncocytic carcinoma have been reported in the literature, and its incidence is probably underestimated. In this study we analysed 33 cases of invasive oncocytic carcinoma of the breast, selected according to morphological and immunohistochemical criteria. These tumours were morphologically classified and studied by immunohistochemistry and aCGH. We conclude that oncocytic breast carcinoma is a morphologic entity with distinctive ultrastructural and histological features; immunohistochemically it is characterized by a luminal profile; it has a frequency of 19.8%; it has no distinctive clinical features; and, at the molecular level, it shows a specific constellation of genetic aberrations.

Relevance: 30.00%

Abstract:

The aim of this PhD thesis is the study of the nuclear properties of radio-loud AGN. Multiple and/or recent mergers in the host galaxy, and/or the presence of a cool core in the galaxy cluster, can play a role in the formation and evolution of the radio source. We focus on Brightest Cluster Galaxies (BCGs), a unique class of objects (Lin & Mohr 2004), and investigate their parsec-scale radio emission with Very Long Baseline Interferometry (VLBI) observations. From the literature or from new data, we collect and analyse VLBA (Very Long Baseline Array) observations at 5 GHz of a complete sample of BCGs and of "normal" radio galaxies (the Bologna Complete Sample, BCS). Results on the nuclear properties of BCGs come from the comparison with the results for the BCS. Our analysis finds a possible dichotomy between BCGs in cool-core clusters and those in non-cool-core clusters. Only one-sided BCGs have kinematic properties similar to FR I radio galaxies. Furthermore, the dominance of two-sided jet structures only in cooling clusters suggests sub-relativistic jet velocities. The different jet properties can be related to a different jet origin or to the interaction with a different ISM. We discuss possible explanations of this in more detail.

Relevance: 30.00%

Abstract:

In territories where food production is scattered across many small or medium-size (or even domestic) farms, a large amount of heterogeneous residues is produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production process carried out in each period. Coupling high-efficiency micro-cogeneration units with easy-to-handle biomass conversion equipment suitable for treating different materials would provide many important advantages to farmers and to the community as well, so that increasing the feedstock flexibility of gasification units is nowadays seen as a further paramount step towards their wide diffusion in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose, and they are discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work was divided into two main parts. The first is focused on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences which prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel spreadsheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. An attempt was made to connect the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach was the use of ratios of kinetic constants to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through this assumption the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to a particular biomass material can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion properties can be obtained, based mainly on its chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up through the analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to regulate the main solid conversion steps involved in the gasification process.
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for the pyrolysis and char gasification processes only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, so temperature is the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its own pyrolysis heat) all have comparable weights on the process development, so that the corresponding time may depend on any of these factors, according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to the estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units was finally accomplished. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species. Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone as calculated for the different biomasses; a total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified in turn. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid dynamic behaviour of the unit or the air/biomass ratio. The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially where multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Differently from other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding to gas flows of 8 Nm³/h, 125 Nm³/h and 350 Nm³/h respectively. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements ("paths"), following technical constraints mainly determined from the same performance analysis of the cleaning units and from the likely synergistic effects of contaminants on the correct working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. For this purpose, a catalytic tar cracking unit was envisaged as the only solution to be adopted, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a consequently significant air consumption for this operation, was calculated in all cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Besides these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines also proved not to be feasible for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the large rise in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of defined operational parameters, including total pressure drop, total energy losses, number of units and consumption of secondary materials.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and nahcolite adsorbers for hydrochloric acid; the very high efficiency of the latter material is also remarkable. Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter computed on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
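To make the equilibrium assumption of the kinetic-free model concrete, the sketch below (illustrative only; the thesis model was developed in Excel, and the inlet mole numbers here are made-up values) solves the water-gas shift equilibrium in the gasification zone for the extent of reaction at a given temperature, using the commonly cited Moe (1962) correlation for Kp(T):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative sketch of the equilibrium assumption in kinetic-free
# gasification models: CO + H2O <-> CO2 + H2 is assumed at equilibrium
# in the gasification zone. Kp(T) is the Moe correlation (an assumption
# here, not taken from the thesis); inlet moles are example values.
def kp_wgs(T_kelvin):
    return np.exp(4577.8 / T_kelvin - 4.33)

def equilibrium_extent(n_co, n_h2o, n_co2, n_h2, T):
    """Extent x of CO + H2O -> CO2 + H2 with Kp = (CO2*H2)/(CO*H2O)."""
    kp = kp_wgs(T)
    def residual(x):
        return (n_co2 + x) * (n_h2 + x) - kp * (n_co - x) * (n_h2o - x)
    x_max = min(n_co, n_h2o) * 0.999999    # forward limit
    x_min = -min(n_co2, n_h2) * 0.999999   # reverse limit
    return brentq(residual, x_min, x_max)

# Example: raw pyrolysis/oxidation gas entering the reduction zone at 1050 K.
x = equilibrium_extent(n_co=1.0, n_h2o=0.8, n_co2=0.3, n_h2=0.5, T=1050.0)
print(f"shift extent = {x:.3f} mol -> CO = {1.0 - x:.3f}, H2 = {0.5 + x:.3f}")
```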

Relevance: 30.00%

Abstract:

Due to the growing attention of consumers towards their food, improving the quality of animal products has become one of the main focuses of research. To this aim, the application of modern molecular genetics approaches has proved extremely useful and effective. This innovative drive involves all livestock productions, including pork. The Italian pig breeding industry is unique because it needs heavy pigs, slaughtered at about 160 kg, for the production of high-quality processed products; for this reason, it requires precise meat quality and carcass characteristics. Two aspects are considered in this thesis: the application of transcriptome analysis to post mortem pig muscles as a possible method to evaluate meat quality parameters related to the pre mortem status of the animals, including health, nutrition and welfare, with potential applications for product traceability (chapters 3 and 4); and the study of candidate genes for obesity-related traits, in order to identify markers associated with fatness in pigs that could be applied to improve carcass quality (chapters 5, 6 and 7). Chapter three addresses the first issue from a methodological point of view. When we considered this issue, it was not obvious that post mortem skeletal muscle could be useful for transcriptomic analysis; we therefore demonstrated that the quality of RNA extracted from the skeletal muscle of pigs sampled at different post mortem intervals (20 minutes, 2 hours, 6 hours and 24 hours) is good for downstream applications. Degradation occurred starting from 48 h post mortem, even if at this time it is still possible to use some RNA products. In the fourth chapter, in order to demonstrate the potential use of RNA obtained up to 24 hours post mortem, we present the results of RNA analysis with the Affymetrix microarray platform, which made it possible to assess the expression level of more than 24,000 mRNAs. We did not identify any significant differences between the different post mortem times, suggesting that this technique could be applied to retrieve information from the transcriptome of skeletal muscle samples not collected immediately after slaughtering. This study represents the first contribution of this kind applied to pork. In the fifth chapter, we investigated the TBC1D1 [TBC1 (tre-2/USP6, BUB2, cdc16) domain family, member 1] gene as a candidate for fat deposition. This gene is involved in mechanisms regulating energy homeostasis in skeletal muscle and is associated with predisposition to obesity in humans. By resequencing a fragment of the TBC1D1 gene, we identified three synonymous mutations localized in exon 2 (g.40A>G, g.151C>T, and g.172T>C) and two polymorphisms localized in intron 2 (g.219G>A and g.252G>A). One of these polymorphisms (g.219G>A) was genotyped by high resolution melting (HRM) analysis and PCR-RFLP. Moreover, this gene sequence was mapped to porcine chromosome 8 by radiation hybrid analysis. The association study was conducted in 756 performance-tested pigs of the Italian Large White and Italian Duroc breeds. Significant results were obtained for lean meat content, back fat thickness, visible intermuscular fat and ham weight. In chapter six, a second candidate gene (tribbles homolog 3, TRIB3) is analyzed in a study of association with carcass and meat quality traits. The TRIB3 gene is involved in the energy metabolism of skeletal muscle and plays a role as a suppressor of adipocyte differentiation.
We identified two polymorphisms in the first coding exon of the porcine TRIB3 gene: one is a synonymous SNP (c.132T>C), the second a missense mutation (c.146C>T, p.P49L). The two polymorphisms appear to be in complete linkage disequilibrium between and within breeds. In silico analysis of the p.P49L substitution suggests that it might have a functional effect. The association study, in about 650 pigs, indicates that this marker is associated with back fat thickness in the Italian Large White and Italian Duroc breeds in two different experimental designs. This polymorphism is also associated with the lactate content of the semimembranosus muscle in Italian Large White pigs. Expression analysis indicated that this gene is transcribed in skeletal muscle and adipose tissue, as well as in other tissues. In the seventh chapter, we report the genotyping results for 677 SNPs in divergent groups of pigs chosen according to extreme estimated breeding values for back fat thickness. The SNPs, in 60 candidate genes for obesity, were identified by resequencing, literature mining and in silico database mining. Genotyping was carried out using the GoldenGate (Illumina) platform. Of the analyzed SNPs, more than 300 were polymorphic in the genotyped population and had a minor allele frequency (MAF) > 0.05. Of these SNPs, 65 were associated (P < 0.10) with back fat thickness. One of the most significant markers was the same TBC1D1 SNP reported in chapter 5, confirming the role of this gene in fat deposition in the pig. These results could be important to better define the pig as a model for human obesity, as well as for marker-assisted selection to improve carcass characteristics.
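As an illustration of the kind of single-marker association analysis described above (a hedged sketch with simulated data, not the thesis pipeline), one can code genotypes as allele dosages, filter on MAF > 0.05, and regress back fat thickness on dosage:

```python
import numpy as np
from scipy import stats

# Hedged sketch of a single-marker association test (simulated data):
# code genotypes as allele dosage 0/1/2, filter on minor allele
# frequency, then regress the trait (back fat thickness) on dosage.
rng = np.random.default_rng(2)
n = 650
dosage = rng.choice([0, 1, 2], size=n, p=[0.49, 0.42, 0.09])
backfat = 25.0 + 0.8 * dosage + rng.normal(0.0, 3.0, size=n)  # mm, simulated

maf = dosage.mean() / 2.0
maf = min(maf, 1.0 - maf)
if maf > 0.05:  # keep only informative SNPs, as in the genotyped panel
    res = stats.linregress(dosage, backfat)
    print(f"MAF = {maf:.3f}, beta = {res.slope:.2f} mm/allele, "
          f"P = {res.pvalue:.2e}")
```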

Relevance: 30.00%

Abstract:

The research is aimed at contributing to the identification of reliable, fully predictive Computational Fluid Dynamics (CFD) methods for the numerical simulation of equipment typically adopted in the chemical and process industries. The apparatuses selected for the investigation, specifically membrane modules, stirred vessels and fluidized beds, are characterized by different and often complex fluid dynamic behaviours, and in some cases the momentum transfer phenomena are coupled with mass transfer or multiphase interactions. First of all, a novel CFD-based modelling approach for the prediction of the gas separation process in membrane modules for hydrogen purification is developed. The reliability of the numerically calculated gas velocity field is assessed by comparing the predictions with experimental velocity data collected by Particle Image Velocimetry, while the ability of the model to properly predict the separation process under a wide range of operating conditions is assessed through a rigorous comparison with experimental permeation data. Then, the effect of numerical issues on the RANS-based predictions of single-phase stirred tanks is analysed. The homogenisation process of a scalar tracer is also investigated, and simulation results are compared with original passive tracer homogenisation curves determined with Planar Laser Induced Fluorescence. The capability of a CFD approach based on the solution of the RANS equations to describe the fluid dynamic characteristics of the dispersion of organics in water is also investigated. Finally, an Eulerian-Eulerian fluid dynamic model is used to simulate mono-disperse suspensions of Geldart Group A particles fluidized by a Newtonian incompressible fluid, as well as binary segregating fluidized beds of particles differing in size and density. The results obtained under a number of different operating conditions are compared with literature experimental data, and the effect of numerical uncertainties on axial segregation is also discussed.

Relevance: 30.00%

Abstract:

Monitoring foetal health is a very important task in clinical practice for appropriately planning pregnancy management and delivery. In the third trimester of pregnancy, ultrasound cardiotocography is the most widely employed diagnostic technique: foetal heart rate and uterine contraction signals are simultaneously recorded and analysed in order to ascertain foetal health. Because the interpretation of ultrasound cardiotocography still lacks complete reliability, new parameters and methods of interpretation, or alternative methodologies, are necessary to further support physicians' decisions. To this aim, foetal phonocardiography and electrocardiography are considered in this thesis as alternative techniques. Furthermore, the variability of the foetal heart rate is thoroughly studied. Frequency components and their modifications can be analysed with a time-frequency approach, for a distinct understanding of the spectral components and their change over time in relation to foetal reactions to internal and external stimuli (such as uterine contractions). Such modifications of the power spectrum can be a sign of autonomic nervous system reactions and therefore represent additional, objective information about foetal reactivity and health. However, some limits of ultrasonic cardiotocography remain, for example in long-term foetal surveillance, which is often advisable mainly in high-risk pregnancies. In these cases, fully non-invasive acoustic recording through the maternal abdomen, i.e. foetal phonocardiography, represents a valuable alternative to ultrasonic cardiotocography. Unfortunately, the recorded foetal heart sound signal is heavily loaded with noise, so the determination of the foetal heart rate raises serious signal processing issues. A new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings is presented in this thesis. Different filtering and enhancement techniques were applied to enhance the first foetal heart sounds; several signal processing strategies were implemented, evaluated and compared, identifying the one characterized on average by the best results. In particular, phonocardiographic signals were recorded simultaneously with ultrasonic cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by the developed algorithm and the one provided by the cardiotocographic device). The algorithm's performance was tested on phonocardiographic signals recorded from pregnant women, showing reliable foetal heart rate signals, very close to the ultrasound cardiotocographic recordings taken as reference. The algorithm was also tested using a foetal phonocardiographic recording simulator developed and presented in this research thesis. The aim was to provide software for simulating recordings corresponding to different foetal conditions and recording situations, and to use it as a test tool for comparing and assessing different foetal heart rate extraction algorithms. Since there are few studies on the time characteristics and frequency content of foetal heart sounds, and the available literature in this area is sparse and not rigorous, a pilot data collection study was also conducted with the purpose of specifically characterizing both foetal and maternal heart sounds. Finally, the use of foetal phonocardiographic and electrocardiographic methodologies, and their combination, is presented in order to detect the foetal heart rate and other functional anomalies.
The developed methodologies, suitable for longer-term assessment, were able to detect heart beat events correctly, such as first and second heart sounds and QRS waves. The detection of such events provides reliable measures of the foetal heart rate and potentially also information about systolic time intervals and foetal circulatory impedance.
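As an illustration of the signal processing chain involved, the following is a minimal sketch of one classic approach to heart rate estimation from a phonocardiographic trace (band-pass filtering, envelope extraction, autocorrelation); the synthetic signal, band limits and rate range are assumptions for the example, not the thesis algorithm:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Hedged sketch: band-pass around the S1 energy band, take the envelope,
# and locate the dominant periodicity by autocorrelation.
fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
f_heart = 2.3                        # 138 bpm, within the typical foetal range
burst = np.clip(np.sin(2 * np.pi * f_heart * t), 0, None) ** 20  # one S1 per beat
x = np.sin(2 * np.pi * 40 * t) * burst \
  + 0.5 * np.random.default_rng(3).normal(size=t.size)           # noisy recording

b, a = butter(4, [25, 60], btype="bandpass", fs=fs)
env = np.abs(hilbert(filtfilt(b, a, x)))     # envelope of the filtered signal

# Autocorrelation of the envelope; search lags for 1.5-3.5 Hz (90-210 bpm).
env = env - env.mean()
ac = np.correlate(env, env, mode="full")[env.size - 1:]
lags = np.arange(int(fs / 3.5), int(fs / 1.5))
best = lags[np.argmax(ac[lags])]
print(f"estimated foetal heart rate: {60.0 * fs / best:.1f} bpm")
```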

Relevance: 30.00%

Abstract:

The hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a very relevant issue, due to the severe consequences, in terms of human and economic losses, that may be caused by floods or by water in general. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damage can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means we are still left with residual uncertainty about what will actually happen. This type of uncertainty is what is discussed and analyzed in this thesis. In operational problems, the ultimate aim of forecasting systems is not to reproduce the river's behaviour: that is only a means of reducing the uncertainty associated with what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since confusion is often made on this issue in the literature. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of the intervention strategy be based on evaluating the model prediction for its ability to represent reality, or on evaluating what will actually happen on the basis of the information given by the model forecast? Once this idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must be able to correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to that required to implement the intervention strategy, and also to assess the probability distribution of the flooding time.
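As a minimal illustration of the kind of probabilistic output such a decision-support tool must provide (a sketch on a synthetic forecast ensemble, not the thesis methodology), the flooding probability within a horizon and the distribution of the first exceedance time can be estimated empirically from ensemble members:

```python
import numpy as np

# Minimal sketch (synthetic ensemble): estimate P(level exceeds a flood
# threshold within the lead-time horizon) and the first exceedance time,
# from an ensemble of combined model forecasts.
rng = np.random.default_rng(4)
n_members, horizon_h = 500, 48           # ensemble size, lead time in hours
t = np.arange(horizon_h)

# Each member: a rising hydrograph plus member-specific random error.
levels = (2.0 + 0.05 * t
          + rng.normal(0.0, 0.6, size=(n_members, horizon_h)).cumsum(axis=1) * 0.1)
threshold = 4.0                          # flooding threshold, metres

exceed = levels >= threshold
p_flood = np.mean(exceed.any(axis=1))    # P(flooding within the horizon)

first_t = np.array([t[row].min() if row.any() else np.inf for row in exceed])
print(f"P(flood within {horizon_h} h) = {p_flood:.2f}")
if p_flood > 0:
    print("median first exceedance time (h):",
          np.median(first_t[np.isfinite(first_t)]))
```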

Relevance: 30.00%

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors; (b) 2D image and 3D model resolutions; (c) incorrect contour extraction; (d) bone model symmetries; (e) optimization algorithm limitations; (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to enlarge the convergence basin of the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies; the mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics.
Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
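To give a concrete flavour of the core 2D-3D registration step (a toy sketch with a synthetic point model and a pinhole projection; the actual analysis matches projected bone contours to the fluoroscopic images), a 6-DOF pose can be recovered by local optimization of a projection residual, which also illustrates why a starting pose inside the convergence basin is needed:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

# Toy sketch (not the thesis code): find the 6-DOF pose of a bone model
# that best matches its projected silhouette points.
rng = np.random.default_rng(5)
model = rng.normal(scale=30.0, size=(200, 3))       # surface points, mm

def project(points, pose, focal=1000.0):
    """Rigidly transform points by pose = [rx, ry, rz (rad), tx, ty, tz (mm)]
    and apply a pinhole projection along z."""
    R = Rotation.from_euler("xyz", pose[:3]).as_matrix()
    p = points @ R.T + pose[3:]
    return focal * p[:, :2] / p[:, 2:3]

true_pose = np.array([0.05, -0.03, 0.10, 5.0, -3.0, 800.0])
target = project(model, true_pose)                  # "extracted contour" points

def cost(pose):
    return np.mean(np.sum((project(model, pose) - target) ** 2, axis=1))

# Local optimization needs a starting pose inside the convergence basin.
start = true_pose + np.array([0.02, 0.02, -0.02, 2.0, 2.0, 20.0])
res = minimize(cost, start, method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 20000})
print("pose error:", np.abs(res.x - true_pose))
```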