9 results for Limits of Sets

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Publisher:

Abstract:

The Ph.D. project focuses on modelling the soil-water dynamics inside an instrumented embankment section along the Secchia River (Cavezzo, MO) in the period from 2017 to 2018, and on quantifying the performance of the direct and inverse simulations. The commercial code Hydrus2D by PC-Progress has been chosen to run the direct simulations. Different soil-hydraulic models have been adopted and compared. The parameters of the different hydraulic models are calibrated using a local optimization method based on the Levenberg-Marquardt algorithm implemented in the Hydrus package. The calibration programme is carried out using different datasets of observation points, different weighting distributions, different combinations of optimized parameters and different initial parameter sets. The final goal is an in-depth study of the potential and limits of inverse analysis when applied to a complex geotechnical problem such as the case study. The second part of the research focuses on the effects of plant roots and of the soil-vegetation-atmosphere interaction on the spatial and temporal distribution of pore water pressure in the soil. The investigated soil belongs to the West Charlestown Bypass embankment, Newcastle, Australia, which has shown shallow instabilities in past years; long-stem planting is intended to stabilize the slope. The chosen plant species is Melaleuca styphelioides, native to eastern Australia. The research activity included the design and realization of a dedicated large-scale apparatus for laboratory experiments. Local suction measurements at several depths and radial distances from the root bulb are recorded within the vegetated soil mass under controlled boundary conditions. The experiments are then reproduced numerically with Hydrus2D, and the laboratory data are used to calibrate the root water uptake (RWU) parameters and the parameters of the hydraulic model.
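Hydrus performs this Levenberg-Marquardt calibration internally; as an illustration of the underlying fitting step only, here is a minimal Python sketch assuming a van Genuchten retention curve as the forward model and hypothetical observation points (the actual Hydrus2D forward model, boundary conditions and weighting schemes are not reproduced):

```python
import numpy as np
from scipy.optimize import least_squares

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content from suction head h (h > 0) with the van Genuchten retention curve."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Hypothetical observation points: suction heads (cm) and measured water contents.
h_obs = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
theta_obs = np.array([0.40, 0.37, 0.30, 0.22, 0.15])
weights = np.ones_like(theta_obs)          # per-observation weights, as in a weighted objective

def residuals(p):
    theta_r, theta_s, alpha, n = p
    return weights * (van_genuchten_theta(h_obs, theta_r, theta_s, alpha, n) - theta_obs)

# Levenberg-Marquardt from an initial parameter set (method='lm' does not support bounds).
p0 = np.array([0.05, 0.41, 0.02, 1.5])
fit = least_squares(residuals, p0, method="lm")
print("theta_r, theta_s, alpha, n =", fit.x)
```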

Relevance: 90.00%

Publisher:

Abstract:

Thanks to the Chandra and XMM-Newton surveys, the hard X-ray sky is now probed down to a flux limit where the bulk of the X-ray background is almost completely resolved into discrete sources, at least in the 2-8 keV band. Extensive programs of multiwavelength follow-up observations showed that the large majority of hard X-ray selected sources are identified with Active Galactic Nuclei (AGN) spanning a broad range of redshifts, luminosities and optical properties. A sizable fraction of relatively luminous X-ray sources hosting an active, presumably obscured, nucleus would not have been easily recognized as such on the basis of optical observations, because they are characterized by "peculiar" optical properties. In my PhD thesis, I focus on the nature of two classes of hard X-ray selected "elusive" sources: those characterized by high X-ray-to-optical flux ratios and red optical-to-near-infrared colors, a fraction of which are associated with Type 2 quasars, and the X-ray bright optically normal galaxies, also known as XBONGs. In order to characterize the properties of these classes of elusive AGN, the datasets of several deep and large-area surveys have been fully exploited. The first class of "elusive" sources is characterized by X-ray-to-optical flux ratios (X/O) significantly higher than what is generally observed for unobscured quasars and Seyfert galaxies. The properties of well-defined samples of high X/O sources detected at bright X-ray fluxes suggest that X/O selection is highly efficient in sampling high-redshift obscured quasars. At the limits of deep Chandra surveys (~10^-16 erg cm^-2 s^-1), high X/O sources are generally characterized by extremely faint optical magnitudes, hence their spectroscopic identification is hardly feasible even with the largest telescopes. In this framework, a detailed investigation of their X-ray properties may provide useful information on the nature of this important component of the X-ray source population. The X-ray data of the deepest X-ray observations ever performed, the Chandra Deep Fields, allow us to characterize the average X-ray properties of the high X/O population. The results of the spectral analysis clearly indicate that the high X/O sources represent the most obscured component of the X-ray background: their spectra are harder (Γ ∼ 1) than those of any other class of sources in the deep fields, and also harder than the XRB spectrum itself (Γ ≈ 1.4). In order to better understand AGN physics and evolution, a much better knowledge of the redshift, luminosity and spectral energy distributions (SEDs) of elusive AGN is of paramount importance. The recent COSMOS survey provides the necessary multiwavelength database to characterize the SEDs of a statistically robust sample of obscured sources. The combination of high X/O and red colors offers a powerful tool to select obscured luminous objects at high redshift. A large sample of X-ray emitting extremely red objects (R − K > 5) has been collected and their optical-infrared properties have been studied. In particular, using an appropriate SED-fitting procedure, the nuclear and host-galaxy components have been deconvolved over a large range of wavelengths, and optical nuclear extinctions, black hole masses and Eddington ratios have been estimated. It is important to remark that the combination of hard X-ray selection and extreme red colors is highly efficient in picking up highly obscured, luminous sources at high redshift.
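For reference, the X-ray-to-optical flux ratio used for this selection is simply the logarithmic ratio of the X-ray flux to the optical flux derived from the magnitude. The Python sketch below assumes an illustrative R-band zero-point flux (roughly corresponding to the commonly used convention X/O = log f_X + R/2.5 + 5.5); the exact calibration adopted in the thesis is not reproduced here:

```python
import numpy as np

def x_over_o(fx_cgs, r_mag, f0_r=3.2e-6):
    """
    X-ray-to-optical flux ratio, log10(f_X / f_R).

    fx_cgs : hard X-ray flux in erg cm^-2 s^-1
    r_mag  : R-band magnitude
    f0_r   : assumed band-integrated R-band flux (erg cm^-2 s^-1) for mag = 0;
             this value is illustrative, not necessarily the one used in the thesis.
    """
    f_r = f0_r * 10.0 ** (-0.4 * r_mag)       # optical flux from the magnitude
    return np.log10(fx_cgs / f_r)

# Example: a faint X-ray source with a very faint optical counterpart.
print(x_over_o(1e-15, r_mag=26.5))            # values around or above 1 flag candidate obscured AGN
```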
Although XBONGs do not represent a new source population, interest in the nature of these sources has been renewed by the discovery of several examples in recent Chandra and XMM-Newton surveys. Even though several possibilities have been proposed in the recent literature to explain why a relatively luminous (L_X = 10^42-10^43 erg s^-1) hard X-ray source does not leave any significant signature of its presence in terms of optical emission lines, the very nature of XBONGs is still a subject of debate. Good-quality near-infrared photometric data (ISAAC/VLT) of 4 low-redshift XBONGs from the HELLAS2XMM survey have been used to search for the presence of the putative nucleus, applying the surface-brightness decomposition technique. In two out of the four sources, the presence of a weak nuclear component hosted by a bright galaxy has been revealed. The results indicate that moderate amounts of gas and dust, covering a large solid angle (possibly 4π) around the nuclear source, may explain the lack of optical emission lines. A weak nucleus not able to produce sufficient UV photons may provide an alternative or additional explanation. On the basis of an admittedly small sample, we conclude that XBONGs constitute a mixed bag rather than a new source population. When the presence of a nucleus is revealed, it turns out to be mildly absorbed and hosted by a bright galaxy.

Relevance: 90.00%

Publisher:

Abstract:

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modelling tool in every field where multivariate dependence is of great interest, but their use in clustering has not yet been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. The attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, because their performance is compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed in order to evaluate the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying different conditions (e.g., the kind of margins (distinct, overlapping and nested) and the value of the dependence parameter), and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the implemented R functions with their output are given. The CoClust algorithm is tested on simulated data (by varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and is compared with model-based clustering by using different measures of performance, such as the percentage of well-identified numbers of clusters and the non-rejection percentage of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all the observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and of the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Many peculiar characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumour samples) of the whole data matrix.
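The thesis implements the CoClust as a set of R functions; purely as an illustration of the copula log-likelihood criterion it maximizes, here is a minimal Python sketch for a bivariate Gaussian copula fitted to pseudo-observations (the iterative cluster-allocation step of the CoClust is not reproduced):

```python
import numpy as np
from scipy import stats

def gaussian_copula_loglik(u, rho):
    """
    Log-likelihood of a bivariate Gaussian copula with correlation rho,
    evaluated at pseudo-observations u (n x 2 array with values in (0, 1)).
    """
    z = stats.norm.ppf(u)                      # normal scores
    z1, z2 = z[:, 0], z[:, 1]
    det = 1.0 - rho ** 2
    # log copula density: bivariate normal log-density minus the two marginal log-densities
    ll = -0.5 * np.log(det) - (rho ** 2 * (z1 ** 2 + z2 ** 2) - 2.0 * rho * z1 * z2) / (2.0 * det)
    return ll.sum()

# Pseudo-observations via ranks (margins handled separately, as in Inference for Margins).
rng = np.random.default_rng(0)
x = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=200)
u = stats.rankdata(x, axis=0) / (x.shape[0] + 1)

# Profile the log-likelihood over rho; a copula-based clustering criterion compares such
# maximized log-likelihoods across candidate allocations of observations to clusters.
rhos = np.linspace(-0.95, 0.95, 191)
best = max(rhos, key=lambda r: gaussian_copula_loglik(u, r))
print("MLE of rho:", round(best, 2))
```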

Relevance: 90.00%

Publisher:

Abstract:

Monitoring foetal health is a very important task in clinical practice, necessary to appropriately plan pregnancy management and delivery. In the third trimester of pregnancy, ultrasound cardiotocography is the most widely employed diagnostic technique: the foetal heart rate and uterine contraction signals are recorded simultaneously and analysed in order to ascertain foetal health. Because the interpretation of ultrasound cardiotocography still lacks complete reliability, new parameters and methods of interpretation, or alternative methodologies, are necessary to further support physicians' decisions. To this aim, in this thesis, foetal phonocardiography and electrocardiography are considered as alternative techniques. Furthermore, the variability of the foetal heart rate is thoroughly studied. Its frequency components and their modifications can be analysed with a time-frequency approach, giving a clear picture of the spectral components and of their changes over time in relation to foetal reactions to internal and external stimuli (such as uterine contractions). Such modifications of the power spectrum can be a sign of autonomic nervous system reactions and therefore represent additional, objective information about foetal reactivity and health. However, some limits of ultrasound cardiotocography remain, for example in long-term foetal surveillance, which is often recommended mainly in risky pregnancies. In these cases the fully non-invasive acoustic recording through the maternal abdomen, foetal phonocardiography, represents a valuable alternative to ultrasound cardiotocography. Unfortunately, the recorded foetal heart sound signal is heavily corrupted by noise, so the determination of the foetal heart rate raises serious signal processing issues. A new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings is presented in this thesis. Different filtering and enhancement techniques were applied to enhance the first foetal heart sounds, and different signal processing strategies were implemented, evaluated and compared, identifying the strategy characterized on average by the best results. In particular, phonocardiographic signals were recorded simultaneously with ultrasound cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by the developed algorithm and the one provided by the cardiotocographic device). The algorithm's performance was tested on phonocardiographic signals recorded from pregnant women, yielding reliable foetal heart rate signals, very close to the ultrasound cardiotocographic recordings taken as reference. The algorithm was also tested using a foetal phonocardiographic recording simulator developed and presented in this thesis. The aim was to provide software for simulating recordings corresponding to different foetal conditions and recording situations, and to use it as a test tool for comparing and assessing different foetal heart rate extraction algorithms. Since there are few studies on the time characteristics and frequency content of foetal heart sounds, and the available literature in this area is scarce and not rigorous, a data collection pilot study was also conducted with the purpose of specifically characterising both foetal and maternal heart sounds. Finally, in this thesis, the use of foetal phonocardiographic and electrocardiographic methodologies, and their combination, is presented in order to detect the foetal heart rate and other functioning anomalies.
The developed methodologies, suitable for longer-term assessment, were able to correctly detect heart beat events, such as the first and second heart sounds and the QRS waves. The detection of such events provides reliable measures of the foetal heart rate and, potentially, information about the systolic time intervals and the foetal circulatory impedance.
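As an illustration of the kind of processing chain described above (filtering and enhancement of the first heart sounds followed by beat detection), here is a minimal Python sketch of foetal heart rate estimation from a phonocardiographic signal; the filter band, thresholds and rate bounds are illustrative choices, not the settings selected in the thesis:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def fhr_from_pcg(pcg, fs, band=(20.0, 60.0), min_bpm=90, max_bpm=220):
    """
    Estimate a foetal heart rate series from a phonocardiographic signal.

    Generic sketch (band-pass + Hilbert envelope + peak picking); the band limits
    and rate bounds are illustrative, not the thesis settings.
    """
    # 1) Band-pass filter to enhance the first heart sounds.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)

    # 2) Smoothed amplitude envelope via the Hilbert transform.
    envelope = np.abs(hilbert(filtered))
    win = max(int(0.02 * fs), 1)
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")

    # 3) Peak picking with a refractory period given by the maximum plausible rate.
    min_distance = int(fs * 60.0 / max_bpm)
    peaks, _ = find_peaks(envelope, distance=min_distance, height=0.3 * np.max(envelope))

    # 4) Beat-to-beat heart rate from inter-beat intervals, discarding implausible values.
    fhr = 60.0 / (np.diff(peaks) / fs)
    return fhr[(fhr >= min_bpm) & (fhr <= max_bpm)]
```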

Relevance: 90.00%

Publisher:

Abstract:

The objectives of this thesis are to develop new methodologies for studying the space and time variability of the Italian upper ocean ecosystem through the combined use of multi-sensor satellite data and in situ observations, and to identify the capabilities and limits of remote sensing observations for monitoring the marine state at short and long time scales. Three oceanographic basins were selected and subjected to different types of analysis. The first region is the Tyrrhenian Sea, where a comparative analysis of altimetry and Lagrangian measurements was carried out to study the surface circulation. The results made it possible to deepen the knowledge of the Tyrrhenian Sea surface dynamics and its variability, and to define the limitations of satellite altimetry in detecting small-scale circulation features. The Channel of Sicily study aimed to identify the spatial-temporal variability of phytoplankton biomass and to understand the impact of the upper ocean circulation on the marine ecosystem. A combined analysis of long-term satellite time series of chlorophyll, Sea Surface Temperature and Sea Level data was applied. The results identified the key role of the Atlantic water inflow in modulating the seasonal variability of the phytoplankton biomass in the region. Finally, the Italian coastal marine system was studied with the objective of exploring the potential of Ocean Color data for detecting chlorophyll trends in coastal areas. The most appropriate methodology for detecting long-term environmental changes was defined through an intercomparison of chlorophyll trends derived from in situ and satellite data, and the Italian coastal areas subject to eutrophication problems were then identified. This work has demonstrated that satellite data constitute a unique opportunity to define the features and forcings influencing upper ocean ecosystem dynamics, and can also be used to monitor environmental variables capable of influencing phytoplankton productivity.
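As an illustration of coastal chlorophyll trend detection, here is a minimal Python sketch based on deseasonalising a monthly series and fitting a linear trend; this is a common simple approach and not necessarily the trend methodology selected in the thesis:

```python
import numpy as np

def chlorophyll_trend(monthly_chl):
    """
    Estimate a linear trend (per year) from a monthly chlorophyll series by
    removing the mean seasonal cycle month by month and fitting a straight line
    to the anomalies. Illustrative sketch, not the thesis methodology.
    """
    chl = np.asarray(monthly_chl, dtype=float)
    months = np.arange(chl.size) % 12
    climatology = np.array([np.nanmean(chl[months == m]) for m in range(12)])
    anomalies = chl - climatology[months]              # deseasonalised series

    t_years = np.arange(chl.size) / 12.0
    valid = ~np.isnan(anomalies)
    slope, intercept = np.polyfit(t_years[valid], anomalies[valid], 1)
    return slope                                       # in the units of the input, per year

# Example with a synthetic 10-year series: seasonal cycle plus a small decline.
t = np.arange(120)
series = 0.5 + 0.2 * np.sin(2 * np.pi * t / 12) - 0.01 * t / 12
print(round(chlorophyll_trend(series), 3))             # close to -0.01
```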

Relevance: 90.00%

Publisher:

Abstract:

Perfluoroalkylated substances are a group of chemicals that have been widely employed during the last 60 years in several applications, spreading and accumulating in the environment due to their extreme resistance to degradation. As a consequence, they have also been found in various types of food as well as in drinking water, proving that they can easily reach humans through the diet. The available information concerning their adverse effects on health has recently increased the interest in these contaminants and highlighted the importance of investigating all the potential sources of human exposure, among which diet was proved to be the most relevant. This need has been underlined by the European Union through Recommendation 2010/161/EU: in this document, Member States were called on to monitor the presence of these substances in food, producing accurate estimations of human exposure. The purpose of the research presented in this thesis, which is the result of a partnership between an Italian and a French laboratory, was to develop reliable tools for the analysis of these pollutants in food, to be used for generating data on potentially contaminated matrices. An efficient method based on liquid chromatography-mass spectrometry for the detection of 16 different perfluorinated compounds in milk has been validated in accordance with current European regulation guidelines (2002/657/EC) and is currently under evaluation for ISO 17025 accreditation. The proposed technique was applied to cow, powdered and human breast milk samples from Italy and France to produce a preliminary monitoring of the presence of these contaminants. In accordance with the above-mentioned European Recommendation, this project also led to the development of a promising technique for the quantification of some precursors of these substances in fish. This method showed extremely satisfactory performance in terms of linearity and limits of detection, and will be useful for future surveys.
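As an illustration of how linearity and limits of detection can be quantified from a calibration curve, here is a minimal Python sketch using the common 3.3·σ/slope and 10·σ/slope conventions with hypothetical calibration standards; this is not the validation protocol followed in the thesis:

```python
import numpy as np
from scipy import stats

# Hypothetical calibration standards: spiked concentration (ng/mL) vs. instrument response.
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
response = np.array([210.0, 1020.0, 2050.0, 4100.0, 10100.0, 20300.0])

fit = stats.linregress(conc, response)
print(f"linearity: R^2 = {fit.rvalue**2:.4f}")

# Standard deviation of the residuals around the regression line.
residuals = response - (fit.intercept + fit.slope * conc)
s_res = np.sqrt(np.sum(residuals**2) / (conc.size - 2))

# Common convention: LOD = 3.3 * sigma / slope, LOQ = 10 * sigma / slope.
print(f"LOD = {3.3 * s_res / fit.slope:.3f} ng/mL")
print(f"LOQ = {10.0 * s_res / fit.slope:.3f} ng/mL")
```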

Relevance: 90.00%

Publisher:

Abstract:

In the last 20-30 years, the transfer of new technologies from research centres to the food industry has been very fast. Infrared thermography is a tool used in many fields, including agriculture and food science and technology, because it is non-destructive, fast, accurate, repeatable and economical. Almost all industrial food processors rely on thermal processes to obtain an optimal product meeting quality and safety standards. The control of the temperature of food products during production, transportation, storage and sale is an essential process in the food industry network. This tool can minimize human error during the control of heating operations and reduce personnel costs. In this thesis the application of infrared thermography (IRT) was studied for different products that require a thermal process during food processing. The background of thermography is presented, together with some of its applications in the food industry and the benefits and limits of its applicability. The measurement of the temperature of the egg shell during heat treatment, in natural convection and with hot-air treatment, was compared with the temperatures calculated by a simplified finite element model developed previously. The complete process showed good agreement between calculated and observed temperatures, so this technique can be useful for controlling heat treatments for the decontamination of eggs using infrared thermography. Another important application of IRT was the determination of the evolution of the emissivity of raw potato during freezing and the non-destructive control of this process. We can conclude that IRT can represent a real option for the control of thermal processes in the food industry, but more research on various products is necessary.
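As an illustration of why the emissivity of the product matters in IRT measurements, here is a minimal Python sketch of a simplified radiometric correction relating the apparent camera temperature to the true surface temperature, given an assumed emissivity and reflected ambient temperature; the values and the simplifications (no atmospheric term) are illustrative and not taken from the thesis:

```python
def true_surface_temperature(t_apparent_k, emissivity, t_reflected_k):
    """
    Simplified radiometric correction for an infrared thermography reading.

    Assumes the camera reports an apparent blackbody temperature and that the
    measured radiance is the emitted surface radiance plus reflected ambient
    radiance (atmospheric attenuation neglected). Illustrative values only.
    """
    # Radiances scale as T^4 (Stefan-Boltzmann), so work with T^4 terms.
    measured = t_apparent_k ** 4
    reflected = (1.0 - emissivity) * t_reflected_k ** 4
    return ((measured - reflected) / emissivity) ** 0.25

# Example: apparent reading 278 K, surface emissivity assumed 0.96, ambient 293 K.
print(round(true_surface_temperature(278.0, 0.96, 293.0), 1))
```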

Relevance: 90.00%

Publisher:

Abstract:

The purpose of this research is to provide empirical evidence on the determinants of the economic use of patented inventions, in order to contribute to the literature on technology and innovation management. The work consists of three main parts, each of which constitutes a self-contained research paper. The first paper uses a meta-analytic approach to review and synthesize the existing body of empirical research on the determinants of technology licensing. The second paper investigates the factors affecting the choice between the following alternative economic uses of patented inventions: pure internal use, pure licensing, and mixed use. Finally, the third paper explores the least studied option for the economic use of patented inventions, namely the sale of patent rights. The data used to empirically test the hypotheses come from a large-scale survey of European Patent inventors resident in 21 European countries, Japan, and the US. The findings provided in this dissertation contribute to a better understanding of the economic use of patented inventions by expanding the limits of previous research along several different dimensions.

Relevance: 90.00%

Publisher:

Abstract:

21 cm cosmology opens an observational window on previously unexplored cosmological epochs, such as the Epoch of Reionization (EoR), the Cosmic Dawn and the Dark Ages, using powerful radio interferometers such as the planned Square Kilometre Array (SKA). Among the many applications that can potentially improve our understanding of standard cosmology, we study the promising opportunity offered by measuring the weak gravitational lensing of the 21 cm radiation. We performed this study for two different cosmological epochs, at a typical EoR redshift and then at a post-EoR redshift. We show how the lensing signal can be reconstructed using a three-dimensional optimal quadratic lensing estimator in Fourier space, using a single frequency band or combining measurements from multiple frequency bands. To this purpose, we implemented a simulation pipeline capable of dealing with issues that cannot be treated analytically. Considering the current SKA plans, we studied the performance of the quadratic estimator at typical EoR redshifts, for different survey strategies and comparing two thermal noise models for the SKA-Low array. The simulations take into account the beam of the telescope and the discreteness of the visibility measurements. We found that an SKA-Low interferometer should obtain high-fidelity images of the underlying mass distribution in its phase 1 only if several bands are stacked together, covering a redshift range from z = 7 to z = 11.5. SKA-Low phase 2, modelled so as to improve the sensitivity of the instrument by almost an order of magnitude, should be capable of providing images of good quality even when the signal is detected within a single frequency band. Considering also the serious effect that foregrounds could have on these detections, we discuss the limits of these results and the possibility, offered by these models, of measuring an accurate lensing power spectrum.
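As an illustration of the multi-band combination mentioned above, here is a minimal Python sketch that combines independent per-band estimates of a single lensing convergence mode by inverse-noise-variance weighting; the full three-dimensional quadratic estimator acting on 21 cm visibilities is not reproduced here:

```python
import numpy as np

def combine_band_estimates(kappa_hat, noise_var):
    """
    Combine per-band estimates of a lensing convergence mode kappa(L) into a
    single estimate by inverse-noise-variance weighting, assuming independent
    reconstruction noise in each band. Minimal sketch only.
    """
    kappa_hat = np.asarray(kappa_hat, dtype=complex)   # per-band estimates of one Fourier mode
    noise_var = np.asarray(noise_var, dtype=float)     # per-band reconstruction noise variance
    weights = 1.0 / noise_var
    combined = np.sum(weights * kappa_hat) / np.sum(weights)
    combined_noise = 1.0 / np.sum(weights)             # noise variance of the combined estimate
    return combined, combined_noise

# Example: three hypothetical frequency bands with different noise levels for the same mode.
est, var = combine_band_estimates([0.011 + 0.002j, 0.009 - 0.001j, 0.013 + 0.000j],
                                  [1e-4, 2e-4, 5e-4])
print(est, var)
```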