892 results for Multi-extremal Objective Function
Abstract:
The objective of this thesis work is the refined estimation of earthquake source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate the spectral parameters, namely the corner frequencies and the low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes the correction for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11–10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green function approach is a very useful tool to study seismic source properties, because Empirical Green Functions (EGFs) make it possible to represent the contribution of propagation and site effects to the signal without resorting to approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw=6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes between 3 and 5.6; and 3) a small event, the Mw=2.9 Laviano mainshock (Southern Italy).
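The spectral inversion summarised above can be illustrated with a minimal, hypothetical sketch: assuming a Brune-type omega-square displacement spectrum with an exponential t* attenuation term (the thesis's parametric model and multi-step inversion are more elaborate), a simple grid search recovers the corner frequency and the low-frequency plateau from a synthetic spectrum. All numerical values are invented for illustration.

```python
import math

def displacement_spectrum(f, omega0, fc, t_star):
    """Brune omega-square source spectrum with a t* attenuation term."""
    return omega0 * math.exp(-math.pi * f * t_star) / (1.0 + (f / fc) ** 2)

# Synthetic "observed" spectrum with known (illustrative) parameters.
freqs = [0.5 * k for k in range(1, 81)]            # 0.5 .. 40 Hz
true_omega0, true_fc, t_star = 1.0e-6, 8.0, 0.02
observed = [displacement_spectrum(f, true_omega0, true_fc, t_star) for f in freqs]
log_obs = [math.log(o) for o in observed]

# Grid search over (omega0, fc); attenuation assumed known/corrected here.
best = None
for fc in [1.0 + 0.25 * i for i in range(80)]:             # 1 .. ~21 Hz
    for om in [1.0e-6 * (0.5 + 0.05 * j) for j in range(40)]:
        misfit = sum((math.log(displacement_spectrum(f, om, fc, t_star)) - lo) ** 2
                     for f, lo in zip(freqs, log_obs))
        if best is None or misfit < best[0]:
            best = (misfit, om, fc)

_, est_omega0, est_fc = best
print(est_fc)  # recovers the corner frequency of 8 Hz
```

In the real procedure, attenuation and site terms are inverted jointly with the source terms rather than assumed known.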
Abstract:
Background: Lymphangioleiomyomatosis (LAM), a rare progressive disease, is characterized by the proliferation of abnormal smooth muscle cells (LAM cells) in the lung, which leads to cystic parenchymal destruction and progressive respiratory failure. Estrogen receptors are present in LAM cells, and LAM affects almost exclusively women of childbearing age. These findings, along with reports of disease progression during pregnancy or treatment with exogenous estrogens, have led to the assumption that hormonal factors play an important role in the pathogenesis of LAM. Accordingly, various therapies aim to counter estrogen receptor (ER) signalling by lowering circulating estrogen levels, blocking ER activity, or lowering ER expression in LAM. Prior experience has yielded conflicting results. Objective: The goal of this study was to evaluate, retrospectively, the effect of estrogen suppression in 21 patients with LAM. Design: We evaluated hormonal assays, pulmonary function tests and gas exchange at baseline and at 12, 24 and 36 months after initiating hormonal manipulation. Results: The mean yearly rates of decline in FEV1 and DLCO were lower than those observed in prior studies, and only the decline in DLCO was statistically significant. We also found an improvement in the mean values of FVC and PaO2. Conclusions: Estrogen suppression appears to slow the decline in lung function in LAM.
Abstract:
The PhD activity described in this document was carried out at the Microsatellite and Microsystems Laboratory of the II Faculty of Engineering, University of Bologna. The main objective is the design and development of a GNSS receiver for the orbit determination of microsatellites in low Earth orbit. The development starts from the electronic design and goes up to the implementation of the navigation algorithms, covering all the aspects involved in this type of application. The use of GPS receivers for orbit determination is a consolidated application used in many space missions, but the deployment of new GNSS systems within a few years, such as the European Galileo, the Chinese COMPASS and the Russian modernized GLONASS, poses new challenges and offers new opportunities to improve orbit determination performance. The evaluation of the improvements coming from the new systems, together with the implementation of a receiver compatible with at least one of them, are the main activities of the PhD. The activities can be divided into three sections: receiver requirements definition and prototype implementation, design and analysis of the GNSS signal tracking algorithms, and design and analysis of the navigation algorithms. The receiver prototype is based on a Virtex FPGA by Xilinx and includes a PowerPC processor. The architecture follows the software-defined radio paradigm, so most of the signal processing is performed in software while only what is strictly necessary is done in hardware. The tracking algorithms are implemented as a combination of a Phase Locked Loop and a Frequency Locked Loop for the carrier, and a Delay Locked Loop with variable bandwidth for the code. The navigation algorithm is based on the extended Kalman filter and includes an accurate LEO orbit model.
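The navigation filter mentioned above is an extended Kalman filter with a full LEO orbit model; as a hedged stand-in, the sketch below shows only the predict/update structure of a Kalman filter on a toy linear constant-velocity problem with position-only measurements. All values are illustrative and unrelated to the actual receiver.

```python
import random

random.seed(1)

dt, q, r = 1.0, 1e-4, 4.0          # time step, process noise, measurement variance
x = [0.0, 0.0]                     # state estimate: position, velocity
P = [[100.0, 0.0], [0.0, 100.0]]   # state covariance

true_pos, true_vel = 0.0, 1.5
for _ in range(200):
    # Truth propagation and a noisy pseudorange-like position measurement.
    true_pos += true_vel * dt
    z = true_pos + random.gauss(0.0, r ** 0.5)

    # Predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]].
    x = [x[0] + dt * x[1], x[1]]
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]

    # Update with H = [1, 0]: gain K = P H^T / (H P H^T + R).
    s = P[0][0] + r
    K = [P[0][0] / s, P[1][0] / s]
    innov = z - x[0]
    x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]

print(x)  # velocity estimate approaches the true value 1.5
```

The extended filter replaces F and H with Jacobians of the non-linear orbit dynamics and measurement model evaluated at the current estimate.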
Abstract:
Beamforming entails the joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems, namely a distributed network of independent sensors and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique that allows nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment, which prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and possibly calls for much more expensive multiple-gateway infrastructures.
This thesis focuses on an on-board non-adaptive signal processing scheme denoted as Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and space segment.
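To make the phase-alignment problem concrete, here is a sketch of the classic one-bit-feedback baseline (not the thesis's new algorithms): each node randomly perturbs its carrier phase, and perturbations are kept only if the receiver reports improved signal strength. Channel phases, network size and step sizes are all invented for illustration.

```python
import cmath, math, random

random.seed(7)

N = 20                                                          # number of nodes
channel = [random.uniform(0, 2 * math.pi) for _ in range(N)]    # unknown channel phases
tx_phase = [0.0] * N                                            # node-controlled phases

def received_amplitude(phases):
    """Magnitude of the coherent sum seen at the receiver."""
    return abs(sum(cmath.exp(1j * (p + c)) for p, c in zip(phases, channel)))

initial = received_amplitude(tx_phase)
best = initial
for _ in range(5000):
    # Every node applies a small random phase perturbation.
    trial = [p + random.gauss(0.0, 0.1) for p in tx_phase]
    amp = received_amplitude(trial)
    if amp > best:              # receiver feeds back a single "better/worse" bit
        tx_phase, best = trial, amp

print(initial, best)  # best approaches the fully coherent gain N = 20
```

Coherent combining of N unit-amplitude signals yields amplitude N (power N^2), versus an expected amplitude of about sqrt(N) for random phases, which is the power gain the abstract refers to.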
Abstract:
Stylolites are rough paired surfaces, indicative of localized stress-induced dissolution under a non-hydrostatic state of stress, separated by a clay parting which is believed to be the residuum of the dissolved rock. These structures are the most frequent deformation pattern in monomineralic rocks and thus provide important information about low-temperature deformation and mass transfer. The intriguing roughness of stylolites can be used to assess the amount of volume loss and paleo-stress directions, and to infer the destabilizing processes during pressure solution. But there is little agreement on how stylolites form and why these localized pressure solution patterns develop their characteristic roughness.
Natural bedding-parallel and vertical stylolites were studied in this work to obtain a quantitative description of the stylolite roughness and to understand the governing processes during their formation. Adapting scaling approaches based on fractal principles, it is demonstrated that stylolites show two self-affine scaling regimes with roughness exponents of 1.1 and 0.5 for small and large length scales, separated by a crossover length at the millimeter scale. Analysis of stylolites from various depths proved that this crossover length is a function of the stress field during formation, as analytically predicted. For bedding-parallel stylolites the crossover length is a function of the normal stress on the interface, but vertical stylolites show a clear in-plane anisotropy of the crossover length owing to the fact that the in-plane stresses (σ2 and σ3) are dissimilar. Therefore the stylolite roughness contains a signature of the stress field during formation.
To address the origin of stylolite roughness, a combined microstructural (SEM/EBSD) and numerical approach is employed. Microstructural investigations of natural stylolites in limestones reveal that heterogeneities initially present in the host rock (clay particles, quartz grains) are responsible for the formation of the distinctive stylolite roughness. A two-dimensional numerical model, i.e. a discrete linear elastic lattice spring model, is used to investigate the roughness evolving from an initially flat, fluid-filled interface induced by heterogeneities in the matrix. This model generates rough interfaces with the same scaling properties as natural stylolites. Furthermore, two coinciding crossover phenomena in space and in time exist that separate length and time scales for which the roughening is balanced either by surface or by elastic energies. The roughness and growth exponents are independent of the size, amount and dissolution rate of the heterogeneities. This allows the conclusion that the location of asperities is determined by a polymict multi-scale quenched noise, while the roughening process is governed by inherent processes, i.e. the transition from a surface- to an elastic-energy-dominated regime.
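The self-affine scaling analysis behind the roughness exponents can be sketched as follows: a 1-D profile with a prescribed Hurst (roughness) exponent H is synthesised by random midpoint displacement, and H is then recovered from the slope of the height-difference structure function S(d) = <(h(x+d) - h(x))^2> ~ d^(2H). This is a generic illustration, not the thesis's analysis; real stylolite profiles show two regimes (exponents ~1.1 and ~0.5) separated by a crossover length.

```python
import math, random

random.seed(5)

# Synthesise an approximately self-affine profile by midpoint displacement:
# the displacement amplitude shrinks by 2^(-H) at every halving of the scale.
H_true = 0.8
levels = 13
N = 2 ** levels
h = [0.0] * (N + 1)
sigma, step = 1.0, N
while step > 1:
    half = step // 2
    for i in range(half, N, step):
        h[i] = 0.5 * (h[i - half] + h[i + half]) + random.gauss(0.0, sigma)
    sigma *= 2.0 ** (-H_true)
    step = half

# Structure function at dyadic lags, then a least-squares slope in log-log space.
lags = [2 ** k for k in range(1, 7)]
logS = []
for d in lags:
    s = sum((h[i + d] - h[i]) ** 2 for i in range(N + 1 - d)) / (N + 1 - d)
    logS.append(math.log(s))
logd = [math.log(d) for d in lags]
n = len(lags)
mx, my = sum(logd) / n, sum(logS) / n
slope = sum((a - mx) * (b - my) for a, b in zip(logd, logS)) / \
        sum((a - mx) ** 2 for a in logd)
H_est = slope / 2.0
print(H_est)  # roughly H_true = 0.8
```

On real stylolites one fits two such slopes, below and above the crossover length, which is what ties the roughness to the formation stress.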
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cut of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous works with the more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate.
An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
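A back-of-envelope version of the steady-state power balance underlying the 1-D model: absorbed laser power equals the enthalpy flux carried away by material removed from a kerf of width w moving at velocity v, giving an incision depth d = A·P / (rho·v·w·ΔH). The property values below are rounded textbook figures for aluminium and are purely illustrative; the thesis model additionally tracks multi-layer absorption and lateral heat flow.

```python
A = 0.10            # surface absorptivity (-), assumed value
P = 200.0           # laser power (W)
rho = 2700.0        # density of aluminium (kg/m^3)
v = 0.5             # cutting velocity (m/s)
w = 30e-6           # kerf width (m)
c = 900.0           # specific heat (J/kg/K)
dT = 2500.0 - 300.0 # heating from ambient to near the boiling point (K)
Lm = 4.0e5          # latent heat of melting (J/kg)
Lv = 1.08e7         # latent heat of vaporisation (J/kg)

dH = c * dT + Lm + Lv                # enthalpy needed per kg of removed material
depth = A * P / (rho * v * w * dH)   # steady-state incision depth (m)
print(depth * 1e6, "micrometres")
```

The full model evaluates this balance as an integral along the moving heat source and per layer, so the depth changes as each layer is removed.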
Abstract:
T helper (Th) 9 cells are an important subpopulation of CD4+ T helper cells. Due to their ability to secrete interleukin-(IL-)9, Th9 cells contribute essentially to the expulsion of parasitic helminths from the intestinal tract, but they also play an immunopathological role in the course of asthma. Recently, a beneficial function of Th9 cells in anti-tumor immune responses was reported: in a murine melanoma tumor model, Th9 cells were shown to enhance the anti-melanoma immune response via the recruitment of CD8+ T cells, dendritic cells and mast cells. In contrast to Th9 effector cells, regulatory T cells (Tregs) are able to control an immune response with the aid of different suppressive mechanisms. Based on their ability to suppress an immune response, Tregs are believed to be beneficial in asthma by diminishing excessive allergic reactions. Concerning cancer, however, they can have a detrimental function because Tregs inhibit an effective anti-tumor immune reaction. Thus, the analysis of Th9 suppression by Tregs is of central importance for the development of therapeutic strategies for the treatment of cancer and allergic diseases, and was therefore the main objective of this PhD thesis.
In general it could be demonstrated that the development of Th9 cells can be inhibited by Tregs in vitro. The production of the lineage-specific cytokine IL-9 by developing Th9 cells was completely suppressed at a Treg/Th9 ratio of 1:1 at the transcriptional (qRT-PCR) as well as the translational level (ELISA). In contrast, the expression of IRF4, which was found to strongly promote Th9 development, was not reduced in the presence of Tregs, suggesting that IRF4 requires additional transcription factors to induce the differentiation of Th9 cells. In order to identify such factors, which regulate Th9 development and therefore represent potential targets for Treg-mediated suppressive mechanisms, a transcriptome analysis using next-generation sequencing was performed. The expression of some genes found to be up- or downregulated in Th9 cells in the presence of Tregs was validated with qRT-PCR. Time limitations prevented a detailed functional analysis of these candidate genes. Nevertheless, the analysis of the suppressive mechanisms revealed that Tregs probably suppress Th9 cells via an increase of the intracellular cAMP concentration. In contrast, IL-9 production by differentiated Th9 cells was only marginally affected by Tregs in vitro and in vivo (asthma and melanoma models). Hence, Tregs represent very effective inhibitors of Th9 development, whereas they have only a minimal suppressive influence on differentiated Th9 cells.
Abstract:
In the oil and gas industry, pore-scale imaging and simulation are about to become routine applications. Their further potential lies in environmental applications, e.g. the transport and fate of contaminants in the subsurface, the storage of carbon dioxide and the natural attenuation of contaminants in soils. X-ray computed tomography (XCT) is a non-destructive 3D imaging technique that is frequently used to investigate the internal structure of geological samples. The first goal of this dissertation was the implementation of an image-processing technique that removes the beam-hardening artifact of X-ray computed tomography and simplifies the segmentation of its data. The second goal of this work was to investigate the combined effects of pore-space characteristics and pore tortuosity, together with flow simulation and transport modelling in pore spaces using the lattice Boltzmann method. In a cylindrical geological sample, the position of each phase could be extracted based on the observation that beam hardening in the reconstructed images is a radial function from the sample edge to the centre, and the different phases could be segmented automatically. Furthermore, beam-hardening effects of arbitrarily shaped objects were corrected by a surface-fitting algorithm. The least-squares support vector machine (LSSVM) method is characterised by a modular design and is very well suited for pattern recognition and classification. For this reason, the LSSVM method was implemented as a pixel-based classification method. This algorithm is able to classify complex geological samples correctly, but then requires longer computation times, so that multi-dimensional training data sets must be used.
The dynamics of the immiscible phases air and water are investigated by a combination of the pore-morphology and lattice Boltzmann methods, for drainage and imbibition processes in 3D data sets of soils obtained by synchrotron-based XCT. Although the pore-morphology approach is a simple method that fits spheres into the available pore space, it can nevertheless explain the complex capillary hysteresis as a function of water saturation. Hysteresis was observed for the capillary pressure and the hydraulic conductivity, caused by the mainly connected pore networks and the available pore-size distribution. The hydraulic conductivity is a function of the water-saturation level and is compared with macroscopic calculations of empirical models; the data agree well, especially at high water saturations. To be able to predict the presence of pathogens in groundwater and wastewater, the influence of grain size, pore geometry and fluid flow velocity was studied in a soil aggregate, e.g. with the microorganism Escherichia coli. The asymmetric, long-tailed breakthrough curves, especially at higher water saturations, were caused by dispersive transport due to the connected pore network and by the heterogeneity of the flow field. The biocolloidal residence time was observed to be a function of the pressure gradient as well as of the colloid size. Our modelling results agree very well with previously published data.
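The lattice Boltzmann method used above runs on full 3-D pore geometries; as a hedged toy illustration of its stream-and-collide structure only, here is a minimal one-dimensional D1Q3 scheme for pure diffusion of a concentration pulse on a periodic domain. It makes no attempt at the thesis's multiphase flow physics.

```python
# D1Q3 lattice: velocities 0, +1, -1 with the standard weights 4/6, 1/6, 1/6.
W = [4.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0]
n = 64
tau = 1.0                                # relaxation time (sets the diffusivity)

# Initial condition: a unit concentration pulse in the middle of the domain.
rho = [0.0] * n
rho[n // 2] = 1.0
f = [[w * r for r in rho] for w in W]    # distributions start at equilibrium

for _ in range(50):
    # Collide: relax each population towards the local equilibrium w_i * rho.
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
    for i, w in enumerate(W):
        for x in range(n):
            f[i][x] += (w * rho[x] - f[i][x]) / tau
    # Stream: shift the +1 and -1 populations along their velocities (periodic).
    f[1] = [f[1][(x - 1) % n] for x in range(n)]
    f[2] = [f[2][(x + 1) % n] for x in range(n)]

rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
print(sum(rho))  # mass is conserved while the pulse spreads diffusively
```

The 3-D flow solvers in the thesis use the same collide-then-stream cycle, but with larger velocity sets (e.g. D3Q19), solid-wall boundary conditions from the segmented XCT data, and multiphase forcing terms.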
Abstract:
The Standard Model of particle physics is a very successful theory which describes nearly all known processes of particle physics very precisely. Nevertheless, there are several observations which cannot be explained within the existing theory. In this thesis, two analyses with high-energy electrons and positrons using data of the ATLAS detector are presented: one probing the Standard Model of particle physics and another searching for phenomena beyond the Standard Model.
The production of an electron-positron pair via the Drell-Yan process leads to a very clean signature in the detector with low background contributions. This allows for a very precise measurement of the cross-section and can be used as a precision test of perturbative quantum chromodynamics (pQCD), where this process has been calculated at next-to-next-to-leading order (NNLO). The invariant mass spectrum m_ee is sensitive to parton distribution functions (PDFs), in particular to the poorly known distribution of antiquarks at large momentum fraction (Bjorken x). The measurement of the high-mass Drell-Yan cross-section in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV is performed on a dataset collected with the ATLAS detector, corresponding to an integrated luminosity of 4.7 fb^-1. The differential cross-section of pp -> Z/gamma* + X -> e+e- + X is measured as a function of the invariant mass in the range 116 GeV < m_ee < 1500 GeV. The background is estimated using a data-driven method and Monte Carlo simulations. The final cross-section is corrected for detector effects and different levels of final-state radiation corrections. A comparison is made to various event generators and to predictions of pQCD calculations at NNLO. A good agreement within the uncertainties between measured cross-sections and Standard Model predictions is observed.
Examples of observed phenomena which cannot be explained by the Standard Model are the amount of dark matter in the universe and neutrino oscillations. To explain these phenomena, several extensions of the Standard Model have been proposed, some of them leading to new processes with a high multiplicity of electrons and/or positrons in the final state. A model-independent search in multi-object final states, with objects defined as electrons and positrons, is performed to search for these phenomena. The dataset collected at a center-of-mass energy of sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 20.3 fb^-1, is used. The events are separated into different categories using the object multiplicity. The data-driven background method already used for the cross-section measurement was developed further for up to five objects to obtain an estimate of the number of events including fake contributions. Within the uncertainties, the comparison between data and Standard Model predictions shows no significant deviations.
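The measured quantity m_ee is the dielectron invariant mass, built from the two electron candidates' transverse momentum, pseudorapidity and azimuth. For (effectively) massless particles it reduces to the standard collider formula sketched below with illustrative kinematics; the values are invented, not ATLAS data.

```python
import math

def invariant_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Dielectron invariant mass for massless candidates:
    m_ee = sqrt(2 pT1 pT2 (cosh(eta1-eta2) - cos(phi1-phi2)))."""
    return math.sqrt(2.0 * pt1 * pt2 *
                     (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

# A back-to-back pair at central rapidity (illustrative values in GeV):
m = invariant_mass(45.6, 0.0, 0.0, 45.6, 0.0, math.pi)
print(m)  # 91.2 GeV, in the Z resonance region
```

The high-mass measurement above selects pairs with m_ee between 116 and 1500 GeV, i.e. well above the Z peak that this toy configuration lands on.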
Abstract:
Systems Biology is an innovative way of doing biology that has recently arisen in bioinformatics contexts, characterised by the study of biological systems as complex systems, with a strong focus on the system level and on the interaction dimension. In other words, the objective is to understand biological systems as a whole, putting in the foreground not only the study of the individual parts as standalone parts, but also their interaction and the global properties that emerge at the system level by means of the interaction among the parts. This thesis focuses on the adoption of multi-agent systems (MAS) as a suitable paradigm for Systems Biology, for developing models and simulations of complex biological systems. Multi-agent systems have recently been introduced in informatics contexts as a suitable paradigm for modelling and engineering complex systems. Roughly speaking, a MAS can be conceived as a set of autonomous and interacting entities, called agents, situated in some kind of environment, where they fruitfully interact and coordinate so as to obtain a coherent global system behaviour. The claim of this work is that the general properties of MAS make them an effective approach for modelling and building simulations of complex biological systems, following the methodological principles identified by Systems Biology. In particular, the thesis focuses on cell populations as biological systems. In order to support the claim, the thesis introduces and describes (i) a MAS-based model conceived for modelling the dynamics of systems of cells interacting inside cell environments called niches, and (ii) a computational tool developed for implementing the models and executing the simulations.
The tool is meant to work as a kind of virtual laboratory, on top of which various kinds of virtual experiments can be performed, characterised by the definition and execution of specific models implemented as MASs, so as to support the validation, falsification and improvement of the models through the observation and analysis of the simulations. A hematopoietic stem cell system is taken as the reference case study for formulating a specific model and executing virtual experiments.
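A toy agent-based simulation in the spirit described above: cell agents living in a niche of limited capacity stochastically die or divide each step. The rates, capacity and rules here are invented for illustration and are not the thesis's hematopoietic model, which is far richer.

```python
import random

random.seed(2)

CAPACITY = 100   # maximum number of cells the niche can host (assumed value)

class Cell:
    """An autonomous cell agent situated in a shared niche environment."""
    def __init__(self):
        self.age = 0

    def step(self, niche):
        self.age += 1
        if random.random() < 0.05:                 # spontaneous death
            niche.remove(self)
        elif len(niche) < CAPACITY and random.random() < 0.20:
            niche.append(Cell())                   # divide only if the niche has room

niche = [Cell() for _ in range(10)]
for _ in range(200):
    for cell in list(niche):    # iterate over a snapshot: agents join/leave mid-step
        cell.step(niche)

print(len(niche))  # the population grows and then fluctuates below CAPACITY
```

The emergent behaviour (growth to a carrying capacity) is not coded anywhere explicitly; it arises from local agent rules plus the shared environment, which is the methodological point of the MAS approach.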
Abstract:
The thesis, which is part of a project exploring approaches to multi-platform programming between Java and iOS, aims to continue and extend the study of the RoboVM tool, in particular through the development of the iTuCSoN application, a porting of the Command Line Interpreter included in TuCSoN (http://tucson.apice.unibo.it/).
Abstract:
In this paper we propose a variational approach for multimodal image registration based on the diffeomorphic demons algorithm. Diffeomorphic demons has proven to be a robust and efficient method for intensity-based image registration. However, its main drawback is that it cannot deal with multiple modalities. We propose to replace the standard demons similarity metric (image intensity differences) with point-wise mutual information (PMI) in the energy function. By comparing the accuracy of our PMI-based diffeomorphic demons with the B-spline-based free-form deformation approach (FFD) on simulated deformations, we show that the proposed algorithm performs significantly better.
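The similarity term proposed above can be sketched in isolation: the pointwise mutual information of an intensity pair (a, b) is PMI(a, b) = log p(a,b) / (p(a) p(b)), with the probabilities estimated from the joint histogram of the two images. The toy quantised "images" below are invented; the paper's estimator and its coupling to the demons update are of course more involved.

```python
import math
from collections import Counter

img_a = [0, 0, 0, 0, 1, 1, 1, 1]   # two tiny "images" as flat intensity lists
img_b = [0, 0, 0, 1, 1, 1, 1, 1]   # mostly, but not perfectly, aligned with img_a

n = len(img_a)
joint = Counter(zip(img_a, img_b))   # joint intensity histogram
pa = Counter(img_a)                  # marginal histogram of image A
pb = Counter(img_b)                  # marginal histogram of image B

def pmi(a, b):
    """Pointwise mutual information of the intensity pair (a, b)."""
    p_ab = joint[(a, b)] / n
    return math.log(p_ab / ((pa[a] / n) * (pb[b] / n)))

print(pmi(0, 0), pmi(1, 1))  # positive: these pairs co-occur more than chance
```

Unlike plain intensity differences, this score rewards any consistent statistical co-occurrence of intensities, which is what makes it usable across modalities where corresponding structures have different grey values.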
Abstract:
It has been suggested that there are several distinct phenotypes of childhood asthma or childhood wheezing. Here, we review the research relating to these phenotypes, with a focus on the methods used to define and validate them. Childhood wheezing disorders manifest themselves in a range of observable (phenotypic) features such as lung function, bronchial responsiveness, atopy and a highly variable time course (prognosis). The underlying causes are not sufficiently understood to define disease entities based on aetiology. Nevertheless, there is a need for a classification that would (i) facilitate research into aetiology and pathophysiology, (ii) allow targeted treatment and preventive measures and (iii) improve the prediction of long-term outcome. Classical attempts to define phenotypes have been one-dimensional, relying on few or single features such as triggers (exclusive viral wheeze vs. multiple trigger wheeze) or time course (early transient wheeze, persistent and late onset wheeze). These definitions are simple but essentially subjective. Recently, a multi-dimensional approach has been adopted. This approach is based on a wide range of features and relies on multivariate methods such as cluster or latent class analysis. Phenotypes identified in this manner are more complex but arguably more objective. Although phenotypes have an undisputed standing in current research on childhood asthma and wheezing, there is confusion about the meaning of the term 'phenotype' causing much circular debate. If phenotypes are meant to represent 'real' underlying disease entities rather than superficial features, there is a need for validation and harmonization of definitions. The multi-dimensional approach allows validation by replication across different populations and may contribute to a more reliable classification of childhood wheezing disorders and to improved precision of research relying on phenotype recognition, particularly in genetics. 
Ultimately, the underlying pathophysiology and aetiology will need to be understood to properly characterize the diseases causing recurrent wheeze in children.
Abstract:
A new simple method for the two-dimensional determination of the optical density of the macular pigment xanthophyll (ODx) in clinical routine is based on a single blue-reflection fundus image. Individually different vignetting is corrected by a shading function; for its construction, nodes are automatically found in structureless image regions. The influence of stray light in elderly crystalline lenses is compensated by a correction function that depends on age. The reproducibility of the parameters of this one-wavelength reflection method, determined for three subjects (47, 61, and 78 years old), was maxODx = 6.3%, meanODx = 4.6%, volume = 6%, and area = 6% already before stray-light correction. ODx was comparable between pseudophakic eyes and eyes with a crystalline lens in the same 11 subjects after stray-light correction. A significant correlation in ODx was found between the one-wavelength reflection method and the two-wavelength autofluorescence method for pseudophakic and cataract eyes of 19 patients suffering from dry age-related macular degeneration (AMD) (R^2 = 0.855). In pseudophakic eyes, maxODx was significantly lower in dry AMD (n = 45) (ODx = 0.491±0.102 ODU) than in eyes with a healthy fundus (n = 22) (ODx = 0.615±0.103 ODU) (p = 0.000033). Also in eyes with a crystalline lens, maxODx was lower in AMD (n = 125) (ODx = 0.610±0.093 ODU) than in healthy subjects (n = 45) (ODx = 0.674±0.098 ODU) (p = 0.00019). No dependence on age was found in the pseudophakic eyes of either healthy subjects or AMD patients.
Abstract:
OBJECTIVE: To evaluate the expression of the 5-hydroxytryptamine 4 (5-HT4) receptor subtype and investigate the modulating function of those receptors on contractility in intestinal tissues obtained from horses without gastrointestinal tract disease. SAMPLE POPULATION: Smooth muscle preparations from the duodenum, ileum, and pelvic flexure collected immediately after slaughter of 24 horses with no history or signs of gastrointestinal tract disease. PROCEDURES: In isometric organ baths, the contractile activities of smooth muscle preparations in response to 5-hydroxytryptamine and electric field stimulation were assessed; the effect of tegaserod alone or in combination with 5-hydroxytryptamine on contractility of intestinal specimens was also investigated. Presence and distribution of 5-HT4 receptors in intestinal tissues and localization on interstitial cells of Cajal were examined by use of an immunofluorescence technique. RESULTS: Widespread 5-HT4 receptor immunoreactivity was observed in all intestinal smooth muscle layers; 5-HT4 receptors were absent from the myenteric plexus and interstitial cells of Cajal. In electrical field-stimulated tissue preparations of duodenum and pelvic flexure, tegaserod increased the amplitude of smooth muscle contractions in a concentration-dependent manner. Preincubation with tegaserod significantly decreased the basal tone of the 5-HT-evoked contractility in small intestine specimens, compared with the effect of 5-HT alone, thereby confirming that tegaserod was acting as a partial agonist. CONCLUSIONS AND CLINICAL RELEVANCE: In horses, 5-HT4 receptors on smooth muscle cells appear to be involved in the contractile response of the intestinal tract to 5-hydroxytryptamine. Results suggest that tegaserod may be useful for treatment of reduced gastrointestinal tract motility in horses.