23 results for rank-based procedure

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 90.00%

Abstract:

In the past two decades, a growing portion of robotics research has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the non-linear nature of cables, which can exert only tensile forces. The work presented in this thesis therefore focuses on the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work concerns the development of an interval-analysis-based procedure for the solution of the direct geometric problem of a generic cable manipulator. Besides allowing a rapid solution of the problem, this technique also guarantees the results against rounding and elimination errors and can take into account uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator, whose realization is described in this dissertation together with the auxiliary work done during its design and simulation phases.
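As an illustration of the branch-and-prune idea underlying interval-analysis solvers, the following minimal sketch certifies solution boxes of a toy closure equation; the thesis's actual equations, tolerances and contraction operators are not reproduced here:

```python
# Minimal branch-and-prune interval solver sketch (illustrative only).
# A box is pruned when the interval evaluation of a closure equation excludes 0,
# so every reported candidate survives rounding errors by construction.
from itertools import product

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def isqr(a):
    lo, hi = min(abs(a[0]), abs(a[1])), max(abs(a[0]), abs(a[1]))
    return (0.0 if a[0] <= 0.0 <= a[1] else lo * lo, hi * hi)

def contains_zero(f, box):
    lo, hi = f(box)
    return lo <= 0.0 <= hi

def solve(f_list, box, tol=1e-2):
    """Return sub-boxes of `box` that may contain a solution of all f_i = 0."""
    stack, certified = [box], []
    while stack:
        b = stack.pop()
        if not all(contains_zero(f, b) for f in f_list):
            continue                     # 0 excluded: prune the whole box
        widths = [hi - lo for lo, hi in b]
        k = max(range(len(b)), key=lambda i: widths[i])
        if widths[k] < tol:
            certified.append(b)          # small enough: report as candidate
            continue
        lo, hi = b[k]
        mid = 0.5 * (lo + hi)            # bisect the widest coordinate
        for half in ((lo, mid), (mid, hi)):
            stack.append(b[:k] + [half] + b[k + 1:])
    return certified

# Toy closure equation x^2 + y^2 - 1 = 0 (a cable-length constraint is similar):
f = lambda b: isub(iadd(isqr(b[0]), isqr(b[1])), (1.0, 1.0))
print(len(solve([f], [(-2.0, 2.0), (-2.0, 2.0)])))
```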

Relevance: 80.00%

Abstract:

The present thesis focuses on the behaviour of elastic waves in ordinary structures as well as in acousto-elastic metamaterials, via numerical and experimental applications. After a brief introduction on the behaviour of elastic guided waves in the framework of non-destructive evaluation (NDE) and structural health monitoring (SHM), and on the study of elastic wave propagation in acousto-elastic metamaterials, dispersion curves for thin-walled beams and arbitrary cross-section waveguides are extracted via Semi-Analytical Finite Element (SAFE) methods. Next, a novel strategy tackling signal dispersion to locate defects in irregular waveguides is proposed and numerically validated. Finally, a time-reversal and laser-vibrometry based procedure for impact location is numerically and experimentally tested. In the second part, an introduction and a brief review of the basic definitions necessary to describe acousto-elastic metamaterials are provided, and a numerical approach to extract the dispersion properties of such structures is highlighted. Afterwards, solid-solid and solid-fluid phononic systems are discussed via numerical applications. In particular, band structures and transmission power spectra are predicted for 1P-2D, 2P-2D and 2P-3D phononic systems. In addition, attenuation bands in the ultrasonic as well as in the sonic frequency regimes are experimentally investigated: PZTs in a pitch-catch configuration and laser vibrometric measurements are used on a PVC phononic plate in the ultrasonic frequency range, and the sound insulation index is computed for a 2P-3D phononic barrier in the sonic frequency range. In both cases the comparison of numerical and experimental results confirms the existence of the numerically predicted band-gaps. Finally, the feasibility of an innovative passive isolation strategy based on giant elastic metamaterials is numerically proved to be practical for civil structures. In particular, attenuation of seismic waves is demonstrated via finite element analyses, and a parametric study shows that, depending on the soil properties, such an earthquake-proof barrier could lead to a significant reduction of the superstructure displacement.
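Phononic band-gap computation can be illustrated, in one dimension, by the classical transfer-matrix dispersion relation for a two-layer unit cell; the sketch below uses made-up layer properties, not those of the PVC plate studied in the thesis:

```python
# Band structure of a 1D two-layer phononic crystal via the classical
# transfer-matrix (Rytov) dispersion relation; a minimal illustrative sketch.
import numpy as np

rho1, c1, d1 = 1400.0, 2300.0, 0.01   # layer 1: density, wave speed, thickness (assumed)
rho2, c2, d2 = 7800.0, 5900.0, 0.01   # layer 2 (assumed)
Z1, Z2 = rho1 * c1, rho2 * c2          # acoustic impedances

freqs = np.linspace(1.0, 300e3, 4000)  # Hz
omega = 2 * np.pi * freqs
# cos(q a) for a Bloch wave; |value| > 1 means the frequency lies in a band gap
cos_qa = (np.cos(omega / c1 * d1) * np.cos(omega / c2 * d2)
          - 0.5 * (Z1 / Z2 + Z2 / Z1)
          * np.sin(omega / c1 * d1) * np.sin(omega / c2 * d2))

in_gap = np.abs(cos_qa) > 1.0
edges = np.flatnonzero(np.diff(in_gap.astype(int)))
print("band-gap edges (kHz):", np.round(freqs[edges] / 1e3, 1))
```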

Relevance: 80.00%

Abstract:

Historical evidence shows that chemical, process, and Oil&Gas facilities where dangerous substances are stored or handled are targets of deliberate malicious attacks (security attacks) aimed at interfering with normal operations. Physical attacks and cyber-attacks may generate events with consequences on people, property, and the surrounding environment that are comparable to those of major accidents with safety-related causes. The security aspects of these facilities are commonly addressed using Security Vulnerability/Risk Assessment (SVA/SRA) methodologies. Most of these methodologies are semi-quantitative, non-systematic approaches that rely strongly on expert judgment, leading to security assessments that are not reproducible; moreover, they do not consider synergies with the safety domain. The present 3-year research aims at filling this gap by providing knowledge on security attacks, as well as rigorous and systematic methods supporting existing SVA/SRA studies suitable for the chemical, process, and Oil&Gas industry. The different nature of cyber and physical attacks resulted in the development of different methods for the two domains. The first part of the research was devoted to the development and statistical analysis of security databases, which yielded new knowledge and lessons learnt on security threats. Based on this background, a Bow-Tie based procedure and two reverse-HazOp based methodologies were developed as hazard identification approaches for physical and cyber threats respectively. To support the quantitative estimation of the security risk, a quantitative procedure based on Bayesian Networks was developed, allowing the calculation of the probability of success of physical security attacks. All the developed methods have been applied to case studies addressing chemical, process and Oil&Gas facilities (offshore and onshore), proving the quality of the results that can be achieved in improving site security. Furthermore, the outcomes allow a step forward in developing synergies and promoting integration between safety and security management.
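As a toy illustration of how a Bayesian-Network style model yields the probability of success of a physical attack, the following sketch enumerates three assumed protection layers; the structure and all numbers are invented for illustration, not taken from the thesis:

```python
# Toy enumeration of a small attack-success network (illustrative only).
from itertools import product

# Marginal probabilities of each protection layer working (assumed values)
p_detect, p_delay, p_respond_given_detect = 0.8, 0.7, 0.9

def p_success():
    """Enumerate all states: the attack fails only if it is detected,
    delayed long enough, and the response arrives in time."""
    total = 0.0
    for detect, delay, respond in product([True, False], repeat=3):
        p = (p_detect if detect else 1 - p_detect)
        p *= (p_delay if delay else 1 - p_delay)
        pr = p_respond_given_detect if detect else 0.0   # response requires detection
        p *= (pr if respond else 1 - pr)
        if not (detect and delay and respond):           # any failed layer => success
            total += p
    return total

print(f"P(attack success) = {p_success():.3f}")   # -> 0.496 with these numbers
```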

Relevance: 80.00%

Abstract:

This thesis focuses on automating the time-consuming task of manually counting activated neurons in fluorescent microscopy images, which is used to study the mechanisms underlying torpor. The traditional method of manual annotation can introduce bias and delay the outcome of experiments, so a deep-learning-based procedure to automate this task is investigated. Two state-of-the-art convolutional neural network (CNN) architectures are explored, UNet and the ResUnet family of models, together with a counting-by-segmentation strategy that makes explicit which objects are considered during the counting process. A weakly-supervised learning strategy that exploits only dot annotations is also explored. The advantages obtainable with a transfer-learning approach, and specifically with a fine-tuning procedure, are quantified in terms of data reduction and counting-performance boost. The dataset used for the supervised use case and all the pre-trained models have been released, and a web application was designed to share both the counting pipeline developed in this work and the models pre-trained on the analyzed dataset.
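A minimal sketch of the counting-by-segmentation step, assuming a hypothetical CNN probability map as input: thresholding plus connected-component labelling. The thesis's UNet/ResUnet models are not used here; a synthetic map stands in for their output:

```python
# Turn a (hypothetical) segmentation probability map into an object count.
import numpy as np
from scipy import ndimage

def count_activated_neurons(prob_map, threshold=0.5, min_area=20):
    """Count connected blobs in a [0,1] segmentation map, discarding
    components smaller than `min_area` pixels (likely noise)."""
    mask = prob_map >= threshold
    labels, n = ndimage.label(mask)          # default 4-connectivity; fine for a sketch
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    return int(np.sum(np.asarray(areas) >= min_area))

# Synthetic probability map with two bright blobs standing in for CNN output
rng = np.random.default_rng(0)
pm = rng.uniform(0.0, 0.3, size=(128, 128))
pm[20:40, 20:40] = 0.9
pm[80:105, 60:90] = 0.95
print(count_activated_neurons(pm))  # -> 2
```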

Relevance: 30.00%

Abstract:

Interaction protocols establish how different computational entities can interact with each other. The interaction can be aimed at the exchange of data, as in 'communication protocols', or oriented to achieve some result, as in 'application protocols'. Moreover, with the increasing complexity of modern distributed systems, protocols are also used to control such complexity and to ensure that the system as a whole evolves with certain features. However, the extensive use of protocols has raised some issues, ranging from the language for specifying them to several verification aspects. Computational Logic provides models, languages and tools that can be effectively adopted to address such issues: its declarative nature can be exploited for a protocol specification language, while its operational counterpart can be used to reason upon such specifications. In this thesis we propose a proof-theoretic framework, called SCIFF, together with its extensions. SCIFF is based on Abductive Logic Programming, and provides a formal specification language with a clear declarative semantics (based on abduction). The operational counterpart is given by a proof procedure that allows one to reason upon the specifications and to test the conformance of given interactions w.r.t. a defined protocol. Moreover, by suitably adapting the SCIFF framework, we propose solutions for addressing (1) the verification of protocol properties (g-SCIFF Framework), and (2) the a-priori conformance verification of peers w.r.t. a given protocol (AlLoWS Framework). We also introduce an agent-based architecture, the SCIFF Agent Platform, where the same protocol specification can be used both to program the interacting peers and to ease their implementation task.
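A drastically simplified illustration of expectation-based conformance checking in the spirit of SCIFF: the real framework relies on an abductive proof procedure, whereas this sketch only checks a ground event log against positive and negative expectations (all event names are invented):

```python
# Check a finished interaction log against E (expected) and EN (forbidden) events.
def conformant(happened: set, expected: set, forbidden: set) -> bool:
    """An interaction is conformant if every positive expectation is fulfilled
    by a happened event and no forbidden event has happened."""
    fulfilled = expected <= happened           # E-expectations must all occur
    no_violation = not (forbidden & happened)  # EN-expectations must not occur
    return fulfilled and no_violation

log = {("tell", "alice", "bob", "price"), ("tell", "bob", "alice", "accept")}
E  = {("tell", "bob", "alice", "accept")}      # bob is expected to answer
EN = {("tell", "bob", "carol", "price")}       # bob must not leak the price
print(conformant(log, E, EN))  # -> True
```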

Relevance: 30.00%

Abstract:

The study of protein expression profiles for biomarker discovery in serum and in mammalian cell populations requires the continuous improvement and combination of protein/peptide separation techniques, mass spectrometry, and statistical and bioinformatic approaches. In this thesis work, two different mass spectrometry-based protein profiling strategies have been developed and applied to liver and inflammatory bowel diseases (IBDs) for the discovery of new biomarkers. The first, based on bulk solid-phase extraction combined with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) and chemometric analysis of serum samples, was applied to the study of serum protein expression profiles both in IBDs (Crohn's disease and ulcerative colitis) and in liver diseases (cirrhosis, hepatocellular carcinoma, viral hepatitis). The approach allowed the enrichment of serum proteins/peptides, thanks to the large interaction surface between analytes and solid phase, with high recovery, since the elution step is performed directly on the MALDI target plate. Furthermore, the use of a chemometric algorithm for the selection of the variables with the highest discriminant power made it possible to identify patterns of 20-30 proteins involved in the differentiation and classification of serum samples from healthy donors and diseased patients. These protein profiles discriminate among the pathologies with excellent classification and prediction abilities. In particular, in the study of inflammatory bowel diseases, after C18-based analysis of 129 serum samples from healthy donors, Crohn's disease, ulcerative colitis and inflammatory control patients, a classification ability of 90.7% and a prediction ability of 72.9% were obtained. In the study of liver diseases (hepatocellular carcinoma, viral hepatitis and cirrhosis), a prediction ability of 80.6% was achieved using IDA-Cu(II) as extraction procedure. The identification of the selected proteins by MALDI-TOF/TOF MS analysis, or by their selective enrichment followed by enzymatic digestion and MS/MS analysis, may give useful information for the identification of new biomarkers involved in the diseases. The second strategy was based on label-free liquid chromatography electrospray ionization quadrupole time-of-flight differential analysis (LC ESI-QTOF MS), combined with targeted MS/MS analysis of the identified differences only. The strategy was used for biomarker discovery in IBDs, and in particular in Crohn's disease. The enriched serum peptidome and the subcellular fractions of intestinal epithelial cells (IECs) from healthy donors and Crohn's disease patients were analysed. Combining the enrichment step for low-molecular-weight serum proteins with the LC-MS approach allowed the evaluation of a pattern of peptides derived from specific exoprotease activity in the coagulation and complement activation pathways. Among these peptides, particularly interesting was the discovery of clusters of peptides from fibrinopeptide A, Apolipoprotein E and A4, and complement C3 and C4. Further studies need to be performed to evaluate the specificity of these clusters and to validate the results, in order to develop a rapid serum diagnostic test. The label-free LC ESI-QTOF MS differential analysis of the subcellular fractions of IECs from Crohn's disease patients and healthy donors revealed many proteins that could be involved in the inflammation process. Among them, heat shock protein 70, tryptase alpha-1 precursor and proteins whose upregulation can be explained by the increased activity of IECs in Crohn's disease were identified. Follow-up studies for the validation of the results and the in-depth investigation of the inflammation pathways involved in the disease will be performed. Both of the developed mass spectrometry-based protein profiling strategies have proved to be useful tools for the discovery of disease biomarkers, which need to be validated in further studies.
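The chemometric classification step can be sketched, on synthetic data, as dimensionality reduction followed by linear discriminant analysis with cross-validation; the thesis's actual variable-selection algorithm, spectra and patient cohorts are not reproduced:

```python
# PCA + LDA classification of synthetic mass-spectral profiles (illustrative).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_per_class, n_mz = 40, 500             # samples per class, m/z bins per spectrum
healthy = rng.normal(0.0, 1.0, (n_per_class, n_mz))
disease = rng.normal(0.0, 1.0, (n_per_class, n_mz))
disease[:, :25] += 1.5                  # ~25 discriminant peaks, like a 20-30 protein panel
X = np.vstack([healthy, disease])
y = np.array([0] * n_per_class + [1] * n_per_class)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
acc = cross_val_score(model, X, y, cv=5)
print(f"cross-validated prediction ability: {acc.mean():.1%}")
```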

Relevance: 30.00%

Abstract:

Mixed integer programming is to date one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as, e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains very active in trying to answer some of them; as a consequence, a huge number of papers are continuously published and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first one occurs when we are asked to handle a general MIP and we cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general purpose techniques. The second one occurs when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special purpose techniques. This thesis tries to give some insights into both of the above mentioned situations. The first part of the work focuses on general purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers have brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau arising from lattice-free triangles, together with some preliminary computational results.
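As a small illustration of tableau-based cut generation, the following sketch derives a Gomory fractional cut (the pure-integer ancestor of the mixed integer cuts recalled in Chapter 2) from a single tableau row with invented coefficients:

```python
# Gomory fractional cut from one simplex-tableau row (toy numbers; the
# thesis's disjunctive-cut machinery is far more general than this).
import math

def gomory_cut(row, rhs):
    """Given a tableau row  x_B + sum_j a_j x_j = b  with fractional b and
    nonbasic integer variables x_j >= 0, return the valid cut
    sum_j f(a_j) x_j >= f(b), where f(t) = t - floor(t)."""
    f = lambda t: t - math.floor(t)
    f0 = f(rhs)
    assert f0 > 1e-9, "row must have a fractional right-hand side"
    return {j: f(a) for j, a in row.items()}, f0

coeffs, f0 = gomory_cut({"x3": 0.25, "x4": -1.75}, rhs=3.5)
print(coeffs, ">=", f0)   # {'x3': 0.25, 'x4': 0.25} >= 0.5
```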
The second part of the thesis focuses instead on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in the attempt of finding a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well known TSP where each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present an overall (quite) general idea based on a relaxed discretization of time windows. Such an idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the usage of general purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
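The destroy-and-repair paradigm can be sketched as follows; for brevity the repair step here is a greedy re-insertion, whereas in the thesis the neighborhood is instead explored by handing an integer programming formulation to a general purpose MIP solver:

```python
# Destroy-and-repair skeleton for a single-route TSP-like problem (illustrative).
import random

def tour_cost(tour, dist):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def destroy(tour, k, rng):
    removed = rng.sample(tour, k)                       # random destruction
    return [c for c in tour if c not in removed], removed

def repair(partial, removed, dist):
    for c in removed:                                   # greedy cheapest insertion
        best = min(range(len(partial) + 1),
                   key=lambda i: tour_cost(partial[:i] + [c] + partial[i:], dist))
        partial = partial[:best] + [c] + partial[best:]
    return partial

def destroy_and_repair(dist, iters=200, k=3, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(dist)))
    best = tour_cost(tour, dist)
    for _ in range(iters):
        cand = repair(*destroy(tour, k, rng), dist)
        if tour_cost(cand, dist) < best:
            tour, best = cand, tour_cost(cand, dist)    # accept improvements only
    return tour, best

n = 12
rng = random.Random(1)
D = [[0 if i == j else rng.randint(1, 99) for j in range(n)] for i in range(n)]
print(destroy_and_repair(D))
```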

Relevance: 30.00%

Abstract:

The main aims of my PhD research work have been the investigation of the redox, photophysical and electronic properties of carbon nanotubes (CNT) and their possible uses as functional substrates for the (electro)catalytic production of oxygen and as molecular connectors for Quantum-dot Molecular Automata. While many diverse applications of CNT in electronics, in the sensor and biosensor field, and as structural reinforcement in composite materials have long been proposed, the study of their properties as individual species has long been a challenging task. CNT are in fact virtually insoluble in any solvent and, for years, most studies were carried out on bulk samples (bundles). In Chapter 2 an appropriate description of carbon nanotubes is reported, covering their production methods and the functionalization strategies for their solubilization. Chapter 3 reports an extensive voltammetric and vis-NIR spectroelectrochemical investigation of true solutions of unfunctionalized individual single wall CNT (SWNT), which made it possible to determine, for the first time, the standard electrochemical potentials of reduction and oxidation as a function of the tube diameter for a large number of semiconducting SWNTs. We also established the Fermi energy and the exciton binding energy for individual tubes in solution and, from the linear correlation found between the potentials and the optical transition energies, were able to calculate the redox potentials of SWNTs that are insufficiently abundant or absent in the samples. In Chapter 4 we report on very efficient and stable nano-structured, oxygen-evolving anodes (OEA) obtained by assembling an oxygen-evolving polyoxometalate cluster (a totally inorganic ruthenium catalyst) with a conducting bed of multiwalled carbon nanotubes (MWCNT). Here, MWCNT were effectively used as carriers of the polyoxometalate for the electrocatalytic production of oxygen and turned out to greatly increase both the efficiency and the stability of the device, avoiding the release of the catalyst. Our bioinspired electrode addresses the major challenge of artificial photosynthesis, i.e. efficient water oxidation, taking us closer to the day when we might power the planet with carbon-free fuels. In Chapter 5 a study is reported on surface-active chiral bis-ferrocenes conveniently designed to act as prototypical units for molecular computing devices. Preliminary electrochemical studies in a liquid environment demonstrated the capability of such molecules to enter three distinguishable oxidation states. The introduction of side chains allowed them to be organized in the form of self-assembled monolayers (SAM) onto a surface, and their molecular and redox properties to be studied on solid substrates. Electrochemical studies on SAMs of these molecules confirmed their aptitude to undergo fast (Nernstian) electron transfer processes generating, in the positive potential region, either the fully oxidized Fc+-Fc+ or the partly oxidized Fc+-Fc species. Finally, in Chapter 6 we report a preliminary electrochemical study of graphene solutions prepared according to an original procedure recently described in the literature. Graphene is the newest member of the carbon nanomaterial family and is certainly bound to be among the most promising materials for the next nanoelectronic generation.
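The correlation step can be illustrated by a simple linear fit between optical transition energies and redox potentials, which then predicts the potential of a tube absent from the samples; all numbers below are made up for illustration:

```python
# Linear correlation between optical transition energy and reduction potential
# (invented data points; the thesis's measured values are not reproduced here).
import numpy as np

E_opt = np.array([1.10, 1.00, 0.92, 0.85, 0.78])       # optical transition energies (eV)
E_red = np.array([-0.82, -0.77, -0.72, -0.68, -0.63])  # reduction potentials (V)

slope, intercept = np.polyfit(E_opt, E_red, 1)          # least-squares line
predict = lambda e: slope * e + intercept
print(f"estimated reduction potential at E_opt = 0.95 eV: {predict(0.95):.2f} V")
```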

Relevance: 30.00%

Abstract:

The research performed during the PhD and presented in this thesis made it possible to assess the pushover analysis method and its application in evaluating the structural seismic response. In this sense, the extensive critical review of existing pushover procedures (illustrated in Chapter 1) outlined their major issues, related to the assumptions and hypotheses made in applying the method. Therefore, with the purpose of evaluating the effectiveness of pushover procedures, a wide numerical investigation has been performed. In particular, the attention has been focused on structural irregularity in elevation, on the choice of the load vector and on its updating criteria. Eight pushover procedures have been considered in the study, of which four are conventional, one is multi-modal, and three are adaptive. The evaluation of their effectiveness in identifying the correct dynamic structural response has been carried out by performing several dynamic and static non-linear analyses on eight RC frames characterized by different properties in terms of regularity in elevation. The comparison of static and dynamic results then made it possible to evaluate the examined pushover procedures and to identify the margin of error to be expected when using each of them. Both for base shear-top displacement curves and for the considered storey parameters, the best agreement with the dynamic response was observed for the Multi-Modal Pushover procedure. The attention was therefore focused on Displacement-based Adaptive Pushover, for which an improvement strategy has been defined, and on modal combination rules, proposing an innovative method based on a quadratic combination of the modal shapes (QMC). The latter has been implemented in a conventional pushover procedure, whose results have been compared with those obtained by other multi-modal procedures. The development of research on pushover analysis is very important because the objective is to arrive at the definition of a simple, effective and reliable analysis method, an indispensable tool in the seismic evaluation of new or existing structures.
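The construction of a multi-modal pushover load vector can be sketched as follows; the SRSS rule is used here as a standard stand-in, since the thesis's QMC rule (a quadratic combination of the modal shapes) is not reproduced, and the masses, mode shapes and spectral ordinates are all assumed:

```python
# Modal storey forces s_i = Gamma_i * M * phi_i * Sa_i, combined across modes
# by SRSS (illustrative stand-in for the thesis's QMC combination rule).
import numpy as np

M = np.diag([350.0, 350.0, 300.0])          # storey masses (t), assumed
phi = np.array([[0.33, 0.67, 1.00],         # mode 1 shape, normalized to roof
                [-0.80, -0.45, 1.00],       # mode 2
                [1.10, -1.20, 1.00]]).T     # mode 3  -> phi[:, i] is mode i
Sa = np.array([0.60, 0.45, 0.30])           # spectral accelerations per mode (g), assumed

ones = np.ones(3)
forces = []
for i in range(3):
    gamma = (phi[:, i] @ M @ ones) / (phi[:, i] @ M @ phi[:, i])  # participation factor
    forces.append(gamma * (M @ phi[:, i]) * Sa[i])                # modal storey forces
load_vector = np.sqrt(np.sum(np.square(forces), axis=0))          # SRSS combination
print(np.round(load_vector, 1))
```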

Relevance: 30.00%

Abstract:

The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of the pavement was compared to the performance of the same pavement structure where different kinds of asphalt concrete were used as surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials, two warm mix asphalt and a rubberized asphalt concrete, were analyzed. The First Two Chapters summarize the necessary steps aimed to satisfy the sustainable pavement design procedure. In Chapter I the problem of asphalt pavement eco-compatible design was introduced. The low environmental impact materials such as the Warm Mix Asphalt and the Rubberized Asphalt Concrete were described in detail. In addition the value of a rational asphalt pavement design method was discussed. Chapter II underlines the importance of a deep laboratory characterization based on appropriate materials selection and performance evaluation. In Chapter III, CalME is introduced trough a specific explanation of the different equipped design approaches and specifically explaining the I-R procedure. In Chapter IV, the experimental program is presented with a explanation of test laboratory devices adopted. The Fatigue and Rutting performances of the study mixes are shown respectively in Chapter V and VI. Through these laboratory test data the CalME I-R models parameters for Master Curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the asphalt pavement structures simulations with different surface layers were reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
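The incremental-recursive idea can be illustrated with a toy loop in which each time increment degrades the layer modulus through a damage variable that feeds the next increment; all model forms and parameter values below are invented placeholders, not CalME's calibrated models:

```python
# Toy incremental-recursive damage loop (illustrative placeholders throughout).
E0 = 6000.0          # intact asphalt modulus (MPa), assumed
E = E0
damage = 0.0
axles_per_inc = 1e5  # traffic per time increment, assumed

for month in range(1, 25):
    strain = 200e-6 * (E0 / E) ** 0.5        # softer layer -> higher strain (placeholder)
    dN = axles_per_inc * (strain / 200e-6) ** 4.0 / 1e7   # placeholder fatigue damage law
    damage = min(1.0, damage + dN)
    E = E0 * (1.0 - 0.8 * damage)            # damaged modulus used in the NEXT increment
    if month % 6 == 0:
        print(f"month {month:2d}: damage = {damage:.3f}, modulus = {E:7.1f} MPa")
```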

Relevance: 30.00%

Abstract:

The 3-UPU three-degrees-of-freedom fully parallel manipulator, where U and P stand for universal and prismatic pair respectively, is a very well known manipulator that can provide the platform with three degrees of freedom of pure translation, pure rotation, or mixed translation and rotation with respect to the base, according to the relative directions of the revolute pair axes (each universal pair comprises two revolute pairs with intersecting and perpendicular axes). In particular, pure translational parallel 3-UPU manipulators (3-UPU TPMs) have received great attention. Many studies have been reported in the literature on the singularities, workspace, and joint clearance influence on the platform accuracy of this manipulator. However, much work still has to be done to reveal all the features this topology can offer to the designer when different architectures, i.e. different geometries, are considered. Therefore, this dissertation focuses on this type of 3-UPU manipulator. The first part of the dissertation presents six new architectures of 3-UPU TPMs which offer interesting features to the designer. In the second part, a procedure based on a set of indexes is presented which allows the designer to select the best architecture of the 3-UPU TPMs for a given task. Four indexes are proposed, concerning the stiffness, clearance, singularity and size of the manipulator, in order to apply the procedure.
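An index-based selection procedure of this kind can be sketched as a normalized weighted scoring of candidate architectures; the index values and weights below are placeholders, not the thesis's actual metrics:

```python
# Rank candidate architectures by a weighted score of normalized indexes.
# Higher is better for stiffness; lower is better for the other three indexes.
candidates = {
    "arch_A": {"stiffness": 120.0, "clearance_err": 0.40, "singularity": 0.30, "size": 1.2},
    "arch_B": {"stiffness": 95.0,  "clearance_err": 0.25, "singularity": 0.10, "size": 1.0},
    "arch_C": {"stiffness": 140.0, "clearance_err": 0.55, "singularity": 0.45, "size": 1.5},
}
weights = {"stiffness": 0.4, "clearance_err": 0.3, "singularity": 0.2, "size": 0.1}
maximize = {"stiffness"}   # every other index is to be minimized

def score(arch):
    total = 0.0
    for idx, w in weights.items():
        vals = [c[idx] for c in candidates.values()]
        lo, hi = min(vals), max(vals)
        norm = (candidates[arch][idx] - lo) / (hi - lo)   # 0..1 across candidates
        total += w * (norm if idx in maximize else 1.0 - norm)
    return total

best = max(candidates, key=score)
print({a: round(score(a), 3) for a in candidates}, "->", best)
```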

Relevance: 30.00%

Abstract:

An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures position and dynamical state for insurance purposes. Access to this type of data allows the development of theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a certain region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. These data are in fact affected by positioning errors and are often very distant from each other (~2 km), so the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data points to roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measures is performed with a specifically developed optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
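A minimal A* routing sketch on a toy road graph is shown below; the thesis's algorithm adds domain-specific costs and candidate-road hypotheses on top of this basic scheme:

```python
# Plain A* shortest path with a straight-line (admissible) heuristic.
import heapq, math

nodes = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}     # node -> (x, y)
edges = {"A": [("B", 1.0)], "B": [("C", 1.0), ("D", 1.6)],
         "C": [("D", 1.0)], "D": []}                              # node -> [(next, length)]

def h(n, goal):
    (x1, y1), (x2, y2) = nodes[n], nodes[goal]
    return math.hypot(x2 - x1, y2 - y1)

def astar(start, goal):
    frontier = [(h(start, goal), 0.0, start, [start])]            # (f, g, node, path)
    seen = {}
    while frontier:
        f, g, n, path = heapq.heappop(frontier)
        if n == goal:
            return path, g
        if seen.get(n, math.inf) <= g:
            continue                                              # already expanded cheaper
        seen[n] = g
        for nxt, w in edges[n]:
            heapq.heappush(frontier, (g + w + h(nxt, goal), g + w, nxt, path + [nxt]))
    return None, math.inf

print(astar("A", "D"))   # -> (['A', 'B', 'D'], 2.6)
```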

Relevance: 30.00%

Abstract:

The diagnosis, grading and classification of tumours has benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types thanks to its capability to detect active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast agent concentration curves vs. time is a very simple yet operator-dependent procedure, so more objective approaches have been developed in order to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time-series data is used for tissue classification. The main issue with these schemes is that they have no direct interpretation in terms of the physiological properties of the tissues. On the other hand, model-based investigations typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression, applied to regions of interest appropriately selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, non-linear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for the segmentation and classification of breast lesions. The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques for the segmentation and classification of breast lesions.
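Pixel-wise tracer-kinetic fitting can be sketched with the standard Tofts model, Ct(t) = Ktrans * integral of Cp(tau) * exp(-(Ktrans/ve)(t - tau)) dtau, estimated by non-linear regression; the arterial input function and noise level below are assumed for illustration:

```python
# Fit the standard Tofts model to a synthetic concentration curve.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 6, 120)                       # minutes
Cp = 5.0 * t * np.exp(-t / 0.8)                  # assumed arterial input function (mM)
dt = t[1] - t[0]

def tofts(t, ktrans, ve):
    kep = ktrans / ve
    kernel = np.exp(-kep * t)
    # discrete convolution approximating the Tofts integral
    return ktrans * np.convolve(Cp, kernel)[:len(t)] * dt

rng = np.random.default_rng(3)
truth = (0.25, 0.30)                             # Ktrans (1/min), ve (-), assumed
Ct = tofts(t, *truth) + rng.normal(0, 0.02, t.size)

(ktrans, ve), _ = curve_fit(tofts, t, Ct, p0=(0.1, 0.2),
                            bounds=([1e-4, 1e-3], [2.0, 1.0]))
print(f"Ktrans = {ktrans:.3f} /min, ve = {ve:.3f} (true: {truth})")
```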

Relevance: 30.00%

Abstract:

Throughout the alpine domain, shallow landslides represent a serious geologic hazard, often causing severe damage to infrastructure, private property and natural resources and, in the most catastrophic events, threatening human lives. Landslides are a major factor of landscape evolution in mountainous and hilly regions and represent a critical issue for mountainous land management, since they cause the loss of pastoral lands. In several alpine contexts, the distribution of shallow landsliding is strictly connected to the presence and condition of vegetation on the slopes. With the aid of high-resolution satellite images, it is possible to automatically divide the mountainous territory into land cover classes, which contribute to the stability of the slopes to different degrees. The aim of this research is to combine EO (Earth Observation) land cover maps with ground-based measurements of the land cover properties. To achieve this goal, a new procedure has been developed to automatically detect grass mantle degradation patterns from satellite images. Moreover, innovative surveying techniques and instruments are tested to measure in situ the shear strength of the grass mantle and the geomechanical and geotechnical properties of these alpine soils. The distribution of shallow landsliding is then assessed with the aid of physically based models, which use the EO-based map to distribute the resistance parameters across the landscape.
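The physically based modelling step can be illustrated with the classical infinite-slope factor of safety, with a root-cohesion term standing in for the measured grass-mantle contribution; all parameter values are illustrative:

```python
# Infinite-slope factor of safety, the core of many physically based
# shallow-landslide models (illustrative parameter values throughout).
import math

def factor_of_safety(slope_deg, soil_depth, sat_ratio,
                     cohesion, root_cohesion, phi_deg,
                     gamma_soil=18.0, gamma_w=9.81):
    """FS = [c_s + c_r + (gamma - m*gamma_w) * z * cos^2(b) * tan(phi)]
            / [gamma * z * sin(b) * cos(b)]   (kPa, kN/m^3, m, degrees)"""
    b, phi = math.radians(slope_deg), math.radians(phi_deg)
    normal = (gamma_soil - sat_ratio * gamma_w) * soil_depth * math.cos(b) ** 2
    resisting = cohesion + root_cohesion + normal * math.tan(phi)
    driving = gamma_soil * soil_depth * math.sin(b) * math.cos(b)
    return resisting / driving

# Same slope with intact vs degraded grass mantle (less root cohesion):
print(round(factor_of_safety(35, 1.0, 0.8, 2.0, 5.0, 33), 2))  # intact   -> stable
print(round(factor_of_safety(35, 1.0, 0.8, 2.0, 0.5, 33), 2))  # degraded -> FS < 1
```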

Relevance: 30.00%

Abstract:

A method for automatic scaling of oblique ionograms has been introduced. This method also provides a rejection procedure for ionograms that are considered to lack sufficient information, and achieves a very good success rate. By observing the Kp index of each autoscaled ionogram, it can be noticed that the behavior of the autoscaling program does not depend on geomagnetic conditions. The comparison between the MUF values provided by the presented software and those obtained by an experienced operator indicates that the procedure developed for detecting the nose of oblique ionogram traces is sufficiently efficient, and becomes much more efficient as the quality of the ionograms improves. These results demonstrate that the program allows the real-time evaluation of the MUF values associated with a particular radio link through an oblique radio sounding. The automatic recognition of a part of the trace makes it possible to determine, for certain frequencies, the time taken by the radio wave to travel the path between the transmitter and the receiver. The reconstruction of the ionogram traces suggests the possibility of estimating the electron density between the transmitter and the receiver from an oblique ionogram. The results shown have been obtained with a ray-tracing procedure based on the integration of the eikonal equation, using an analytical ionospheric model with free parameters. This indicates the possibility of applying an adaptive model and a ray-tracing algorithm to estimate the electron density in the ionosphere between the transmitter and the receiver. An additional study has been conducted on a high-quality data set of ionospheric soundings, and another algorithm has been designed for the conversion of an oblique ionogram into a vertical one, using Martyn's theorem. This allows a further analysis of oblique soundings, through the use of the INGV Autoscala program for the automatic scaling of vertical ionograms.
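The oblique-to-vertical conversion can be sketched, in the flat-Earth approximation, by combining the secant law with Martyn's theorem; the link geometry and echo numbers below are made up for illustration:

```python
# Map one point of an oblique ionogram to the equivalent vertical ionogram:
# the secant law gives the equivalent vertical frequency, and Martyn's theorem
# assigns it the apex height of the equivalent triangular path
# (no Earth-curvature correction in this sketch).
import math

C = 3e5  # speed of light, km/s

def oblique_to_vertical(f_ob_mhz, group_delay_ms, ground_range_km):
    """Convert (f_ob, t) measured over a link of length D into (f_v, h')."""
    P = C * group_delay_ms * 1e-3              # virtual (group) path length, km
    h_virt = math.sqrt((P / 2) ** 2 - (ground_range_km / 2) ** 2)  # apex height
    cos_phi0 = h_virt / (P / 2)                # cosine of the angle of incidence
    return f_ob_mhz * cos_phi0, h_virt         # secant law: f_v = f_ob * cos(phi0)

# Example: 12 MHz echo with 3.6 ms group delay over a 1000 km link (made-up)
fv, h = oblique_to_vertical(12.0, 3.6, 1000.0)
print(f"f_v = {fv:.2f} MHz at h' = {h:.0f} km")
```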