14 results for Ultimate Strength Analysis
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The project aims to build an understanding of additive manufacturing (AM) and other Manufacturing 4.0 techniques with a view toward industrialization. First, the internal material anisotropy of elements produced with the most economically feasible FDM technique was established. The main drivers of variability in AM were identified, with a focus on achieving internal material isotropy. Subsequently, a technique for deposition parameter optimization was presented, and the procedure was further tested on other polymeric materials and composites. A replicability assessment based on 4.0 technologies was proposed, and subsequent industry findings highlighted the need for a process demonstrating how to re-engineer designs in order to obtain the best results from AM processing. The latest study applies the Industrial Design and Structure Method (IDES), together with all the knowledge previously gathered, to fully re-engineer a product with a focus on 4.0-era tools, from product feasibility studies through CAE – FEM analysis and CAM – DfAM. These results should help make AM and FDM processes a viable option, in combination with composite technologies, to achieve a reliable, cost-effective manufacturing method suitable for mass-market and industrial applications.
Abstract:
Clusters have increasingly become an essential part of policy discourses at all levels (EU, national, regional) dealing with regional development, competitiveness, innovation, entrepreneurship and SMEs. These impressive efforts to promote the concept of clusters in the policy-making arena have been accompanied by much less academic and scientific research investigating the actual economic performance of firms in clusters and the design and execution of cluster policies, and by few attempts to go beyond singular case studies toward a more methodologically integrated and comparative approach to the study of clusters and their real-world impact. The theoretical background is far from consolidated: there is a variety of methodologies and approaches for studying and interpreting this phenomenon, and at the same time little comparability among studies of actual cluster performance. The conceptual framework of clustering suggests that clusters affect performance, but theory makes little prediction as to the ultimate distribution of the value created by clusters. This thesis takes the case of Eastern European countries for two reasons. One is that clusters, as coopetitive environments, are a new phenomenon there, since the previous centrally planned system did not allow for such types of firm organization. The other is that, as new EU member states, these countries have been subject to the increased popularization of the cluster policy approach by the European Commission, especially in the framework of the National Reform Programmes related to the Lisbon objectives. The originality of the work lies in the fact that, starting from an overview of theoretical contributions on clustering, it offers a comparative empirical study of clusters in transition countries. There have been very few examples in the literature that attempt to examine cluster performance in a comparative cross-country perspective. The thesis adds to this an analysis of cluster policies and their implementation, or lack thereof, as a way to analyse how the cluster concept has been introduced to transition economies. Our findings show that the implementation of cluster policies does vary across countries, with some countries embracing it more than others. The specific modes of implementation, however, are very similar, based mostly on soft measures such as funding for cluster initiatives, usually directed towards the creation of cluster management structures or cluster facilitators. They are essentially founded on the common assumption that the added value of clusters lies in the creation of linkages among firms, human capital, skills and knowledge at the local level, most often understood as the regional level. Oftentimes geographical proximity is not a necessary element in the application process, and cluster applications are very similar to network memberships. Cluster mapping is rarely a factor in the selection of cluster initiatives for funding, and the related question of critical mass and expected outcomes is not considered. In fact, monitoring and evaluation are not elements of the cluster policy cycle that have received much attention. Bulgaria and the Czech Republic are the countries that have implemented cluster policies most decisively, Hungary and Poland have made significant efforts, while Slovakia and Romania have used cluster initiatives only sporadically and not systematically.
When examining whether firms located within regional clusters in fact perform better and are more efficient than similar firms outside clusters, we do find positive results across countries and across sectors. The only country where location in a cluster has a negative impact is the Czech Republic.
Abstract:
This thesis is part of a larger study on the characterization of the mechanical and histomorphometric properties of bone. The main objects of this study were bone tissue properties and bone resistance to mechanical loads. Moreover, the knowledge about the equipment selected to carry out the analyses, micro-computed tomography (micro-CT), was improved, with particular attention to the reliability over time of the measuring instrument. In order to understand the main characteristics of bone mechanical properties, a study of the skeleton, the bones of which it is composed, and the biological principles that drive their formation and remodelling was necessary. This study led to the definition of two macro-classes describing the main components responsible for the resistance of bone to fracture: bone quantity and bone quality. The measurement of bone quantity is the current clinical standard (so-called bone densitometry), and research studies have amply demonstrated that the amount of tissue is correlated with its elastic and fracture properties. However, models presented in the literature that include only information on the quantity of tissue have often proved limited in describing mechanical behaviour. Recent investigations have underlined that bone structure and tissue mineralization also play an important role in the mechanical characterization of bone tissue. For this reason, this thesis mainly studied the class defined as bone quality, splitting it into the two sub-classes of bone structure and tissue quality. A study on bone structure was designed to identify which structural parameters, among the many presented in the literature, could be integrated with the information on quantity in order to better describe the mechanical properties of bone. In this way it was also possible to analyse the interaction between structure and function. It has long been known that bone tissue is capable of remodelling and changing its internal structure according to loads, but the dynamics of these changes are still being analysed. This part of the study aimed to identify the parameters that could quantify the structural changes of bone tissue during the development of a given disease: osteoarthritis. A study on tissue quality would have to be divided into different classes, which would require a scale of analysis not suitable for micro-CT. For this reason the study focused only on the mineralization of the tissue, highlighting the difference between bone density and tissue density, in a context where there is still an ongoing scientific debate.
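As a rough illustration of the kind of quantity-related parameter discussed in the abstract above, the sketch below computes the bone volume fraction (BV/TV) from a thresholded micro-CT stack. The global threshold, the synthetic data and the choice of this particular parameter are assumptions for illustration, not the protocol used in the thesis.

```python
import numpy as np

def bone_volume_fraction(ct_stack, threshold):
    """Bone volume fraction (BV/TV) from a micro-CT image stack: fraction of
    voxels above a global bone threshold inside the volume of interest.
    Illustrative only; threshold choice and segmentation are assumptions."""
    bone_mask = ct_stack > threshold          # binarize: bone vs. marrow/background
    return bone_mask.sum() / bone_mask.size

# Example with a synthetic 3-D volume of grey values (hypothetical data)
rng = np.random.default_rng(0)
volume = rng.normal(100.0, 25.0, size=(64, 64, 64))
bvtv = bone_volume_fraction(volume, threshold=130.0)
```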
Abstract:
The Székesfehérvár Ruin Garden is a unique assemblage of monuments belonging to the cultural heritage of Hungary, owing to its important role in the Middle Ages as the coronation and burial church of the kings of the Hungarian Christian Kingdom. It has been designated a “National Monument” and, as a consequence, its protection in the present and future is required. Moreover, it was reconstructed and expanded several times throughout Hungarian history. A quick overview of the current state of the monument reveals the presence of several lithotypes among the remaining building and decorative stones. Therefore, research related to the materials is crucial not only for the conservation of this specific monument but also for other historic structures in Central Europe. The current research is divided into three main parts: i) description of the lithologies and their provenance, ii) testing of the physical properties of the historic material and iii) durability tests on analogous stones obtained from active quarries. The survey of the National Monument of Székesfehérvár focuses on the historical importance and the architecture of the monument, the different construction periods, the identification of the different building stones and their distribution in the remaining parts of the monument, and also includes provenance analyses. The second part consisted of in situ and laboratory testing of the physical properties of the historic material. In the final phase, samples were taken from local quarries with physical and mineralogical characteristics similar to those of the stones used in the monument. The three studied lithologies are a fine oolitic limestone, a coarse oolitic limestone and a red compact limestone. These stones were used for rock-mechanical and durability tests under laboratory conditions. The following techniques were used: a) in situ: Schmidt hammer values, moisture content measurements, DRMS, mapping (construction ages, lithotypes, weathering forms); b) laboratory: petrographic analysis, XRD, determination of real density by helium pycnometer and bulk density by mercury pycnometer, pore size distribution by mercury intrusion porosimetry and by nitrogen adsorption, water absorption, determination of open porosity, DRMS, frost resistance, ultrasonic pulse velocity test, uniaxial compressive strength test and dynamic modulus of elasticity. The results show that initial uniaxial compressive strength is not necessarily a clear indicator of stone durability. Bedding and other lithological heterogeneities can influence the strength and durability of individual specimens. In addition, long-term behaviour is influenced by exposure conditions, fabric and, especially, the pore size distribution of each sample. Therefore, a statistical evaluation of the results is highly recommended, and they should be evaluated in combination with other investigations of the internal structure and micro-scale heterogeneities of the material, such as petrographic observation, ultrasonic pulse velocity and porosimetry. Laboratory tests used to estimate the durability of natural stone may give good guidance on its short-term performance, but they should not be taken as an ultimate indication of the long-term behaviour of the stone. The interdisciplinary study of the results confirms that stones in the monument show deterioration in terms of mineralogy, fabric and physical properties in comparison with quarried stones. Moreover, stone testing demonstrates compatibility between the quarried and the historical stones.
Good correlation is observed between the non-destructive techniques and the laboratory test results, which allows sampling to be minimized while still assessing the condition of the materials. In conclusion, this research can contribute to the diagnostic knowledge needed for further studies evaluating the effect of recent and future protective measures.
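Since the abstract above lists both the ultrasonic pulse velocity test and the dynamic modulus of elasticity, the following sketch shows the standard isotropic-elasticity relation often used to link them; the Poisson's ratio and the example values are assumptions, not results from the thesis.

```python
# Dynamic modulus of elasticity from P-wave velocity and bulk density:
#   E_d = rho * Vp^2 * (1 + nu) * (1 - 2*nu) / (1 - nu)
def dynamic_modulus_gpa(vp_m_s, bulk_density_kg_m3, poisson=0.25):
    """Dynamic elastic modulus in GPa, assuming an isotropic elastic medium."""
    e_d = bulk_density_kg_m3 * vp_m_s**2 * (1 + poisson) * (1 - 2 * poisson) / (1 - poisson)
    return e_d / 1e9

# Example: a compact limestone with Vp = 5000 m/s and bulk density = 2600 kg/m^3
print(dynamic_modulus_gpa(5000.0, 2600.0))   # ~54 GPa
```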
Abstract:
The research for this PhD project consisted in the application of the receiver function (RF) analysis technique to different data sets of teleseismic events recorded at temporary and permanent stations located in three distinct study regions: the Colli Albani area, the Northern Apennines and the Southern Apennines. We derived velocity models to interpret the structures in these regions, which have very different geologic and tectonic characteristics and therefore offer interesting case studies. In the Colli Albani area, some of the features evidenced in the RFs are shared by all the analyzed stations: the Moho is almost flat and located at about 23 km depth, and the presence of a relatively shallow limestone layer is a stable feature; conversely, other features vary from station to station, indicating local complexities. Three seismic stations, close to the central part of the former volcanic edifice, display relevant anisotropic signatures with symmetry axes consistent with the emplacement of the magmatic chamber. Two further anisotropic layers are present at greater depth, in the lower crust and the upper mantle respectively, with symmetry-axis directions related to the evolution of the volcanic complex. In the Northern Apennines we defined the isotropic structure of the area, finding the depths of the Tyrrhenian (almost 25 km and flat) and Adriatic (40 km and dipping underneath the Apennine crests) Mohos. We determined a zone in which the two Mohos overlap, and identified an anisotropic body in between, involved in the subduction and descending with the Adriatic Moho. We interpreted the downgoing anisotropic layer as generated by post-subduction delamination of the top-slab layer, probably made of metamorphosed crustal rocks caught in the subduction channel and buoyantly rising toward the surface. In the Southern Apennines, we found the Moho depth for 16 seismic stations and highlighted the presence of an anisotropic layer underneath each station, at about 15-20 km depth below the whole study area. The Moho displays a dome-like geometry: it is shallow (29 km) in the central part of the study area, whereas it deepens peripherally (down to 45 km); the symmetry axes of the anisotropic layer, interpreted as a layer separating the upper and the lower crust, show a Moho-related pattern, indicated by the foliation of the layer, which is parallel to the Moho trend. Moreover, following the exceptional seismic event that occurred on April 6th near the town of L’Aquila, we determined the Vs model for two stations located near the epicenter. An extremely high-velocity body is found underneath the AQU station at 4-10 km depth, reaching Vs of about 4 km/s, while this body is absent underneath the FAGN station. We compared the presence of this body with other recent works and found an anti-correlation between the high-Vs body, the maximum-slip patches and the earthquake distribution. The nature of this body remains speculative, since such high velocities are consistent with deep crust or upper mantle, but it can be interpreted as a high-strength barrier, of which high Vs is a typical signature.
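For readers unfamiliar with the receiver function technique mentioned above, the sketch below shows one common way to compute an RF: frequency-domain water-level deconvolution of the vertical component from the radial component, followed by a Gaussian low-pass. It is a simplified illustration under assumed parameter values, not the processing chain used in the thesis.

```python
import numpy as np

def receiver_function(radial, vertical, dt, water_level=0.01, gauss_a=2.5):
    """Frequency-domain water-level deconvolution of the vertical from the
    radial component, with Gaussian low-pass; a simplified RF estimator."""
    n = len(radial)
    R = np.fft.rfft(radial, n)
    Z = np.fft.rfft(vertical, n)
    power = (Z * np.conj(Z)).real                          # |Z(w)|^2
    denom = np.maximum(power, water_level * power.max())   # water-level stabilization
    freq = np.fft.rfftfreq(n, dt)
    gauss = np.exp(-(2.0 * np.pi * freq) ** 2 / (4.0 * gauss_a ** 2))
    rf_spectrum = gauss * R * np.conj(Z) / denom
    return np.fft.irfft(rf_spectrum, n)
```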
Abstract:
Hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a very relevant issue, due to the severe consequences, in terms of human and economic losses, that flooding or water in general may cause. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damage can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with residual uncertainty about what will actually happen. This type of uncertainty is what this thesis discusses and analyzes. In operational problems, the ultimate aim of forecasting systems is not to reproduce the river behaviour; that is only a means of reducing the uncertainty associated with what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since the literature is often confused on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of intervention strategy be based on evaluating the model prediction by its ability to represent reality, or on evaluating what will actually happen on the basis of the information given by the model forecast? Once the previous idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to that required to implement the intervention strategy, and it is also necessary to assess the probability distribution of the flooding time.
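As a minimal illustration of linking the flooding probability to a time horizon, the sketch below estimates, from an ensemble of predicted water-level hydrographs, the probability of exceeding an alert threshold within the forecast horizon and the empirical distribution of the first exceedance time. The names, array layout and threshold are illustrative assumptions, not the decision-support tool developed in the thesis.

```python
import numpy as np

def flooding_probability(ensemble_levels, threshold):
    """Probability that the water level exceeds a given threshold at any time
    within the forecast horizon, estimated from an ensemble of predicted
    hydrographs (array of shape: members x time steps)."""
    peak = ensemble_levels.max(axis=1)            # peak level per ensemble member
    return np.mean(peak > threshold)

def flooding_time_distribution(ensemble_levels, threshold, dt_hours=1.0):
    """Empirical distribution of the first exceedance time across members
    (members that never exceed the threshold are excluded)."""
    exceeds = ensemble_levels > threshold
    first = np.where(exceeds.any(axis=1), exceeds.argmax(axis=1) * dt_hours, np.nan)
    return first[~np.isnan(first)]
```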
Abstract:
The purpose of this thesis is to investigate the strength and structure of the magnetized medium surrounding radio galaxies via observations of the Faraday effect. This study is based on an analysis of the polarization properties of radio galaxies selected to have a range of morphologies (elongated tails, or lobes with small axial ratios) and to be located in a variety of environments (from rich cluster cores to small groups). The targets include famous objects like M84 and M87. A key aspect of this work is the combination of accurate radio imaging with high-quality X-ray data for the gas surrounding the sources. Although the focus of this thesis is primarily observational, I developed analytical models and performed two- and three-dimensional numerical simulations of magnetic fields. The steps of the thesis are: (a) to analyze new and archival observations of Faraday rotation measure (RM) across radio galaxies and (b) to interpret these and existing RM images using sophisticated two- and three-dimensional Monte Carlo simulations. The approach has been to select a few bright, very extended and highly polarized radio galaxies. This is essential to obtain high signal-to-noise in polarization over large enough areas to allow the computation of spatial statistics such as the structure function (and hence the power spectrum) of the rotation measure, which requires a large number of independent measurements. New and archival Very Large Array observations of the target sources have been analyzed in combination with high-quality X-ray data from the Chandra, XMM-Newton and ROSAT satellites. The work has been carried out by making use of: 1) analytical predictions of the RM structure functions to quantify the RM statistics and to constrain the power spectra of the RM and magnetic field; 2) two-dimensional Monte Carlo simulations to address the effect of incomplete sampling of the RM distribution and so determine errors for the power spectra; 3) methods to combine measurements of RM and depolarization in order to constrain the magnetic-field power spectrum on small scales; 4) three-dimensional models of the group/cluster environments, including different magnetic-field power spectra and gas density distributions. This thesis has shown that the magnetized medium surrounding radio galaxies is more complicated than was apparent from earlier work. Three distinct types of magnetic-field structure are identified: an isotropic component with large-scale fluctuations, plausibly associated with the intergalactic medium not affected by the presence of a radio source; a well-ordered field draped around the front ends of the radio lobes; and a field with small-scale fluctuations in rims of compressed gas surrounding the inner lobes, perhaps associated with a mixing layer.
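As an illustration of the spatial statistics mentioned above, the sketch below estimates the second-order structure function of a rotation-measure image, S(r) = <[RM(x) - RM(x + r)]^2>, by random pair sampling. The binning scheme and parameters are assumptions, not the analysis pipeline used in the thesis.

```python
import numpy as np

def rm_structure_function(rm_map, bin_edges, pixel_scale=1.0, n_pairs=200_000, seed=0):
    """Second-order structure function of a 2-D rotation-measure image,
    estimated by random pair sampling; NaNs mark blanked (unpolarized) pixels."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(~np.isnan(rm_map))           # valid pixel coordinates
    idx = rng.integers(0, len(xs), size=(n_pairs, 2))
    dx = (xs[idx[:, 0]] - xs[idx[:, 1]]) * pixel_scale
    dy = (ys[idx[:, 0]] - ys[idx[:, 1]]) * pixel_scale
    r = np.hypot(dx, dy)                             # pair separations
    d2 = (rm_map[ys[idx[:, 0]], xs[idx[:, 0]]]
          - rm_map[ys[idx[:, 1]], xs[idx[:, 1]]]) ** 2
    which = np.digitize(r, bin_edges)                # separation bin per pair
    return np.array([d2[which == i].mean() if np.any(which == i) else np.nan
                     for i in range(1, len(bin_edges))])
```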
Abstract:
The evaluation of the structural performance of existing concrete buildings, built according to standards and with materials quite different from those available today, requires procedures and methods able to compensate for the lack of data about mechanical material properties and reinforcement detailing. To this end, detailed inspections and tests on materials are required, in particular tests on drilled cores; on the other hand, non-destructive testing (NDT) cannot be used as the only means of obtaining structural information, but it can be used in conjunction with destructive testing (DT) through a representative correlation between DT and NDT results. The aim of this study is to verify the accuracy of some correlation formulas available in the literature between the measured parameters, i.e. rebound index, ultrasonic pulse velocity and compressive strength (SonReb method). To this end, a large number of DT and NDT tests were performed on several school buildings located in Cesena (Italy). The above relationships were assessed on site by correlating NDT results with the strength of cores drilled in adjacent locations. Concrete compressive strength assessed by means of NDT methods and evaluated with correlation formulas has the advantage that it can be implemented and used in future applications much more simply than other methods, even if its accuracy is strictly limited to the analysis of concretes with the same characteristics as those used for calibration. This limitation warranted a search for a different evaluation method for the non-destructive parameters obtained on site. To this aim, a methodology of neural identification of compressive strength is presented. Artificial Neural Networks (ANNs) suitable for the specific analysis were chosen taking into account the developments presented in the literature in this field. The networks were trained and tested in order to obtain a more reliable strength-identification methodology.
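As an illustration of the SonReb-type correlations discussed above, the sketch below fits the power-law form fc = a·R^b·V^c, commonly used in the literature, by linear least squares on log-transformed data. The data values are hypothetical, and this functional form is not necessarily one of the specific formulas assessed in the thesis.

```python
import numpy as np

# Hypothetical calibration data: R = rebound index, V = ultrasonic pulse
# velocity (m/s), fc = core compressive strength (MPa).
R = np.array([32.0, 36.0, 40.0, 44.0, 48.0])
V = np.array([3800.0, 4000.0, 4200.0, 4400.0, 4600.0])
fc = np.array([18.0, 22.5, 27.0, 32.0, 38.0])

# Linearize ln(fc) = ln(a) + b*ln(R) + c*ln(V) and solve by least squares.
X = np.column_stack([np.ones_like(R), np.log(R), np.log(V)])
coef, *_ = np.linalg.lstsq(X, np.log(fc), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]

fc_pred = a * R**b * V**c   # strength estimated from the fitted correlation
```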
Abstract:
Nanotechnologies are rapidly expanding because of the opportunities that the new materials offer in many areas, such as the manufacturing industry, food production, processing and preservation, and the pharmaceutical and cosmetic industries. The size distribution of nanoparticles determines their properties and is a fundamental parameter that needs to be monitored from small-scale synthesis up to bulk production and quality control of nanotech products on the market. A consequence of the increasing number of applications of nanomaterials is that EU regulatory authorities are introducing an obligation for companies that make use of nanomaterials to acquire analytical platforms for assessing the size parameters of the nanomaterials. In this work, Asymmetrical Flow Field-Flow Fractionation (AF4) and Hollow Fiber Flow Field-Flow Fractionation (HF5), hyphenated with Multiangle Light Scattering (MALS), are presented as tools for a deep functional characterization of nanoparticles. In particular, the applicability of AF4-MALS for the characterization of liposomes in a wide series of media is demonstrated. The technique is then used to explore the functional features of a liposomal drug vector in terms of its biological and physical interaction with blood serum components: a comprehensive approach to understanding the behavior of lipid vesicles in terms of drug release and fusion/interaction with other biological species is described, together with the weaknesses and strengths of the method. Afterwards, the size characterization, size stability, and conjugation of azidothymidine drug molecules with a new generation of metastable drug vectors, the Metal-Organic Frameworks, are discussed. Lastly, the applicability of HF5-ICP-MS for the rapid screening of samples of relevant nanorisk is shown: rather than a deep and comprehensive characterization, a quick and smart methodology is presented that within a few steps provides qualitative information on the content of metallic nanoparticles in tattoo ink samples.
Abstract:
The increasingly strict regulations on greenhouse gas emissions make fuel economy a pressing factor for automotive manufacturers. Lightweighting and engine downsizing are two strategies pursued to achieve this target. In this context, materials play a key role, since the thermo-mechanical loads they can withstand limit engine efficiency and component weight. The piston is one of the most stressed engine components and is traditionally made of Al alloys, whose weakness is the difficulty of maintaining adequate mechanical properties at high temperature due to overaging and softening. The enhancement of the strength-to-weight ratio of Al alloys at high temperature was investigated through two approaches: increasing strength at high temperature or reducing the alloy density. Several conventional and high-performance Al-Si and Al-Cu alloys were characterized from a microstructural and mechanical point of view, investigating the effects of chemical composition, addition of transition elements and heat-treatment optimization in the temperature range specific to piston operation. Among the Al-Cu alloys, the research outlines the potential of two innovative Al-Cu-Li(-Ag) alloys, typically adopted for structural aerospace components. Moreover, due to the increased probability of abnormal combustion in high-performance spark-ignition engines, the second part of the dissertation deals with the study of knock damage on Al pistons. Thanks to the cooperation with Ferrari S.p.A. and the Fluid Machinery Research Group - Unibo, several bench tests were carried out under controlled knocking conditions. Knock damage mechanisms were investigated through failure analysis techniques, from visual analysis up to detailed SEM investigations. These activities made it possible to relate piston knock damage to engine parameters, with the final aim of developing an on-board knock controller able to increase engine efficiency without compromising engine functionality. Finally, attempts were made to quantify the knock-induced damage, in order to provide a numerical relation with engine working conditions.
Abstract:
The primary aim was to evaluate the effect of 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC) on endogenous enzymatic activity within radicular dentin and on the push-out bond strength of adhesively luted fiber posts, at baseline and after artificial aging. Additionally, the effect of different cementation strategies on endogenous enzymatic activity and fiber post retention was evaluated. The experiment was carried out on extracted human teeth, following endodontic treatment and fiber post cementation. Three cementation strategies were tested: resin cement in combination with an etch-and-rinse (EAR) adhesive system, with a self-etch (SE) system, and a self-adhesive (SA) cement. Each of these strategies had a control group and an experimental (EDC) group in which the root canal was irrigated with 0.3 M EDC for 1 minute. The push-out bond strength test was performed 24 h after cementation and after 40,000 thermocycles. In order to investigate the effect of EDC and of the different cementation strategies, in situ zymography analyses of the resin-dentin interfaces were conducted. Statistical analyses were performed with the software Stata 12.0 (Stata Corp, College Station, Texas, USA) and significance was set at p<0.05. The results of the statistical analysis (ANOVA) showed that the variables “EDC”, “root region” and “artificial aging” significantly influenced fiber post retention in the root canal (p<0.05). The highest values were observed in the coronal third. The mean values observed after artificial aging were lower than at baseline; however, EDC was effective in preserving bond strength. The level of enzymatic activity varied between the groups, and EDC had a beneficial effect in silencing enzymatic activity. Within the limitations of the study, it was concluded that the choice of cementation strategy did not influence post retention, while EDC contributed to the preservation of bond strength after artificial aging and reduced enzymatic activity within radicular dentin. In vivo trials are necessary to confirm the results of this in vitro study.
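For reference, push-out bond strength is commonly computed as the failure load divided by the bonded (lateral) area of the post segment. The sketch below uses a cylindrical approximation of that area; the numbers and the cylindrical (rather than truncated-cone) assumption are illustrative, not the exact procedure of this study.

```python
import math

def push_out_bond_strength(max_load_newton, post_diameter_mm, slice_thickness_mm):
    """Push-out bond strength in MPa: failure load divided by the bonded lateral
    area of the post segment, here approximated as a cylinder (pi * d * h).
    Tapered posts are often treated with a truncated-cone area instead."""
    bonded_area_mm2 = math.pi * post_diameter_mm * slice_thickness_mm
    return max_load_newton / bonded_area_mm2     # N/mm^2 == MPa

# Example (hypothetical): 60 N failure load, 1.5 mm post diameter, 1.0 mm slice
sigma = push_out_bond_strength(60.0, 1.5, 1.0)   # ~12.7 MPa
```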
Abstract:
Existing bridges built in the last 50 years face challenges due to conditions far different from those envisaged when they were designed, owing to increased loads, ageing of materials, and poor maintenance. For post-tensioned bridges, the need has emerged for reliable engineering tools to evaluate their capacity in the case of steel corrosion due to a lack of mortar injection, which can lead to sudden brittle collapses and highlights the need for proper maintenance and monitoring. This thesis proposes a peak-strength model for corroded strands, introducing a “group coefficient” that aims to account for corrosion variability among the wires constituting the strands. The application of the proposed model in a deterministic approach leads to strength curves for corroded strands, which are useful engineering tools for estimating their maximum strength considering both the geometry of the corrosion and the steel material parameters. Together with the proposed ultimate displacement curves, constitutive laws of the steel material reduced by the effects of corrosion can be obtained. The effects of corroded strands on post-tensioned beams can then be evaluated through the reduced bending moment-curvature diagram accounting for these reduced stress-strain relationships. The application of the model in a probabilistic approach allows the estimation of peak-strength probability functions and consequent design-oriented safety factors to account for corrosion effects in safety assessment verifications. Both approaches include two procedures, based on the level of knowledge of the corrosion in the strands. Alongside this main research line, this thesis also presents a study of a seismic upgrading intervention on a case-study bridge through HDRB isolators, providing a simplified procedure for identifying the correct device. The study also investigates the effects of the variability of the shear modulus of the rubber of the HDRB isolators on the structural response of the isolated bridge.
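As a rough illustration of the wire-level reasoning behind a strand-strength model, the sketch below sums the residual capacities of the seven wires of a strand, each reduced by a cross-section loss, and scales the total by a group coefficient. The functional form, the coefficient value and the example data are assumptions for illustration; they are not the peak-strength model proposed in the thesis.

```python
import numpy as np

def corroded_strand_peak_strength(wire_areas_mm2, section_loss, f_u_mpa, group_coeff=1.0):
    """Rough estimate (in N) of the residual peak strength of a 7-wire strand:
    sum of corroded wire capacities scaled by a group coefficient. Illustrative
    placeholder only; not the model developed in the thesis."""
    residual_areas = np.asarray(wire_areas_mm2) * (1.0 - np.asarray(section_loss))
    return group_coeff * np.sum(residual_areas * f_u_mpa)

# Example: 7-wire strand of ~140 mm^2 total area, uneven (hypothetical) wire losses
areas = np.full(7, 140.0 / 7)                                  # mm^2 per wire
loss = np.array([0.00, 0.05, 0.10, 0.20, 0.00, 0.30, 0.15])    # fraction of area lost
P_u = corroded_strand_peak_strength(areas, loss, f_u_mpa=1860.0, group_coeff=0.95)
```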
Assessing brain connectivity through electroencephalographic signal processing and modeling analysis
Abstract:
Brain functioning relies on the interaction of several neural populations connected through complex connectivity networks, enabling the transmission and integration of information. Recent advances in neuroimaging techniques, such as electroencephalography (EEG), have deepened our understanding of the reciprocal roles played by brain regions during cognitive processes. The underlying idea of this PhD research is that EEG-related functional connectivity (FC) changes in the brain may incorporate important neuromarkers of behavior and cognition, as well as brain disorders, even at subclinical levels. However, a complete understanding of the reliability of the wide range of existing connectivity estimation techniques is still lacking. The first part of this work addresses this limitation by employing Neural Mass Models (NMMs), which simulate EEG activity and offer a unique tool to study interconnected networks of brain regions in controlled conditions. NMMs were employed to test FC estimators like Transfer Entropy and Granger Causality in linear and nonlinear conditions. Results revealed that connectivity estimates reflect information transmission between brain regions, a quantity that can be significantly different from the connectivity strength, and that Granger causality outperforms the other estimators. A second objective of this thesis was to assess brain connectivity and network changes on EEG data reconstructed at the cortical level. Functional brain connectivity has been estimated through Granger Causality, in both temporal and spectral domains, with the following goals: a) detect task-dependent functional connectivity network changes, focusing on internal-external attention competition and fear conditioning and reversal; b) identify resting-state network alterations in a subclinical population with high autistic traits. Connectivity-based neuromarkers, compared to the canonical EEG analysis, can provide deeper insights into brain mechanisms and may drive future diagnostic methods and therapeutic interventions. However, further methodological studies are required to fully understand the accuracy and information captured by FC estimates, especially concerning nonlinear phenomena.
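As a minimal illustration of the Granger-causality idea tested in this work, the sketch below computes time-domain Granger causality from one signal to another as the log ratio of the residual variances of a restricted and a full autoregressive model. The model order and implementation details are assumptions; the thesis uses more complete temporal and spectral estimators on source-reconstructed EEG.

```python
import numpy as np

def granger_causality(x, y, order=5):
    """Granger causality from x to y: log(var_restricted / var_full), where the
    restricted AR model uses only the past of y and the full model also includes
    the past of x. A value > 0 suggests x helps predict y."""
    n = len(y)
    Y = y[order:]
    lags_y = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    lags_x = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    # Restricted model: past of y only
    beta_r, *_ = np.linalg.lstsq(lags_y, Y, rcond=None)
    var_r = np.var(Y - lags_y @ beta_r)
    # Full model: past of y and past of x
    full = np.column_stack([lags_y, lags_x])
    beta_f, *_ = np.linalg.lstsq(full, Y, rcond=None)
    var_f = np.var(Y - full @ beta_f)
    return np.log(var_r / var_f)
```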
Abstract:
Distributed argumentation technology is a computational approach that incorporates argumentation reasoning mechanisms within multi-agent systems. For the formal foundations of distributed argumentation technology, in this thesis we conduct a principle-based analysis of structured argumentation as well as of abstract multi-agent and abstract bipolar argumentation. The results of this principle-based analysis provide an overview of, and guidelines for, further applications of the theories. Moreover, in this thesis we explore distributed argumentation technology using distributed ledgers. We envision an Intelligent Human-input-based Blockchain Oracle (IHiBO), an artificial intelligence tool for storing argumentation reasoning. We propose a decentralized and secure architecture for conducting decision-making, addressing key concerns of trust, transparency, and immutability. We model fund management with agent argumentation in IHiBO and analyze its compliance with European fund management legal frameworks. We illustrate how bipolar argumentation balances pros and cons in legal reasoning in a legal divorce case, and how the strength of arguments in natural language can be represented in structured arguments. Finally, we discuss how distributed argumentation technology can be used to advance risk management, regulatory compliance of distributed ledgers for financial securities, and dialogue techniques.
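As a small, self-contained example of the abstract argumentation machinery underlying this work, the sketch below computes the grounded extension of a Dung-style argumentation framework as the least fixed point of the characteristic function. It illustrates standard semantics only; it is not the multi-agent, bipolar or structured frameworks analyzed in the thesis.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework (arguments,
    attacks), computed as the least fixed point of the characteristic function."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    extension = set()
    changed = True
    while changed:
        # An argument is acceptable w.r.t. the current extension if every one of
        # its attackers is itself attacked by some member of the extension.
        acceptable = {a for a in arguments
                      if all(any((d, b) in attacks for d in extension)
                             for b in attackers[a])}
        changed = acceptable != extension
        extension = acceptable
    return extension

# Example: a attacks b, b attacks c  ->  grounded extension is {a, c}
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```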