7 results for new methods

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

70.00%

Publisher:

Abstract:

The increasing attention to environmental issues in recent times encourages us to find new methods for producing energy from renewable sources, and to improve existing ones by increasing their energy yield. Most waste and agricultural residues, with a high content of lignin and non-hydrolysable polymers, cannot be effectively transformed into biofuels with existing technology. The purpose of this study was to develop a new thermochemical/biological process (named Py-AD) for the valorization of scarcely biodegradable substances. A complete continuous prototype was designed, built and run for 1 year. It consists of a slow pyrolysis system coupled with two sequential digesters and was shown to produce a clean pyrobiogas (a biogas with a significant amount of C2-C3 hydrocarbons and residual CO/H2), biochar and bio-oil. Py-AD yielded 31.7% w/w biochar, 32.5% w/w oil and 24.8% w/w pyrobiogas. The oil condensate obtained was fractionated into its aqueous and organic fractions (87% and 13%, respectively). Subsequently, the anaerobic digestion of the aqueous fraction was tested in a UASB reactor for 180 days at increasing organic loading rates (OLR). The maximum convertible concentration without instability phenomena and with complete degradation of the pyrogenic chemicals was 1.25 gCOD L_digester^-1 d^-1. The final biomethane yield was equal to 40% of the theoretical yield, with a noticeable additional production of volatile fatty acids equal to 20%. The final results confirm that anaerobic digestion can be used as a tool for cleaning slow pyrolysis products (both the gas and the condensable fraction) and for obtaining a relatively clean pyrobiogas that could be used directly in an internal combustion engine.
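
As a rough illustration of the reported product distribution, the sketch below computes the product streams per kilogram of feedstock from the yields quoted in the abstract. It is illustrative only; the 1 kg feedstock basis and the treatment of the remainder as unaccounted losses are assumptions of this sketch, not figures from the thesis.

```python
# Illustrative mass balance for the Py-AD yields quoted above.
# Assumption (not from the thesis): 1 kg of dry feedstock; the
# remainder to 100% is treated as unaccounted losses.

feedstock_kg = 1.0
yields = {"biochar": 0.317, "bio-oil": 0.325, "pyrobiogas": 0.248}  # w/w

products = {name: frac * feedstock_kg for name, frac in yields.items()}
products["unaccounted"] = feedstock_kg - sum(products.values())

# The oil condensate splits into aqueous (87%) and organic (13%) fractions.
aqueous_kg = 0.87 * products["bio-oil"]
organic_kg = 0.13 * products["bio-oil"]

for name, kg in products.items():
    print(f"{name}: {kg:.3f} kg")
print(f"aqueous fraction of oil: {aqueous_kg:.3f} kg")
print(f"organic fraction of oil: {organic_kg:.3f} kg")
```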

Relevance:

60.00%

Publisher:

Abstract:

The interpreting profession is currently changing: migration flows, the economic crisis and the fast development of ICTs have brought unexpected changes to our societies and to traditional interpreting services worldwide. Remote interpreting (RI), which entails new methods such as videoconference interpreting and telephone interpreting (TI), has greatly developed and now sees interpreters working remotely, connected to service users via videoconference setups or telephone calls. This dissertation aims to study and analyse the relevant aspects of interpreter-mediated telephone calls, describing the consequences for interpreters in this new working field and defining the new strategies and techniques interpreters must develop in order to adjust to the new working context. To these ends, the objectives of this dissertation are the following: to describe the settings in which RI is mostly used, to study the most prominent consequences for interpreters, and to analyse real interpreter-mediated conversations. The dissertation deals with issues studied by the Shift project, a European project which aims at creating teaching materials for remote interpreting; the project started in 2015, and the University of Bologna, in particular the DIT - Department of Interpreting and Translation, is the coordinating unit and promoting partner. This dissertation is divided into five chapters. Chapter 1 contains an outline of the major research related to RI and videoconference interpreting, as well as a description of its main settings: healthcare, law, business economics and institutional settings. Chapter 2 focuses on the physiological and psychological implications for interpreters working in RI. The concepts of absence, presence and remoteness are discussed; some opinions of professional interpreters and legal practitioners (LPs) concerning remote interpreting are offered as well. In Chapter 3, telephone interpreting is presented; basic concepts of conversation analysis and prominent traits of interpreter-mediated calls are also explored. Chapter 4 presents the materials and methodology used for the analysis of the data. The results, discussed in Chapter 5, show that telephone interpreting may be suitable for some specific contexts; however, it is clear that interpreters must receive appropriate training before working in any form of RI. The dissertation finally offers suggestions for the implementation of training in RI for future interpreting students.

Relevance:

60.00%

Publisher:

Abstract:

Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the difference in the time at which the signal generated by a damage event arrives at different sensors is essential for performing localization, which makes the time of arrival (ToA) an important piece of information to retrieve from the AE signal. Generally, the ToA is determined using statistical methods such as the Akaike Information Criterion (AIC), which is particularly prone to errors in the presence of noise. Given that the structures of interest often operate in harsh environments, a way to accurately estimate the arrival time in such noisy scenarios is of particular interest. In this work, two new methods based on Machine Learning are presented to estimate the arrival times of AE signals. Inspired by great results in the field, two Deep Learning models (a subset of Machine Learning) are presented, based on a Convolutional Neural Network (CNN) and a Capsule Neural Network (CapsNet). The primary advantage of such models is that they do not require the user to pre-define selected features: they only require raw data and establish non-linear relationships between inputs and outputs. The performance of the models is evaluated using AE signals generated by a custom ray-tracing algorithm that propagates them on an aluminium plate, and is compared to AIC. The relative estimation error on the test set was below 5% for the models, compared to around 45% for AIC. The testing process was further continued by preparing an experimental setup and acquiring real AE signals. Similar performance was observed: the two models not only outperform AIC by more than an order of magnitude in their average errors, but are also shown to be far more robust than AIC, which fails in the presence of noise.
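
For reference, a common form of the AIC picker used as the baseline above is the Maeda-style formulation, where the onset index minimizes AIC(k) = k log(var(x[0:k])) + (N-k-1) log(var(x[k:N])). The sketch below is a minimal, generic implementation of that baseline (not the thesis code); the synthetic test signal, noise level and sampling rate are assumptions of this sketch.

```python
import numpy as np

def aic_picker(x):
    """Maeda-style AIC picker: return the sample index k that minimizes
    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(1, n - 1):
        v1 = np.var(x[:k])
        v2 = np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic example (assumed, not from the thesis): noise followed by a
# decaying burst starting at sample 300 of a 1 kHz record.
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(1000) / fs
signal = 0.05 * rng.standard_normal(t.size)
onset = 300
signal[onset:] += np.exp(-(t[onset:] - t[onset]) * 20) * np.sin(2 * np.pi * 100 * t[onset:])

print("picked onset sample:", aic_picker(signal), "(true onset:", onset, ")")
```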

Relevance:

60.00%

Publisher:

Abstract:

Privacy issues and data scarcity in the PET field call for efficient methods to expand datasets via the synthetic generation of new data that cannot be traced back to real patients and that are also realistic. In this thesis, machine learning techniques were applied to 1001 amyloid-beta PET images for which a diagnosis of Alzheimer's disease had been assessed: the evaluations were 540 positive, 457 negative and 4 unknown. The Isomap algorithm was used as a manifold learning method to reduce the dimensionality of the PET dataset; a numerical scale-free interpolation method was applied to invert the dimensionality reduction map. The interpolant was tested on the PET images via LOOCV, where the removed images were compared with the reconstructed ones using the mean SSIM index (MSSIM = 0.76 ± 0.06). The effectiveness of this measure is questioned, since it indicated slightly higher performance for a comparison method using PCA (MSSIM = 0.79 ± 0.06), which gave clearly poorer-quality reconstructed images than those recovered by the numerical inverse mapping. Ten synthetic PET images were generated and, after being mixed with ten originals, were sent to a team of clinicians for a visual assessment of their realism; no significant agreement was found either between the clinicians and the true image labels or among the clinicians, meaning that original and synthetic images were indistinguishable. The future perspective of this thesis points to the improvement of the amyloid-beta PET research field by increasing the available data, overcoming the constraints of data acquisition and privacy issues. Potential improvements can be achieved via refinements of the manifold learning and inverse mapping stages of the PET image analysis, by exploring different combinations of algorithm parameters and by applying other non-linear dimensionality reduction algorithms. A final prospect of this work is the search for new methods to assess image reconstruction quality.
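
A minimal sketch of the kind of pipeline described above (Isomap embedding, an interpolated inverse map, and SSIM-based comparison on a held-out image) is given below. The toy data, image sizes and parameters are assumptions, and an RBF interpolator is used as a stand-in for the thesis's scale-free numerical interpolation.

```python
import numpy as np
from sklearn.manifold import Isomap
from scipy.interpolate import RBFInterpolator
from skimage.metrics import structural_similarity as ssim

# Toy data standing in for flattened PET images (the real study used
# 1001 amyloid-beta scans; shapes and sizes here are assumptions).
rng = np.random.default_rng(42)
images = rng.random((200, 32 * 32))

# One leave-one-out fold: hold out the first image.
test, train = images[:1], images[1:]

# Forward map: non-linear dimensionality reduction with Isomap.
iso = Isomap(n_neighbors=10, n_components=5).fit(train)
train_embedding = iso.transform(train)

# Approximate inverse map: interpolate from low-dimensional coordinates
# back to image space (an RBF interpolator stands in for the scale-free
# interpolation method used in the thesis).
inverse_map = RBFInterpolator(train_embedding, train, kernel="thin_plate_spline")

# Embed the held-out image, map it back, and compare with SSIM.
reconstructed = inverse_map(iso.transform(test))[0]
score = ssim(test[0].reshape(32, 32), reconstructed.reshape(32, 32),
             data_range=test[0].max() - test[0].min())
print(f"SSIM of held-out reconstruction: {score:.3f}")
```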

Relevance:

60.00%

Publisher:

Abstract:

Worldwide, biodiversity is decreasing due to climate change, habitat fragmentation and agricultural intensification. Bees are essential crop pollinators, but their abundance and diversity are decreasing as well, and their conservation requires assessing the status of bee populations. Field data collection methods are expensive and time-consuming; thus, new methods based on remote sensing have recently been adopted. In this study we tested the possibility of using flower cover diversity estimated from UAV images (FCD-UAV) to assess bee diversity and abundance in 10 agricultural meadows in the Netherlands. To do so, field data on flower and bee diversity and abundance were collected during a campaign in May 2021. Furthermore, RGB images of the areas were collected using an Unmanned Aerial Vehicle (UAV) and post-processed into orthomosaics. Lastly, the Random Forest machine learning algorithm was applied to estimate the FCD of the species detected in each field. The resulting FCD was expressed with the Shannon and Simpson diversity indices, which were then correlated with the bee Shannon and Simpson diversity indices, abundance and species richness. The results showed a positive relationship between FCD-UAV and the in-situ data on bee diversity (evaluated with the Shannon index), abundance and species richness. The strongest relationship was found between FCD (Shannon index) and bee abundance, with R2=0.52. Good correlations were also found with bee species richness (R2=0.39) and bee diversity (R2=0.37). The R2 values of the relationships between FCD (Simpson index) and bee abundance, species richness and diversity were slightly lower (0.45, 0.37 and 0.35, respectively). Our results suggest that the proposed method, based on the coupling of UAV imagery and machine learning for the assessment of flower species diversity, could be developed into a valuable tool for large-scale, standardized and cost-effective monitoring of flower cover and of habitat quality for bees.
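
As an illustration of the diversity indices used above, the sketch below computes Shannon and Simpson indices from per-species flower cover fractions and fits a simple linear regression against bee abundance. The per-field cover values and bee counts are invented placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import linregress

def shannon(p):
    """Shannon diversity H' = -sum(p_i * ln p_i) over non-zero proportions."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p[p > 0].sum()
    return -np.sum(p * np.log(p))

def simpson(p):
    """Simpson diversity 1 - sum(p_i^2)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

# Placeholder per-field flower cover fractions (one list per field) and
# bee counts (invented values, not the study's data).
fields = [
    [0.40, 0.30, 0.20, 0.10],
    [0.70, 0.20, 0.10],
    [0.25, 0.25, 0.25, 0.25],
    [0.90, 0.05, 0.05],
    [0.50, 0.30, 0.15, 0.05],
]
bee_abundance = [34, 18, 45, 9, 28]

fcd_shannon = [shannon(f) for f in fields]
fcd_simpson = [simpson(f) for f in fields]

fit = linregress(fcd_shannon, bee_abundance)
print(f"R^2 of bee abundance vs FCD (Shannon): {fit.rvalue ** 2:.2f}")
```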

Relevance:

40.00%

Publisher:

Abstract:

Stress recovery techniques have been an active research topic since 1987, when Zienkiewicz and Zhu proposed a procedure called Superconvergent Patch Recovery (SPR). This procedure is a least-squares fit of stresses at superconvergent points over patches of elements, and it leads to enhanced stress fields that can be used for evaluating finite element discretization errors. In subsequent years, numerous improved forms of this procedure have been proposed, attempting to add equilibrium constraints to improve its performance. Later, another superconvergent technique, called Recovery by Equilibrium in Patches (REP), was proposed. In this case the idea is to impose equilibrium in a weak form over patches and solve the resulting equations by a least-squares scheme. In recent years another procedure, based on the minimization of complementary energy, called Recovery by Compatibility in Patches (RCP), has been proposed. This procedure can in many ways be seen as the dual form of REP, as it essentially imposes compatibility in a weak form among a set of self-equilibrated stress fields. In this thesis a new insight into RCP is presented and the procedure is improved, aiming at obtaining convergent second-order derivatives of the stress resultants. In order to achieve this result, two different strategies and their combination have been tested. The first is to consider larger patches, in the spirit of what is proposed in [4]; the second is to perform a second recovery on the recovered stresses. Some numerical tests in plane stress conditions are presented, showing the effectiveness of these procedures. Afterwards, a new recovery technique called Least Square Displacements (LSD) is introduced. This new procedure is based on a least-squares interpolation of the nodal displacements resulting from the finite element solution. In fact, it has been observed that the major part of the error affecting stress resultants is introduced when the shape functions are differentiated in order to obtain strain components from displacements. This procedure proves to be ultraconvergent and is extremely cost effective, as it requires as input only the nodal displacements coming directly from the finite element solution, avoiding any other post-processing needed to obtain stress resultants by the traditional method. Numerical tests in plane stress conditions are then presented, showing that the procedure is ultraconvergent and leads to convergent first- and second-order derivatives of the stress resultants. Finally, the reconstruction of transverse stress profiles using First-order Shear Deformation Theory for laminated plates and the three-dimensional equilibrium equations is presented. The accuracy of this reconstruction depends on the accuracy of the first and second derivatives of the stress resultants, which is not guaranteed by most available low-order plate finite elements. The RCP and LSD procedures are then used to compute convergent first- and second-order derivatives of the stress resultants, ensuring convergence of the reconstructed transverse shear and normal stress profiles, respectively. Numerical tests are presented and discussed, showing the effectiveness of both procedures.
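
For orientation, the basic SPR least-squares patch fit referred to above can be written as follows; this is the standard textbook form of the Zienkiewicz-Zhu procedure, not the thesis's specific RCP or LSD formulation. Over each patch, a polynomial expansion of the recovered stress is fitted to the finite element stresses sampled at the superconvergent points, and the minimization yields a small linear system for the coefficients.

```latex
% Standard SPR patch fit (textbook form, not the thesis's RCP/LSD equations).
% sigma^*(x) = P(x) a, with P a polynomial basis and a the unknown coefficients,
% fitted to the FE stresses sigma_h at the n superconvergent sampling points x_k:
\[
  \sigma^*(\mathbf{x}) = \mathbf{P}(\mathbf{x})\,\mathbf{a}, \qquad
  \min_{\mathbf{a}} \; F(\mathbf{a}) =
  \sum_{k=1}^{n} \big[\, \sigma_h(\mathbf{x}_k) - \mathbf{P}(\mathbf{x}_k)\,\mathbf{a} \,\big]^2
\]
\[
  \Rightarrow \quad
  \left( \sum_{k=1}^{n} \mathbf{P}^{\mathsf{T}}(\mathbf{x}_k)\,\mathbf{P}(\mathbf{x}_k) \right) \mathbf{a}
  = \sum_{k=1}^{n} \mathbf{P}^{\mathsf{T}}(\mathbf{x}_k)\,\sigma_h(\mathbf{x}_k)
\]
```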

Relevance:

40.00%

Publisher:

Abstract:

Nowadays communication is switching from a centralized scenario, where communication media like newspapers, radio and TV programs produce information and people are just consumers, to a completely different, decentralized scenario, where everyone is potentially an information producer through social networks, blogs and forums that allow a real-time, worldwide exchange of information. These new instruments, as a result of their widespread diffusion, have started playing an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information that enterprises, political parties and other organizations can rely on. Analyzing data stored in servers all over the world is feasible by means of Text Mining techniques such as Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This can be used to determine, for instance, the degree of user satisfaction with products, services, politicians and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov Chains. All these approaches rely on a Markov Chain based model, which is language independent and whose key features are simplicity and generality, making it interesting with respect to previous, more sophisticated techniques. Every discussed technique has been tested in both Single-Domain and Cross-Domain Sentiment Classification settings, comparing its performance with that of two previous works. The performed analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, with reference to both single-domain and cross-domain tasks, in 2-class (i.e. positive and negative) Document Sentiment Classification. However, there is still room for improvement, because this work also indicates the way forward for enhancing performance: a good novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also address validating these results in tasks with more than 2 classes.
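
To make the general idea concrete, the sketch below trains one word-transition (Markov chain) model per sentiment class and classifies a document by which chain assigns it the higher log-likelihood. This is a generic illustration of a Markov-chain-based classifier, not the specific model of the dissertation, and the tiny corpus is invented.

```python
import math
from collections import defaultdict

def train_chain(docs):
    """Estimate word-to-word transition counts for one sentiment class."""
    counts = defaultdict(lambda: defaultdict(int))
    for doc in docs:
        words = doc.lower().split()
        for prev, cur in zip(words, words[1:]):
            counts[prev][cur] += 1
    return counts

def log_likelihood(doc, counts, vocab_size, alpha=1.0):
    """Log-probability of the document's word transitions under a chain,
    with add-alpha smoothing for unseen transitions."""
    words = doc.lower().split()
    score = 0.0
    for prev, cur in zip(words, words[1:]):
        row = counts.get(prev, {})
        total = sum(row.values())
        score += math.log((row.get(cur, 0) + alpha) / (total + alpha * vocab_size))
    return score

# Tiny invented corpus (placeholder, not the dissertation's datasets).
positive = ["great product really works", "really love this great phone"]
negative = ["terrible product really bad", "really hate this bad phone"]

vocab = {w for d in positive + negative for w in d.lower().split()}
chains = {"positive": train_chain(positive), "negative": train_chain(negative)}

test_doc = "this great phone really works"
prediction = max(chains, key=lambda c: log_likelihood(test_doc, chains[c], len(vocab)))
print("predicted class:", prediction)
```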