12 results for second pre-image attack

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

30.00%

Publisher:

Abstract:

The research performed during the PhD candidature was intended to evaluate the quality of white wines as a function of the reduction in SO2 use during the first steps of the winemaking process. In order to investigate the mechanism and intensity of the interactions occurring between lysozyme and the principal macro-components of musts and wines, a series of experiments on model wine solutions was undertaken, focusing attention on the polyphenol, SO2, oenological tannin, pectin, ethanol, and sugar components. In the second part of this research programme, a series of conventional sulphite-added vinifications was compared to vinifications in which sulphur dioxide was replaced by lysozyme, in order to define potential winemaking protocols suitable for the production of SO2-free wines. To reach this goal, the technological performance of two selected yeast strains with a low aptitude to produce SO2 during fermentation was also evaluated. The data obtained suggested that the addition of lysozyme and oenological tannins during alcoholic fermentation could represent a promising alternative to the use of sulphur dioxide and a reliable starting point for the production of SO2-free wines. The different vinification protocols studied influenced the composition of the volatile profile of the wines at the end of alcoholic fermentation, especially with regard to alcohols and ethyl esters, partly as a consequence of the yeasts' response to the presence or absence of sulphites during fermentation, contributing in different ways to the sensory profiles of the wines. Indeed, the amino acid analysis showed that lysozyme can affect nitrogen consumption as a function of the yeast strain used in fermentation. During bottle storage, the evolution of volatile compounds is affected by the presence of SO2 and oenological tannins, confirming their positive role in scavenging oxygen and maintaining the amounts of esters above certain levels, thereby avoiding a decline in wine quality. Even though a natural decrease in the phenolic profiles was found, due to oxidation caused by the oxygen dissolved in the medium during the storage period, the presence of SO2 together with tannins counteracted the decay of the phenolic content present at the end of fermentation. Tannins also played a central role in preserving the polyphenolic profile of the wines during the storage period, confirming their antioxidant property as reductants. Our study of the fundamental chemistry relevant to the oxidative phenolic spoilage of white wines demonstrated the suitability of glutathione, at the concentrations at which it typically occurs in wine, to inhibit the production of yellow xanthylium cation pigments generated from flavanols and glyoxylic acid. The ability of glutathione to bind glyoxylic acid rather than acetaldehyde may enable glutathione to be used as a 'switch' for glyoxylic acid-induced polymerisation mechanisms, as opposed to the equivalent acetaldehyde-induced polymerisation, in processes such as micro-oxygenation. Further research is required to assess the ability of glutathione to prevent xanthylium cation production during the in-situ production of glyoxylic acid and in the presence of sulphur dioxide.

Relevance:

30.00%

Publisher:

Abstract:

Images of a scene, static or dynamic, are generally acquired at different epochs from different viewpoints. They potentially gather information about the whole scene and its relative motion with respect to the acquisition device. Data from different visual sources (in the spatial or temporal domain) can be fused together to provide a unique, consistent representation of the whole scene, even recovering the third dimension, permitting a more complete understanding of the scene content. Moreover, the pose of the acquisition device can be recovered by estimating the relative motion parameters linking different views, thus providing localization information for automatic guidance purposes. Image registration is based on the use of pattern recognition techniques to match corresponding parts of different views of the acquired scene. Depending on hypotheses or prior information about the sensor model, the motion model and/or the scene model, this information can be used to estimate global or local geometrical mapping functions between different images or between different parts of them. These mapping functions contain the relative motion parameters between the scene and the sensor(s) and can be used to integrate the information coming from the different sources accordingly, building a wider or even augmented representation of the scene. For their scene reconstruction and pose estimation capabilities, image registration techniques from multiple views are nowadays attracting increasing interest from the scientific and industrial community. Depending on the application domain, the accuracy, robustness, and computational load of the algorithms are important issues to be addressed, and generally a trade-off among them has to be reached. Moreover, on-line performance is desirable in order to guarantee the direct interaction of the vision device with human actors or control systems. This thesis follows a general research approach to cope with these issues, almost independently of the scene content, under the constraint of rigid motions. This approach has been motivated by portability to very different domains, a very desirable property to achieve. A general image registration approach suitable for on-line applications has been devised and assessed through two challenging case studies in different application domains. The first case study regards scene reconstruction through on-line mosaicing of optical microscopy cell images acquired with non-automated equipment, while manually moving the microscope holder. By registering the images, the field of view of the microscope can be widened, preserving the resolution while reconstructing the whole cell culture and permitting the microscopist to explore it interactively. In the second case study, the registration of terrestrial satellite images acquired by a camera integral with the satellite is used to estimate the satellite's three-dimensional orientation from visual data, for automatic guidance purposes. Critical aspects of these applications are emphasized and the choices adopted are motivated accordingly. Results are discussed in view of promising future developments.
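
The thesis itself reports no code here; the following is a minimal, purely illustrative sketch of the feature-based registration step described above (matching corresponding parts of two views and estimating a global geometrical mapping function) using OpenCV. The choice of ORB features, the RANSAC threshold, and the homography motion model are assumptions for the sketch, not necessarily the thesis' pipeline.

```python
import cv2
import numpy as np

def register_pair(img_ref, img_mov):
    """Estimate a global homography mapping img_mov onto img_ref
    from matched local features (illustrative sketch only)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_mov, None)
    # Match binary descriptors; keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier matches while fitting the mapping function,
    # which carries the relative motion parameters between the two views.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```

For mosaicing, each new frame would then be warped into the reference frame with cv2.warpPerspective and blended into the growing composite.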

Relevance:

30.00%

Publisher:

Abstract:

The purpose of the first part of the research activity was to develop an aerobic cometabolic process in packed bed reactors (PBRs) to treat real groundwater contaminated by trichloroethylene (TCE) and 1,1,2,2-tetrachloroethane (TeCA). In an initial screening conducted in batch bioreactors, different groundwater samples from 5 wells of the contaminated site were fed with 5 growth substrates. The work led to the selection of butane as the best growth substrate, and to the development and characterization, from the site's indigenous biomass, of a suspended-cell consortium capable of degrading TCE with 90% mineralization of the organic chlorine. A kinetic study conducted in batch and continuous-flow PBRs led to the identification of the best carrier. A kinetic study of butane and TCE biodegradation indicated that the attached-cell consortium is characterized by lower TCE-specific degradation rates and by a lower level of mutual butane-TCE inhibition. A 31 L bioreactor was designed and set up to scale up the experiment. The second part of the research focused on the biodegradation of 4 polymers, with and without chemical pre-treatments: linear low-density polyethylene (LLDPE), polypropylene (PP), polystyrene (PS) and polyvinyl chloride (PVC). Initially, the 4 polymers were subjected to different chemical pre-treatments: ozonation and UV/ozonation, in the gaseous and aqueous phases. It was found that, for LLDPE and PP, coupling UV and ozone in the gas phase is the most effective way to oxidize the polymers and to generate carbonyl groups on the polymer surface. In further tests, the effect of the chemical pre-treatment on polymer biodegradability was studied. Gas-phase ozonated and virgin polymers were incubated aerobically with: (a) a pure strain, (b) a mixed culture of bacteria; and (c) a fungal culture, together with saccharose as a co-substrate.
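
A minimal sketch of the mutual butane-TCE inhibition mentioned above can be written with Michaelis-Menten-type rates carrying competitive inhibition terms. All parameter values, and the constant-biomass simplification, are hypothetical illustrations, not the fitted values of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustration only, not the study's estimates)
q_b, q_t = 1.2, 0.08   # max specific degradation rates [mg/(mg biomass * h)]
Ks_b, Ks_t = 0.5, 0.3  # half-saturation constants [mg/L]
Ki_b, Ki_t = 0.4, 0.6  # competitive inhibition constants [mg/L]
X = 50.0               # attached biomass [mg/L], assumed constant here

def rates(t, y):
    Sb, St = y  # butane and TCE concentrations [mg/L]
    # Each substrate competitively inhibits the degradation of the other.
    dSb = -q_b * X * Sb / (Ks_b * (1 + St / Ki_t) + Sb)
    dSt = -q_t * X * St / (Ks_t * (1 + Sb / Ki_b) + St)
    return [dSb, dSt]

sol = solve_ivp(rates, (0.0, 24.0), [5.0, 1.0], dense_output=True)
```

A lower level of mutual inhibition, as observed for the attached-cell consortium, corresponds to larger Ki values, i.e. weaker coupling between the two rate equations.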

Relevance:

30.00%

Publisher:

Abstract:

Although the debate over what data science is has a long history and has not yet reached complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed within the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, with performance comparable to the interpretation of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of the LHC experiments, with a focus on supporting ATLAS operations concerning data transfer failures. In particular, we analyze the error messages produced by failed transfers and propose a Machine Learning pipeline that leverages the word2vec language model and K-means clustering. This provides groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing a promising ability to understand message content and provide meaningful groupings, in line with incidents previously reported by human operators.
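
The word2vec plus K-means pipeline lends itself to a short sketch. The error strings, the hyperparameters (vector_size, number of clusters) and the mean-of-word-vectors message embedding below are illustrative assumptions, not the actual ATLAS pipeline.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Hypothetical error messages standing in for real transfer-failure logs.
messages = [
    "connection timed out after 3600 seconds",
    "connection timed out after 600 seconds",
    "checksum mismatch at destination",
    "checksum verification failed at destination",
    "permission denied by destination storage",
    "permission denied no such user mapping",
]
tokens = [m.lower().split() for m in messages]

# Learn word embeddings on the corpus of tokenized messages.
w2v = Word2Vec(tokens, vector_size=64, window=5, min_count=1, epochs=50)

# Represent each message by the mean of its word vectors, then cluster.
emb = np.array([w2v.wv[t].mean(axis=0) for t in tokens])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)
print(labels)  # similar errors should share a cluster label
```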

Relevance:

30.00%

Publisher:

Abstract:

Our aim in this thesis is to propose CNN architectures that model the early visual pathway, including the Lateral Geniculate Nucleus (LGN) and the horizontal connectivity of the primary visual cortex (V1). Moreover, we show how cortically inspired architectures allow contrast perceptual invariance to be achieved, as well as grouping and the emergence of visual percepts. In particular, the LGN is modeled with a first layer l0 containing a single filter Ψ0 that pre-filters the image I. Since the receptive profiles (RPs) of LGN cells can be modeled as a Laplacian of Gaussian (LoG), we expect to obtain a radially symmetric filter with a similar shape; to this end, we prove the rotational invariance of Ψ0 and study the influence of this filter on the subsequent layer. Indeed, we compare the statistical distribution of the filters in the second layer l1 of our architecture with the statistical distribution of the RPs of macaque V1 cells. Then, we model the horizontal connectivity of V1 by applying a transition kernel K1 to the layer l1. In this setting, we study the vector fields and the association fields induced by the connectivity kernel K1. To this end, we first approximate the filter bank in l1 with a Gabor function and use the parameters thus found to re-parameterize the kernel. Thanks to this step, the kernel is re-parameterized in the sub-Riemannian space R2 × S1. We are then able to compare the vector and association fields induced by K1 with models of the horizontal connectivity.
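
For concreteness, the two filter models mentioned above, the LoG model of the LGN receptive profiles and the Gabor function used to approximate the l1 filters, admit a short numerical sketch; sizes and parameter choices below are illustrative assumptions, not the thesis' fitted values.

```python
import numpy as np

def log_filter(size=15, sigma=2.0):
    """Laplacian of Gaussian: the classical model of LGN receptive
    profiles, radially symmetric like the expected filter Psi0."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x**2 + y**2
    g = np.exp(-r2 / (2 * sigma**2))
    log = (r2 - 2 * sigma**2) / sigma**4 * g
    return log - log.mean()  # enforce zero mean

def gabor(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0, gamma=0.5):
    """Gabor function of the kind used to approximate the filter bank
    in layer l1 (the orientation theta parameterizes the S1 fiber)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)
```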

Relevance:

30.00%

Publisher:

Abstract:

In recent years a great effort has been put into the development of new techniques for automatic object classification, also because of their consequences for many applications such as medical imaging or driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition, the Tensor-Train (TT) decomposition. The use of tensor approaches preserves the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, the Tensor-Train, differently from other tensor decompositions, does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without degrading classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition, to find basis vectors used to classify a new object. The second model is a tensor dictionary learning model based on the TT decomposition, where the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
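
For background, the TT decomposition is classically computed by the TT-SVD algorithm: a chain of truncated SVDs over successive matricizations of the tensor. The sketch below is a generic textbook-style implementation under that assumption, not the thesis code; the prescribed relative accuracy eps drives the rank truncation and hence the compression.

```python
import numpy as np

def tt_svd(tensor, eps=1e-2):
    """Decompose a d-way tensor (d >= 2) into TT cores of shape
    (r_prev, n_k, r_next) via sequential truncated SVDs."""
    shape, d = tensor.shape, tensor.ndim
    delta = eps * np.linalg.norm(tensor) / np.sqrt(d - 1)  # error budget
    cores, r, C = [], 1, tensor
    for k in range(d - 1):
        C = C.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        # Smallest rank whose discarded tail stays within the budget.
        keep = len(s)
        for j in range(len(s)):
            if np.sqrt((s[j:] ** 2).sum()) <= delta:
                keep = max(j, 1)
                break
        cores.append(U[:, :keep].reshape(r, shape[k], keep))
        C = np.diag(s[:keep]) @ Vt[:keep]
        r = keep
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

T = np.random.rand(8, 8, 8, 8)
print([c.shape for c in tt_svd(T, eps=0.1)])  # TT ranks stay modest
```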

Relevance:

30.00%

Publisher:

Abstract:

Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on the performance of many existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme using a deep-learning-based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction, through deep-learning-based methods. We boost the performance of supervised strategies, such as trained convolutional and recurrent networks, and of unsupervised deep learning strategies, such as the Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
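
As a minimal illustration of a gradient sparsity-promoting variational model of the kind discussed above, consider plain gradient descent on a smoothed-TV regularized least-squares objective. The forward operator A and its adjoint At are passed as callables; the periodic boundary handling and all parameter values are assumptions for the sketch, not the thesis' algorithms.

```python
import numpy as np

def tv_reconstruct(b, A, At, mu=0.01, beta=1e-3, steps=200, lr=0.5):
    """Gradient descent on 0.5*||A(x)-b||^2 + mu*sum(sqrt(|grad x|^2 + beta)),
    a smoothed total-variation (gradient-sparsity) model."""
    x = At(b)  # crude initialization
    for _ in range(steps):
        g = At(A(x) - b)  # data-fidelity gradient
        # Forward differences with periodic boundaries.
        dx = np.roll(x, -1, axis=1) - x
        dy = np.roll(x, -1, axis=0) - x
        n = np.sqrt(dx**2 + dy**2 + beta)
        px, py = dx / n, dy / n
        # Divergence (adjoint of the forward difference) gives the TV gradient.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x = x - lr * (g - mu * div)
    return x

# Denoising as the simplest case: A is the identity.
noisy = np.random.rand(64, 64)
clean = tv_reconstruct(noisy, A=lambda z: z, At=lambda z: z)
```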

Relevance:

30.00%

Publisher:

Abstract:

Three-dimensional (3D) printers for continuous fiber-reinforced composites, such as the MarkTwo (MT) by Markforged, can be used to manufacture such composite structures. To date, only a few, very recent research works have been devoted to the study and application of flexible elements and compliant mechanisms (CMs) realized with the MT printer. A good numerical and/or analytical tool for analyzing the mechanical behavior of these new composites is still missing. In addition, there is still a gap in obtaining the material properties (e.g., the elastic modulus), which are usually unknown and sensitive to the printing parameters used (e.g., the infill density), making numerical simulation inaccurate. Consequently, the aim of this thesis is to present several lines of work. The first is a preliminary investigation of the tensile and flexural response of Straight Beam Flexures (SBF) realized with the MT printer and featuring different interlayer fiber volume fractions and orientations, as well as different laminate positions within the sample. The second is a numerical analysis within the Carrera Unified Formulation (CUF) framework, based on a component-wise (CW) approach, including a novel preprocessing tool developed to account for all printed regions in an easy and time-efficient way. Among its benefits, the CUF-CW approach enables building an accurate database collecting the first natural frequencies and modes, and then predicting the Young's modulus through an inverse problem formulation. To validate the tool, the numerical results are compared to experimental natural frequencies evaluated using a digital image correlation method. Further, we take the CUF-CW model and use static condensation to analyze smart structures that can be decomposed into a large number of similar components. Third, the potential of the MT printer in combination with topology optimization and compliant joints design (CJD) is investigated for the realization of automated machinery mechanisms subjected to inertial loads.
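
To convey how Young's modulus can be predicted from measured natural frequencies via an inverse problem, the sketch below inverts the closed-form Euler-Bernoulli relation for the first natural frequency of a cantilever; the thesis instead relies on the full CUF-CW model and database, and all numbers here are hypothetical.

```python
import numpy as np

# Hypothetical printed-beam data (not the thesis' specimens)
L, b, h = 0.12, 0.015, 0.003   # length, width, thickness [m]
rho = 1200.0                   # effective density [kg/m^3]
f1 = 85.0                      # measured first natural frequency [Hz]

A = b * h                      # cross-sectional area [m^2]
I = b * h**3 / 12              # second moment of area [m^4]
lam1 = 1.8751                  # first cantilever eigenvalue

# Euler-Bernoulli cantilever: f1 = (lam1^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)),
# inverted here for the unknown elastic modulus E.
E = (2 * np.pi * f1 / lam1**2) ** 2 * rho * A * L**4 / I
print(f"Estimated Young's modulus: {E / 1e9:.2f} GPa")
```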

Relevance:

30.00%

Publisher:

Abstract:

Background and aims: perioperative treatment is currently the gold-standard approach for locally advanced gastric cancer (GC). Unfortunately, patients dropping out of treatment is a frequently observed phenomenon. The primary aim of this study was to verify whether routine blood parameters, inflammatory response markers, sarcopenia, and the depletion of adipose tissues were associated with compliance with neoadjuvant/perioperative chemotherapy. Methods and study design: sarcopenia and adipose indices were calculated from a CT scan before starting chemotherapy and before surgery. Blood samples were considered before the first and second cycles of chemotherapy. Results: a total of 84 patients with localized operable GC were identified between September 2010 and January 2021. Forty-four patients (52.4%) did not complete the treatment according to the number of cycles planned/performed. Eight patients (9.5%) decided to suspend chemotherapy, seven patients (8.3%) discontinued because of clinical decision-making, 14 patients (16.7%) because of toxicity, and 15 patients (17.9%) for miscellaneous causes. Sarcopenia was present in 38 patients (50.7%) before starting chemotherapy and in 47 patients (60%) at the CT scan before gastrectomy. In multivariable analysis, both a platelet-to-lymphocyte ratio (PLR) above the cut-off at baseline and at the second control (OR = 5.03, 95% CI: 1.34-18.89, p-value = 0.017) and a PLR that increased from below to above the cut-off at the second control (OR = 4.64, 95% CI: 1.02-21.02, p-value = 0.047) were associated with incomplete compliance. Conclusions: among the biological indicators, changes in the value of PLR with a tendency to increase beyond the cut-off appear to be an immediate indicator of incomplete compliance with neoadjuvant/perioperative treatment. More information is needed to reduce the causes of interruption.
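
Purely as an illustration of how an odds ratio for a dichotomized PLR can be obtained, the sketch below fits a logistic regression on a tiny, entirely fabricated toy cohort; the cut-off and every count are hypothetical and bear no relation to the study's data.

```python
import numpy as np
import statsmodels.api as sm

# Fabricated toy cohort: counts per microliter and treatment compliance.
platelets = np.array([250, 310, 180, 420, 295, 350]) * 1e3
lymphocytes = np.array([2.0, 1.0, 2.2, 0.9, 1.5, 1.0]) * 1e3
incomplete = np.array([0, 1, 1, 1, 0, 1])  # 1 = incomplete compliance

plr = platelets / lymphocytes       # platelet-to-lymphocyte ratio
cutoff = 152.0                      # illustrative cut-off, not the study's
above = (plr > cutoff).astype(float)

# Logistic regression of compliance on the dichotomized PLR.
fit = sm.Logit(incomplete, sm.add_constant(above)).fit(disp=0)
odds_ratio = np.exp(fit.params[1])  # OR for PLR above the cut-off
print(f"OR = {odds_ratio:.2f}")
```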

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the criticism of Macedonian kingship in the ancient Iranian world. The question of indigenous opposition and resistance to the Greeks and Macedonians has been little addressed by ancient historians. This study therefore adopts a different, interdisciplinary perspective and seeks to understand where the utterly negative portrayal of Alexander and the Macedonians found in most Iranian sources stems from. The first part approaches the subject by examining the acts of violence committed by Alexander and his men against the Iranians during the expedition to Asia that might have led to such a portrayal in the Iranian sources. I have focused on looting, massacres and insults to deities, such as the looting of temples or the destruction of many settlements in ancient Iran handed down in the classical sources. To this end, an important part is devoted to the analysis of archaeological sources, especially the signs of destruction in areas such as Persia and Sogdiana. The second part analyses in detail the image of Alexander and, although they are mentioned much less frequently, his successors, as it appears in pre-Islamic Iranian literature, focusing on the faults and cruelties attributed to them against the Iranians, and especially against their religion. These are mostly Zoroastrian religious sources, whose clergy preserved a demonic image of the Macedonian kings. The third and final part collects further examples, from the classical tradition, of offences committed by the Diadochi and the Seleucids against the Iranians. At the same time, it examines how the Hellenistic rulers of Iranian origin, e.g. the Arsacids and the Orontids, opposed not only militarily but also ideologically the Macedonian tradition represented by the kingdoms of Macedonian descent, choosing a pro-Iranian tradition clearly different from the Greco-Roman one.

Relevance:

30.00%

Publisher:

Abstract:

This thesis is a contribution to the meta-terminological debate on the scholarly use of the term "monotheism" in relation to the religion of ancient Israel. Attention is directed primarily at one specific theme: the exploration of the theistic notion of divine "existence" (implicit in the use of "monotheism" as a lens of observation) and the problem of its application to the conceptualizations of the divine that emerge in the Hebrew Bible. First, "monotheism" as a term and concept is traced back to its historical origins in the intellectual environment of Cambridge Platonism in seventeenth-century England. Then, the contemporary debate on the use of the term "monotheism" in relation to the religion of ancient Israel is addressed, and the role of theistic "existence" as a distorting lens in the reading of the biblical texts is highlighted. The greater part of the thesis supports this claim with a detailed exegetical reading of three biblical passages chosen as case studies: Ps 82; 1 Kgs 18:20-40* and Zech 14:9. These analyses show how the theistic notion of an abstract divine existence is unable to account for the representation of the divine that emerges from these texts. At the same time, divine power is proposed as a heuristic category better suited to explaining these conceptualizations of the divine. The final section elaborates further on these results. Here the kingship of YHWH, as a metaphorical image of his power, is used to describe the changes in the conceptualization of this deity. The final argument is that nowhere in the biblical material addressed in this thesis is a notion similar to that of abstract divine existence to be found. Since such a notion is implicit in the use of the term "monotheism", these results call for an even more careful consideration of its use in the scholarly debate.

Relevance:

30.00%

Publisher:

Abstract:

Ill-conditioned inverse problems frequently arise in the life sciences, particularly in the context of image deblurring and medical image reconstruction. These problems have been addressed through iterative variational algorithms, which regularize the reconstruction by adding prior knowledge about the problem's solution. Despite the theoretical reliability of these methods, their practical utility is constrained by the time required to converge. Recently, the advent of neural networks has allowed the development of reconstruction algorithms that can compute highly accurate solutions with minimal time demands. Regrettably, it is well known that neural networks are sensitive to unexpected noise, and the quality of their reconstructions quickly deteriorates when the input is slightly perturbed. Modern efforts to address this challenge have led to the creation of massive neural network architectures, but this approach is unsustainable from both ecological and economic standpoints. The recently introduced GreenAI paradigm argues that developing sustainable neural network models is essential for practical applications. In this thesis, we aim to bridge the gap between theory and practice by introducing a novel framework that combines the reliability of model-based iterative algorithms with the speed and accuracy of end-to-end neural networks. Additionally, we demonstrate that our framework yields results comparable to state-of-the-art methods while using relatively small, sustainable models. In the first part of this thesis, we discuss the proposed framework from a theoretical perspective. We provide an extension of classical regularization theory, applicable in scenarios where neural networks are employed to solve inverse problems, and we show that there exists a trade-off between accuracy and stability. Furthermore, we demonstrate the effectiveness of our methods in common life-science-related scenarios. In the second part of the thesis, we initiate an exploration extending the proposed method into the probabilistic domain. We analyze some properties of deep generative models, revealing their potential applicability to addressing ill-posed inverse problems.
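
A minimal sketch of the hybrid idea, a stable model-based step followed by a small learned corrector, might look as follows in PyTorch; the Tikhonov-regularized pseudo-inverse, the tiny corrector network, and all sizes are illustrative assumptions, not the framework developed in the thesis.

```python
import torch
import torch.nn as nn

class HybridReconstructor(nn.Module):
    """Model-based estimate (regularized pseudo-inverse of the forward
    operator A) refined by a small, 'green' residual network."""
    def __init__(self, A, alpha=1e-2):
        super().__init__()
        m, n = A.shape
        # Tikhonov-regularized pseudo-inverse: (A^T A + alpha I)^{-1} A^T
        self.register_buffer(
            "A_reg", torch.linalg.solve(A.T @ A + alpha * torch.eye(n), A.T))
        self.corrector = nn.Sequential(
            nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, n))

    def forward(self, y):
        x0 = y @ self.A_reg.T           # stable model-based reconstruction
        return x0 + self.corrector(x0)  # learned residual correction

A = torch.randn(30, 50)   # hypothetical underdetermined forward operator
model = HybridReconstructor(A)
y = torch.randn(4, 30)    # batch of measurements
x_hat = model(y)          # reconstructed signals, shape (4, 50)
```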