13 results for deep level centres

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

80.00%

Abstract:

III-nitride materials are very promising for high-speed electronic and optical applications, but their performance still suffers from problems during high-quality epitaxial growth, the evolution of dislocations and defects, and a limited understanding of the fundamental physics of the materials and of device processing. This thesis mainly focuses on GaN-based heterostructures, with the aim of understanding metal-semiconductor interface properties, the influence of the 2DE(H)G on electrical and optical properties, and deep level states in GaN, InAlN and InGaN materials. Detailed electrical characterization of Schottky diodes on GaN and InAl(Ga)N/GaN heterostructures has been carried out in order to understand the metal-semiconductor interface properties of these materials. I have observed the occurrence of Schottky barrier inhomogeneity and the role of dislocations in causing leakage and creating electrically active defect states within the energy gap of the materials. Deep level transient spectroscopy (DLTS) has been employed on GaN, InAlN and InGaN materials, and several defect levels related to majority and minority carriers have been observed. In fact, some defects in the ternary layers and the GaN layer have been found to share common characteristics, which indicates that those defect levels have a similar origin, most probably Ga or N vacancies in GaN and the heterostructures. The roles of structural defects and roughness in enhancing the reverse leakage current and suppressing the mobility in InAlN/AlN/GaN-based high electron mobility transistor (HEMT) structures have been extensively investigated; these are identified as key issues for GaN technology. Optical spectroscopy methods have been employed to assess material quality and sub-band and defect-related transitions, and the results have been compared with the electrical characterization. The observation of 2DEG sub-band related absorption/emission in optical spectra has been identified and proposed for the first time in nitride-based polar heterostructures, and is well supported by simulation results. In addition, metal-semiconductor-metal (MSM) InAl(Ga)N/GaN photodetector structures have been fabricated and are proposed as a route to highly efficient optoelectronic devices in the future.
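
For context on the DLTS analysis mentioned above: trap parameters are conventionally extracted from an Arrhenius plot of the thermal emission rate, e_n = γ σ_n T² exp(-E_a / k_B T). The sketch below uses hypothetical emission-rate data and a GaN-like material constant γ (both are assumptions for illustration, not values from the thesis) to show how the activation energy E_a and apparent capture cross-section σ_n would be recovered:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant [eV/K]

# Hypothetical (T, e_n) pairs from DLTS rate-window maxima (illustrative only)
T = np.array([180.0, 190.0, 200.0, 210.0, 220.0])   # [K]
e_n = np.array([4.1, 14.0, 42.0, 115.0, 290.0])     # [1/s]

# Arrhenius analysis: e_n / T^2 = gamma * sigma * exp(-E_a / (k_B T)),
# so ln(e_n / T^2) is linear in 1/(k_B T) with slope -E_a.
x = 1.0 / (k_B * T)
y = np.log(e_n / T**2)
slope, intercept = np.polyfit(x, y, 1)

E_a = -slope                       # activation energy [eV]
gamma = 3.25e21                    # assumed GaN-like constant [cm^-2 s^-1 K^-2]
sigma = np.exp(intercept) / gamma  # apparent capture cross-section [cm^2]

print(f"E_a ~ {E_a:.2f} eV, sigma ~ {sigma:.1e} cm^2")
```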

Relevance:

80.00%

Abstract:

The interaction between disciplines in the study of human population history is of primary importance, profiting from the biological and cultural characteristics of humankind. In fact, data from genetics, linguistics, archaeology and cultural anthropology can be combined to allow for a broader research perspective. This multidisciplinary approach is here applied to the study of the prehistory of sub-Saharan African populations: in this continent, where Homo sapiens first evolved and diversified, understanding the patterns of human variation is of crucial relevance. In this dissertation, molecular data are interpreted and complemented with a major contribution from linguistics: linguistic data are compared with the genetic data, and the research questions are contextualized within a linguistic perspective. In the four articles presented, we analyze Y-chromosome SNP and STR profiles and full mtDNA genomes in a representative number of samples to investigate key questions of African human variability. These questions address i) the amount of genetic variation on a continental scale and the effects of the widespread migration of Bantu speakers; ii) the extent of ancient population structure that has been lost in present-day populations; iii) the colonization of the southern edge of the continent, together with the degree of population contact and replacement; and iv) the prehistory of the diverse Khoisan ethnolinguistic groups, traditionally understudied despite representing one of the most ancient divergences in the modern human phylogeny. Our results uncover a deep level of genetic structure within the continent and a multilayered pattern of contact between populations. These case studies represent a valuable contribution to the debate on our prehistory and open up further research threads.

Relevance:

80.00%

Abstract:

Semiconductor nanowires (NWs) are one-dimensional or quasi-one-dimensional systems whose physical properties are unique compared to bulk materials because of their nanoscale size. They bring together the quantum world and semiconductor devices. NW-based technologies may achieve an impact comparable to that of current microelectronic devices if new challenges are faced. This thesis primarily focuses on two different, cutting-edge aspects of research on semiconductor NW arrays as pivotal components of NW-based devices. The first part deals with the characterization of electrically active defects in NWs. A general procedure has been developed that enables Deep Level Transient Spectroscopy (DLTS) to probe the defects of NW arrays. This procedure has been applied to the characterization of a specific system, namely Schottky barrier diodes based on reactive ion etched (RIE) silicon NW arrays. This study has shed light on whether and how the growth conditions introduce defects in RIE-processed silicon NWs. The second part of this thesis concerns the bending induced by the electron beam and the subsequent clustering of gallium arsenide NWs. After a justified rejection of the mechanisms previously reported in the literature, an original interpretation of the electron-beam-induced bending is presented. Moreover, this thesis successfully interprets the formation of NW clusters in the framework of the lateral collapse of fibrillar structures. The latter are both idealized models and actual artificial structures used to study and to mimic the adhesion properties of natural surfaces in lizards and insects (gecko effect). Our conclusion is that the mechanical and surface properties of the NWs, together with the geometry of the NW arrays, play a key role in their post-growth alignment. The same parameters then open up the welcome possibility of locally engineering NW arrays in micro- and macro-templates.

Relevance:

30.00%

Abstract:

This thesis deals with two important research aspects concerning radio frequency (RF) microresonators and switches. First, a new approach for the compact modeling and simulation of these devices is presented. Then, a combined process flow for their simultaneous fabrication on an SOI substrate is proposed. Compact models for microresonators and switches are extracted by applying mathematical model order reduction (MOR) to the devices' finite element (FE) description in ANSYS. The behaviour of these devices includes nonlinearities. However, an approximation is introduced in the creation of the FE model that enables the use of linear model order reduction. Microresonators are modeled with the introduction of transducer elements, which allow for the direct coupling of the electrical and mechanical domains. The coupled-system element matrices are linearized around an operating point and reduced. The resulting macromodel is valid for small-signal analysis around the bias point, such as harmonic pre-stressed analysis. This is extremely useful for characterizing the frequency response of resonators. The compact modelling of switches preserves the nonlinearity of the device behaviour. Nonlinear reduced order models are obtained by reducing the number of nonlinearities in the system and handling them as inputs to the system. In this way, the system can be reduced using linear MOR techniques, and the nonlinearities are introduced directly into the reduced order model. Reducing the number of system nonlinearities implies approximating all distributed forces in the model with lumped forces. For both microresonators and switches, a procedure for matrix extraction has been developed so that the reduced order models include the effects of electrical and mechanical pre-stress. The extraction process is fast and can be performed automatically from ANSYS binary files. The method has been applied to the simulation of several devices, both at the device and at the circuit level. Simulation results have been compared with full-model simulations and, when available, with experimental data. The reduced order models have proven to preserve the accuracy of the finite element method and to give a good description of the overall device behaviour, despite the introduced approximations. In addition, simulation is very fast, both at the device and at the circuit level. A combined process flow for the integrated fabrication of microresonators and switches has been defined. For this purpose, two processes that are optimized for the independent fabrication of these devices are merged. The major advantage of this process is the possibility of creating on-chip circuit blocks that include both microresonators and switches; an application is, for example, a switched filter bank for a wireless transceiver. The process for microresonator fabrication is characterized by the use of silicon-on-insulator (SOI) wafers, a deep reactive ion etching (DRIE) step for the creation of the vibrating structures in single-crystal silicon, and a sacrificial oxide layer for the definition of the resonator-to-electrode distance. The fabrication of switches is characterized by the use of two different conductive layers for the definition of the actuation electrodes and by the use of a photoresist as a sacrificial layer for the creation of the suspended structures. Both processes include a gold electroplating step for the creation of the resonator electrodes, transmission lines and suspended structures. The combined process flow is designed such that it preserves the basic properties of the original processes: neither the performance of the resonators nor that of the switches is affected by the simultaneous fabrication. Moreover, common fabrication steps are shared, which allows for cheaper and faster fabrication.
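
To give a feel for the projection-based linear MOR described above (the thesis works on matrices exported from ANSYS; the random system below is a generic stand-in, not an actual device model), here is a minimal Krylov-subspace reduction of a linear state-space system:

```python
import numpy as np

def krylov_basis(A, b, r):
    """Orthonormal basis of the order-r Krylov subspace span{b, Ab, ..., A^(r-1) b}."""
    n = len(b)
    V = np.zeros((n, r))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, r):
        w = A @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)   # Gram-Schmidt against previous vectors
        V[:, j] = w / np.linalg.norm(w)
    return V

# Generic stand-in for a linearized FE system:  x' = A x + b u,  y = c^T x
rng = np.random.default_rng(0)
n = 200
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

V = krylov_basis(A, b, r=10)                    # projection basis
A_r, b_r, c_r = V.T @ A @ V, V.T @ b, V.T @ c   # reduced (Galerkin) system

# Compare transfer functions H(s) = c^T (sI - A)^{-1} b at a test frequency
s = 1j * 2.0
H_full = c @ np.linalg.solve(s * np.eye(n) - A, b)
H_red = c_r @ np.linalg.solve(s * np.eye(10) - A_r, b_r)
print(abs(H_full - H_red) / abs(H_full))        # relative reduction error
```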

Relevance:

30.00%

Abstract:

Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at the estimation of planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft and coherently retransmitted back to the ground. Once solar corona and interplanetary plasma noise has been removed from the Doppler data, the Earth troposphere remains one of the main error sources in the tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based upon a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. In order to support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the need to achieve an end-to-end Allan deviation of the radio link on the order of 3×10^-15 at 1000 s integration time. ESA's future BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute a value of 8×10^-15 to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay. In order to satisfy the stringent MORE requirements, the short-time-scale variations of the Earth troposphere water vapour content must be calibrated at the ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel with these high-performance instruments, the ESA ground stations should be upgraded with media calibration systems at least capable of calibrating both troposphere path delay components (dry and wet) at the sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide continuous troposphere calibration by processing GNSS data acquired at each complex by the dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration techniques that support both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables and error sources, Chapter 2 investigates the troposphere path delay in depth, reporting the estimation techniques and the state of the art of the ESA and NASA troposphere calibrations. Chapter 3 deals with an analysis of the status and performance of the NASA Advanced Media Calibration (AMC) system as applied to the Cassini data analysis. Chapter 4 describes the current release of the GNSS software (S/W) developed to estimate the troposphere calibration for ESA S/C navigation purposes. During the development phase of the S/W, a test campaign was undertaken in order to evaluate its performance. A description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of the microwave radiometers to be used to support radio science experiments; the analysis was carried out considering radiometric measurements of the ESA/ESTEC instruments installed in Cabauw (NL), compared with the MORE requirements. Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account in the development phase of future instrumentation.
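
Since the Allan deviation is the figure of merit quoted above, here is a minimal sketch of how the overlapping Allan deviation would be computed from a series of fractional-frequency residuals (the sampling interval and the synthetic white-noise data are assumptions for illustration, not mission data):

```python
import numpy as np

def overlapping_allan_deviation(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y
    at averaging factor m (tau = m * sampling interval)."""
    y_avg = np.convolve(y, np.ones(m) / m, mode="valid")  # length-m moving averages
    d = y_avg[m:] - y_avg[:-m]                            # differences at lag m
    return np.sqrt(0.5 * np.mean(d ** 2))

# Synthetic white-frequency-noise residuals sampled every 10 s (illustrative only;
# real inputs would be calibrated two-way Doppler residuals)
rng = np.random.default_rng(1)
tau0 = 10.0                            # sampling interval [s]
y = 1e-13 * rng.standard_normal(100_000)
for m in (1, 10, 100):                 # tau = 10, 100, 1000 s
    print(f"tau = {m * tau0:6.0f} s   sigma_y ~ {overlapping_allan_deviation(y, m):.2e}")
```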

Relevance:

30.00%

Abstract:

The modern stratigraphy of clastic continental margins is the result of the interaction of several geological processes acting on different time scales, among which sea-level oscillations, sediment supply fluctuations and local tectonics are the main mechanisms. During the past three years my PhD work focused on understanding the impact of each of these processes on the deposition of the central and northern Adriatic sedimentary successions, with the aim of reconstructing and quantifying the Late Quaternary eustatic fluctuations. In the last few decades, several authors have tried to quantify past eustatic fluctuations through the analysis of direct sea-level indicators, such as drowned barrier-island deposits or coral reefs, or through indirect methods, such as oxygen isotope ratios (δ18O) or modeling simulations. Sea-level curves obtained from direct sea-level indicators record a composite signal, formed by the contribution of the global eustatic change and regional factors, such as tectonic processes or glacial-isostatic rebound effects: the eustatic signal has to be obtained by removing the contribution of these other mechanisms. To obtain the most realistic sea-level reconstructions it is important to quantify the tectonic regime of the central Adriatic margin. This result has been achieved by integrating a numerical approach with the analysis of high-resolution seismic profiles. In detail, the subsidence trend obtained from the geohistory analysis and the backstripping of borehole PRAD1.2 (a 71 m continuous borehole drilled in 185 m of water depth, south of the Mid Adriatic Deep, MAD, during the European Project PROMESS 1, Profiles Across Mediterranean Sedimentary Systems, Part 1) has been confirmed by the analysis of lowstand paleoshorelines and by the benthic foraminifera associations investigated through the borehole. This work showed an evolution from an inner-shelf environment, during Marine Isotope Stage (MIS) 10, to upper-slope conditions, during MIS 2. Once the tectonic regime of the central Adriatic margin had been constrained, it was possible to investigate the impact of sea-level and sediment supply fluctuations on the deposition of the Late Pleistocene-Holocene transgressive deposits. The Adriatic transgressive record (TST, Transgressive Systems Tract) is formed by three correlative sedimentary bodies deposited in less than 14 kyr since the Last Glacial Maximum (LGM); in particular, along the central Adriatic shelf and in the adjacent slope basin the TST is formed by marine units, while along the northern Adriatic shelf the TST is represented by coastal deposits in a backstepping configuration. The central Adriatic margin, characterized by a thick transgressive sedimentary succession, is the ideal site to investigate the impact of late Pleistocene climatic and eustatic fluctuations, among which are Meltwater Pulses 1A and 1B and the Younger Dryas cold event. The central Adriatic TST is formed by a tripartite deposit bounded by two regional unconformities. In particular, the middle TST unit includes two prograding wedges, deposited in the interval between the two Meltwater Pulse events, as highlighted by several 14C age estimates, and likely records the Younger Dryas cold interval. Modeling simulations, obtained with the two coupled models HydroTrend 3.0 and 2D-Sedflux 1.0C (developed by the Community Surface Dynamics Modeling System, CSDMS), integrated with the analysis of high-resolution seismic profiles and core samples, indicate that: 1) the prograding middle TST unit, deposited during the Younger Dryas, was formed as a consequence of an increase in sediment flux, likely connected to a decline in vegetation cover in the catchment area due to the establishment of sub-glacial arid conditions; 2) the two-stage prograding geometry was the consequence of a sea-level stillstand (or possibly a fall) during the Younger Dryas event. The northern Adriatic margin, characterized by a broad and gentle shelf (350 km wide, plunging at a low angle of 0.02° to the SE), is the ideal site to quantify the timing of each step of the post-LGM sea-level rise. The modern shelf is characterized by sandy deposits of barrier-island systems in a backstepping configuration, showing younger ages at progressively shallower depths, which record the step-wise nature of the last sea-level rise. The age-depth model, obtained from dated samples of basal peat layers, is in good agreement with previously published sea-level curves and highlights the post-glacial eustatic trend. The interval corresponding to the Younger Dryas cold reversal, instead, is more complex: two coeval coastal deposits characterize the northern Adriatic shelf at very different water depths. Several explanations and different models can be attempted to explain this conundrum, but the problem remains unsolved.
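
As an aside, the age-depth relation mentioned above is typically built by regressing the depths of dated basal-peat samples against their calibrated ages. A minimal sketch with hypothetical (age, depth) pairs (illustrative values, not the thesis dataset, and ignoring age uncertainties and compaction corrections a real reconstruction would include):

```python
import numpy as np

# Hypothetical calibrated ages [kyr BP] and depths below present sea level [m]
# of basal peat samples (illustrative values only)
age_kyr = np.array([14.0, 12.5, 11.0, 9.5, 8.0])
depth_m = np.array([-95.0, -72.0, -55.0, -33.0, -18.0])

# Simple linear age-depth model
slope, intercept = np.polyfit(age_kyr, depth_m, 1)
print(f"mean rise rate ~ {-slope:.1f} m/kyr")

# Interpolate relative sea level at an arbitrary age
age_q = 10.0
print(f"RSL at {age_q} kyr BP ~ {slope * age_q + intercept:.1f} m")
```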

Relevance:

30.00%

Abstract:

Sea-level variability is characterized by multiple interacting factors, described in the Fourth Assessment Report (Bindoff et al., 2007) of the Intergovernmental Panel on Climate Change (IPCC), that act over wide spectra of temporal and spatial scales. In Church et al. (2010) sea-level variability and changes are defined as manifestations of climate variability and change. The European Environment Agency (EEA) defines sea level as one of the most important indicators for monitoring climate change, as it integrates the response of different components of the Earth system and is also affected by anthropogenic contributions (EEA, 2011). The balance between the different sea-level contributions represents an important source of uncertainty, involving stochastic processes that are very difficult to describe and understand in detail, to the point that they are defined as an enigma in Munk (2002). Sea-level rate estimates are affected by all these uncertainties, in particular when looking at the possible responses of the sea-level contributions to future climate. At the regional scale, lateral fluxes also contribute to sea-level variability, adding complexity to sea-level dynamics. The research strategy adopted in this work to approach such an interesting and challenging topic has been to develop an objective methodology to study sea-level variability at different temporal and spatial scales, applicable to each part of the Mediterranean basin in particular, and to the global ocean in general, using all the best calibrated sources of data for the Mediterranean: in-situ measurements, remote sensing and numerical model data. The overall objective of this work was to achieve a deep understanding of all of the components of the sea-level signal contributing to sea-level variability, tendency and trend, and to quantify them.

Relevance:

30.00%

Abstract:

Although the debate over what data science is has a long history and has not yet reached a complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed in the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, showing performance comparable to that of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of the LHC experiments, with a focus on supporting ATLAS operations concerning data transfer failures. In particular, we analyze the error messages produced by failed transfers and propose a machine learning pipeline that leverages the word2vec language model and K-means clustering. This yields groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing promising ability in understanding the message content and providing meaningful groupings, in line with incidents previously reported by human operators.
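
A minimal sketch of the kind of word2vec + K-means pipeline described above might look as follows (the messages, tokenization, vector size and cluster count are illustrative assumptions, not the production ATLAS pipeline):

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Hypothetical transfer-failure messages (illustrative, not real ATLAS logs)
messages = [
    "connection timed out after 120 seconds",
    "connection timed out after 300 seconds",
    "checksum mismatch for source file",
    "checksum verification failed on destination",
    "permission denied for destination path",
    "permission denied while creating directory",
]
tokens = [m.lower().split() for m in messages]

# Train word embeddings on the corpus of tokenized messages
w2v = Word2Vec(tokens, vector_size=32, window=5, min_count=1, seed=0)

# Represent each message as the mean of its word vectors
X = np.array([np.mean([w2v.wv[t] for t in msg], axis=0) for msg in tokens])

# Group similar messages; operators would inspect one representative per cluster
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for label, msg in zip(km.labels_, messages):
    print(label, msg)
```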

Relevance:

30.00%

Abstract:

The Standard Model (SM) of particle physics predicts the existence of a Higgs field responsible for the generation of particle masses. However, some aspects of this theory remain unresolved, suggesting the presence of new physics beyond the Standard Model (BSM), with the production of new particles at energy scales above the current experimental limits. Additional Higgs bosons are, in fact, predicted by theoretical extensions of the SM, including the Minimal Supersymmetric Standard Model (MSSM). In the MSSM, the Higgs sector consists of two Higgs doublets, resulting in five physical Higgs particles: two charged bosons $H^{\pm}$, two neutral scalars $h$ and $H$, and one pseudoscalar $A$. The work presented in this thesis is dedicated to the search for neutral non-Standard-Model Higgs bosons decaying to two muons in a model-independent MSSM scenario. Proton-proton collision data recorded by the CMS experiment at the CERN LHC at a center-of-mass energy of 13 TeV are used, corresponding to an integrated luminosity of $35.9\ \text{fb}^{-1}$. The search is sensitive to neutral Higgs bosons produced either via the gluon fusion process or in association with a $\text{b}\bar{\text{b}}$ quark pair. The extensive use of machine and deep learning techniques is a fundamental element of the discrimination between signal and background simulated events. A new network structure called a parameterised neural network (pNN) has been implemented, replacing a whole set of single neural networks, each trained at a specific mass hypothesis, with a single neural network able to generalise well and interpolate over the entire mass range considered. The results of the pNN signal/background discrimination are used to set a model-independent 95\% confidence level expected upper limit on the production cross section times branching ratio for a generic $\phi$ boson decaying into a muon pair in the 130 to 1000 GeV mass range.
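
A parameterised neural network in the sense described above takes the mass hypothesis as an extra input feature, so a single model covers the whole mass range. A minimal sketch (the architecture, feature count and mass scaling are illustrative assumptions, not the analysis code) could be:

```python
import torch
import torch.nn as nn

class ParameterisedNN(nn.Module):
    """Classifier whose input is (event features, mass hypothesis).

    Trained on a mixture of signal mass points (with background events
    assigned random mass values), it interpolates between mass hypotheses
    at evaluation time instead of needing one network per mass.
    """
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + 1, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit of P(signal)
        )

    def forward(self, x: torch.Tensor, mass: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, mass.unsqueeze(-1)], dim=-1))

# Illustrative usage: score the same dummy events under two mass hypotheses
model = ParameterisedNN(n_features=10)
x = torch.randn(5, 10)                  # dummy event features
for m in (200.0, 800.0):                # GeV, within the 130-1000 range
    scores = torch.sigmoid(model(x, torch.full((5,), m / 1000.0)))  # scaled mass
    print(m, scores.squeeze().tolist())
```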

Relevance:

30.00%

Abstract:

Deep learning methods are extremely promising machine learning tools for analyzing neuroimaging data. However, their potential use in clinical settings is limited by the challenges involved in applying them to neuroimaging data. In this study, first, a type of data leakage caused by a slice-level data split, introduced during the training and validation of a 2D CNN, is surveyed, and a quantitative assessment of the resulting overestimation of the model's performance is presented. Second, an interpretable, leakage-free deep learning software package, written in Python with a wide range of options, has been developed to conduct both classification and regression analyses. The software was applied to the study of mild cognitive impairment (MCI) in patients with small vessel disease (SVD) using multi-parametric MRI data, where the cognitive performance of 58 patients, measured by five neuropsychological tests, is predicted using a multi-input CNN model that takes brain images and demographic data as input. Each of the cognitive test scores was predicted using different MRI-derived features. As MCI due to SVD has been hypothesized to be the effect of white matter damage, the DTI-derived features MD and FA produced the best prediction of the TMT-A score, which is consistent with the existing literature. In a second study, an interpretable deep learning system is developed, aimed at 1) classifying Alzheimer's disease patients and healthy subjects, 2) examining the neural correlates of the cognitive decline in AD patients using CNN visualization tools, and 3) highlighting the potential of interpretability techniques to expose a biased deep learning model. Structural magnetic resonance imaging (MRI) data of 200 subjects were used by the proposed CNN model, which was trained using a transfer-learning-based approach, producing a balanced accuracy of 71.6%. The visualization tools highlighted brain regions in the frontal and parietal lobes showing cerebral cortex atrophy.
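
The slice-level leakage mentioned above arises when 2D slices of the same subject end up in both the training and validation sets; splitting by subject avoids it. A minimal sketch using scikit-learn (the array shapes and variable names are illustrative, not the thesis software):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical setup: 2D slices extracted from 3D scans of 10 subjects
n_subjects, slices_per_subject = 10, 20
subject_ids = np.repeat(np.arange(n_subjects), slices_per_subject)  # group labels
X = np.random.randn(len(subject_ids), 64, 64)                       # dummy slices
y = np.repeat(np.random.randint(0, 2, n_subjects), slices_per_subject)

# Subject-level split: all slices of a given subject fall on the same side,
# so no patient contributes to both training and validation.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, val_idx = next(splitter.split(X, y, groups=subject_ids))

assert not set(subject_ids[train_idx]) & set(subject_ids[val_idx])
print(f"train slices: {len(train_idx)}, val slices: {len(val_idx)}")
```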

Relevance:

30.00%

Abstract:

The development of Next Generation Sequencing has brought Biology into the Big Data era. The ever-increasing gap between proteins with known sequences and those with a complete functional annotation requires computational methods for automatic structural and functional annotation. My research has focused on proteins and has so far led to the development of three novel tools, DeepREx, E-SNPs&GO and ISPRED-SEQ, based on machine and deep learning approaches. DeepREx computes the solvent exposure of residues in a protein chain. This problem is relevant for the definition of structural constraints on the possible folding of the protein. DeepREx exploits Long Short-Term Memory layers to capture residue-level interactions between positions distant in the sequence, achieving state-of-the-art performance. With DeepREx, I conducted a large-scale analysis investigating the relationship between the solvent exposure of a residue and its probability of being pathogenic upon mutation. E-SNPs&GO predicts the pathogenicity of a Single Residue Variation. Variations occurring in a protein sequence can have different effects, possibly leading to the onset of diseases. E-SNPs&GO exploits protein embeddings generated by two novel Protein Language Models (PLMs), as well as a new way of representing functional information derived from the Gene Ontology. The method achieves state-of-the-art performance and is extremely time-efficient compared to traditional approaches. ISPRED-SEQ predicts the presence of protein-protein interaction sites in a protein sequence. Knowing how a protein interacts with other molecules is crucial for its accurate functional characterization. ISPRED-SEQ exploits a convolutional layer to parse the local context after embedding the protein sequence with two novel PLMs, greatly surpassing the current state of the art. All the methods are published in international journals and are available as user-friendly web servers. They have been developed keeping in mind standard guidelines for FAIRness (FAIR: Findable, Accessible, Interoperable, Reusable) and are integrated into the public collection of tools provided by ELIXIR, the European infrastructure for bioinformatics.
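
As an illustration of the kind of recurrent architecture DeepREx is described as using (the sizes, integer encoding and output head here are assumptions for the sketch, not the published model), a per-residue exposure classifier might look like:

```python
import torch
import torch.nn as nn

class ResidueExposureNet(nn.Module):
    """Per-residue binary classifier (buried vs exposed) over a protein chain.

    A bidirectional LSTM lets the prediction at each position depend on
    residues that are distant along the sequence.
    """
    def __init__(self, n_amino_acids: int = 20, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_amino_acids, 32)
        self.bilstm = nn.LSTM(32, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)    # one logit per residue

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        h, _ = self.bilstm(self.embed(seq))      # (batch, length, 2*hidden)
        return self.head(h).squeeze(-1)          # (batch, length)

# Illustrative usage on a dummy 120-residue chain
model = ResidueExposureNet()
seq = torch.randint(0, 20, (1, 120))             # integer-encoded residues
exposure_prob = torch.sigmoid(model(seq))
print(exposure_prob.shape)                       # torch.Size([1, 120])
```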

Relevance:

30.00%

Abstract:

The cation chloride cotransporters (CCCs) are a vital family of ion transporters, with several members implicated in significant neurological disorders. Specifically, conditions such as cerebrospinal fluid accumulation, epilepsy, Down's syndrome, Asperger's syndrome and certain cancers have been attributed to various CCCs. This thesis investigates these pharmacological targets using advanced computational methodologies. I primarily employed GPU-accelerated all-atom molecular dynamics simulations, deep-learning-based collective variables, enhanced sampling methods and custom Python scripts for comprehensive simulation analyses. My research predominantly centered on the KCC1 and NKCC1 transporters. For KCC1, I examined its equilibrium dynamics in the presence and absence of an inhibitor and assessed the functional implications of different ion-loading states. In contrast, the work on NKCC1 revealed its unique alternating-access mechanism, termed the rocking-bundle mechanism. I identified a previously unobserved occluded state, demonstrated the transporter's potential for water permeability under specific conditions, and confirmed actual water flow through its permeable states. In essence, this thesis leverages cutting-edge computational techniques to deepen our understanding of the CCCs, a family of ion transporters of profound clinical significance.
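
To sketch what a deep-learning-based collective variable (CV) can look like in general (this is a generic classifier-based CV on dummy data; the input features, sizes and training scheme are assumptions, not the methods actually used in the thesis):

```python
import torch
import torch.nn as nn

class DeepCV(nn.Module):
    """Small network mapping structural features to a scalar, differentiable CV."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.Tanh(),
            nn.Linear(32, 1),            # scalar CV value
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats).squeeze(-1)

# Dummy training data: feature vectors (e.g., pairwise distances) labelled by
# conformational state (0 = inward-facing, 1 = outward-facing); in practice
# these would come from unbiased MD snapshots.
X = torch.randn(200, 45)
y = (X[:, 0] > 0).float()

cv = DeepCV(n_features=45)
opt = torch.optim.Adam(cv.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(cv(X), y)   # train the CV to separate the two states
    loss.backward()
    opt.step()

# The trained network is differentiable, so biasing forces for enhanced
# sampling can be obtained by backpropagating through the CV.
print(loss.item())
```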

Relevance:

30.00%

Abstract:

The abundance of visual data and the push for robust AI are driving the need for automated visual sensemaking. Computer Vision (CV) faces a growing demand for models that can discern not only what images "represent," but also what they "evoke." This is a demand for tools mimicking human perception at a high semantic level, categorizing images based on concepts like freedom, danger, or safety. However, automating this process is challenging due to entropy, scarcity, subjectivity, and ethical considerations. These challenges not only impact performance but also underscore the critical need for interpretability. This dissertation focuses on abstract-concept-based (AC) image classification, guided by three technical principles: situated grounding, performance enhancement, and interpretability. We introduce ART-stract, a novel dataset of cultural images annotated with ACs, serving as the foundation for a series of experiments across four key domains: assessing the effectiveness of the end-to-end DL paradigm, exploring cognitive-inspired semantic intermediaries, incorporating cultural and commonsense aspects, and the neuro-symbolic integration of sensory-perceptual data with cognitive-based knowledge. Our results demonstrate that integrating CV approaches with semantic technologies yields methods that surpass the current state of the art in AC image classification, outperforming the end-to-end deep vision paradigm. The results emphasize the role semantic technologies can play in developing systems that are both effective and interpretable, through the capturing, situating, and reasoning over knowledge related to visual data. Furthermore, this dissertation explores the complex interplay between technical and socio-technical factors. By merging technical expertise with an understanding of human and societal aspects, we advocate for responsible labeling and training practices in visual media. These insights and techniques not only advance efforts in CV and explainable artificial intelligence but also propel us toward an era of AI development that harmonizes technical prowess with a deep awareness of its human and societal implications.