25 results for Top-down Control
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In recent decades, global food supply chains have had to deal with the growing awareness of stakeholders and consumers about safety, quality, and sustainability. To address these new challenges for food supply chain systems, an integrated approach to designing, controlling, and optimizing the product life cycle is required. It is therefore essential to introduce new models, methods, and decision-support platforms tailored to perishable products. This thesis aims to provide novel, practice-ready decision-support models and methods to optimize the logistics of food items with an integrated and interdisciplinary approach. It proposes a comprehensive review of the main peculiarities of perishable products and of the environmental stresses accelerating their quality decay. It then focuses on top-down strategies to optimize the supply chain system from the strategic to the operational decision level. Based on the criticality of the environmental conditions, the dissertation evaluates the main long-term logistics investment strategies to preserve product quality. Several models and methods are proposed to optimize logistics decisions and enhance the sustainability of the supply chain system while guaranteeing adequate food preservation. The models and methods proposed in this dissertation promote a climate-driven approach, integrating climate conditions and their consequences for product quality decay into innovative models that support logistics decisions. Given the uncertain nature of the environmental stresses affecting the product life cycle, an original stochastic model and solving method are proposed to support practitioners in controlling and optimizing supply chain systems under uncertain scenarios. The application of the proposed decision-support methods to real case studies proved their effectiveness in increasing the sustainability of the perishable product life cycle.
The dissertation also presents an industry application of a global food supply chain system, further demonstrating how the proposed models and tools can be integrated to provide significant savings and sustainability improvements.
Abstract:
A prevalent claim is that we are in a knowledge economy. By knowledge economy we generally mean a "knowledge-based economy", indicating the use of knowledge and technologies to produce economic benefits. Knowledge is thus both a tool and a raw material (people's skills) for producing products or services. In this kind of environment, economic organization is undergoing several changes. For example, authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only a few constraints on the set of coordination mechanisms. What characterises a knowledge economy is therefore the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the problem of the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may prove increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because 'the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better' (Hodgson, 1999). We therefore expect that the organization of the economic activity of specialists should be, at least partially, self-organized. The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) networks, to organization theories.
We think that the P2P paradigm fits well with organization problems related to all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies. There are three main characteristics we think P2P networks have in common with firms involved in the knowledge economy: - Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers; - Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them; - Self-organization: the process in a system leading to the emergence of global order within the system without the presence of another system dictating this order. These characteristics are also present in the kind of firm that we try to address, and that is why we have shifted the techniques we adopted for studies in computer science (Marcozzi et al., 2005; Hales et al., 2007) to management science.
Abstract:
Nanoscience is an emerging and fast-growing field of science that aims to manipulate nanometric objects with dimensions below 100 nm. The top-down approach is currently used to build these types of architectures (e.g., microchips). The miniaturization process cannot proceed indefinitely, due to physical and technical limitations. These limits are focusing interest on the bottom-up approach and on the construction of nano-objects starting from "nano-bricks" such as atoms, molecules, or nanocrystals. Unlike atoms, molecules can be "fully programmable" and represent the best choice for building up nanostructures. In the past twenty years many examples of functional nano-devices able to perform simple actions have been reported. Nanocrystals, which are often considered simply nanostructured materials, can be an active part in the development of those nano-devices, in combination with functional molecules. The object of this dissertation is the photophysical and photochemical investigation of nano-objects comprising molecules and semiconductor nanocrystals (QDs) as components. The first part focuses on the characterization of a bistable rotaxane. This study, in collaboration with the group of Prof. J.F. Stoddart (Northwestern University, Evanston, Illinois, USA), who carried out the synthesis of the compounds, shows the ability of this artificial machine to operate as a bistable molecular-level memory under kinetic control. The second part concerns the study of the surface properties of luminescent semiconductor nanocrystals (QDs), and in particular the effect of acids and bases on the spectroscopic properties of those nanoparticles. This section also reports the work carried out in the laboratory of Prof. H. Mattoussi (Florida State University, Tallahassee, Florida, USA), where I developed a novel method for the surface decoration of QDs with lipoic acid-based ligands involving the photoreduction of the dithiolane moiety.
Abstract:
Asset Management (AM) is a set of procedures, operable at the strategic, tactical, and operational levels, for managing a physical asset's performance, associated risks, and costs over its whole life cycle. AM combines engineering, managerial, and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the rehabilitation techniques is rather abstruse. It is a truism that rehabilitating an asset too early is unwise, just as doing it late may entail extra expenses en route, in addition to the cost of the rehabilitation itself. One is confronted with a typical 'Hamlet-esque dilemma': 'to repair or not to repair'; or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning.
The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investment in replacing an asset and operational expenditure on maintaining it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison between maintenance and replacement expenditures. The costs of maintaining the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, and risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides insight into the various definitions of 'asset lifetime': service life, economic life, and physical life. The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include in estimating maintenance costs for the existing assets, the more precise the estimation of the expected service life will be.
The ability to include social costs makes it possible to compute the asset life based not only on its physical characterisation but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is its effort to demonstrate that it is possible to include in decision-making factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration, and the asset's depreciation rate, without regarding age as the sole criterion for replacement decisions.
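The maintain-or-replace comparison described above can be sketched numerically: replace the pipe in the first year in which its annual operating, maintenance, and risk costs exceed the annuity of the investment in an equivalent new pipe. A minimal illustration (all cost figures and parameters are hypothetical, not from the Oslo case study):

```python
def annuity(investment, rate, lifetime_years):
    """Equivalent annual cost of a capital investment (capital recovery factor)."""
    crf = rate * (1 + rate) ** lifetime_years / ((1 + rate) ** lifetime_years - 1)
    return investment * crf

def optimal_replacement_year(om_costs, risk_costs, investment, rate, lifetime_years):
    """First year in which maintaining the old asset costs more than the
    annuity of a new equivalent one; None if maintaining stays cheaper."""
    new_annuity = annuity(investment, rate, lifetime_years)
    for year, (om, risk) in enumerate(zip(om_costs, risk_costs), start=1):
        if om + risk > new_annuity:
            return year
    return None

# Hypothetical cost trajectories for an ageing pipe (EUR/year)
om = [800, 900, 1050, 1250, 1500, 1800]
risk = [100, 150, 250, 400, 650, 1000]
year = optimal_replacement_year(om, risk, investment=20000, rate=0.04, lifetime_years=40)
print(year)  # -> 2: rising O&M plus risk cost crosses the ~1010 EUR/year annuity
```

The risk-cost term is what lets the decision reflect declining condition rather than age alone, as the abstract argues.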
Abstract:
The globalization process of the last twenty years has changed the world through international flows of people, policies, and practices. International development cooperation is part of that process, and it brought International Organizations (IOs) and Non-Governmental Organizations (NGOs) from the West to the rest of the world. In my thesis I analyze the Italian NGOs that worked in Bosnia and Herzegovina (BH) to understand which development projects they carried out and how they faced the ethnic issue that characterized BH. I consider the relationship shaped between Italian NGOs and Bosnian civil society as an object of ethnic interests. In BH, once part of former Yugoslavia, the transition from the communist regime to a democratic country has not been completed. BH's social conditions are characterized by strong ethnic divisions. The legacy of the early 1990s crisis is a phenomenon of ethnic identities created before the war that still endure today. The Dayton Peace Agreement signed in 1995 granted peace but reinforced the inter-ethnic hatred between the three newly recognized principal ethnicities: Serbs, Croats, and Bosniaks. Through the new constitution, the institutions were characterized by division at every level, from the top to the bottom of society. Moreover, it was the first constitution ever written and signed outside the country itself; that was the root of the state of exception that characterized BH. Thus the culture of ethnic identities survived through international political involvement. At the same time, the ethnic groups that dominated the political debate clashed with the international organizations' democratic purpose of building a multicultural and democratic state. Ethnic and also religious differences were instruments for a national statement that might cause transition and development projects to fail. Fifteen years later, social fragmentation was still present and established an atmosphere of daily cultural violence.
Civil society suffered this condition and tended to recreate the ethnic fragmentation in everyday life. Some cities became physically divided, and others did not tolerate the presence of minorities. In rural areas the division was more explicit, from village to village, without integration. In my discussion, the anthropology of development, a field derived from applied anthropology, constitutes the point of view I used to understand how ethnic identities still influenced the development process in BH. I conducted ethnographic research on the Italian development cooperation projects that were operating there in 2007. The targets of the research were the Italian NGOs that created a relationship with Bosnian civil society; there were almost twenty of them, divided into four main fields of competence: institution building, education, agriculture, and democratization. I assumed that NGOs' work needed in-depth study because the bottom of society is the place where people can really change their representations and behavior. Italian NGOs operated in BH with the aim of creating sustainable development. They encountered the cultural barricades that both institutions and civil society erected when development projects were applied. Ethnic and religious differences were stressed to maintain boundaries and fragment power. NGOs thus tried to negotiate development projects through social integration. I found that NGOs worked among ethnic groups by pursuing a new integration. They often gained success among people; civil society was ready to accept development projects and overcome differences. On the other hand, NGOs were limited by a political level that sustained the ethnic discourse, and by their own representation of the Bosnian issue. Development policies were thus impeded by the ethnic issue and by cooperation practices established from a top-down perspective.
Paradoxically, since the international community had approved the political ethnic division within the DPA, the push for development pursued by funding NGO cooperation projects was not completely successful.
Abstract:
Most current ultra-miniaturized devices are obtained by the top-down approach, in which nanoscale components are fabricated by cutting down larger precursors. Since this physical-engineering method is reaching its limits, especially for components below 30 nm in size, alternative strategies are necessary. Of particular appeal to chemists is the supramolecular bottom-up approach to nanotechnology, a methodology that utilizes the principles of molecular recognition to build materials and devices from molecular components. The subject of this thesis is the photophysical and electrochemical investigation of nanodevices obtained by harnessing the principles of supramolecular chemistry. These systems operate in solution-based environments and are investigated at the ensemble level. The majority of the chemical systems discussed here are based on pseudorotaxanes and catenanes. Such supramolecular systems represent prototypes of molecular machines, since they are capable of performing simple controlled mechanical movements. Their properties and operation are strictly related to the supramolecular interactions between molecular components (generally photoactive or electroactive molecules) and to the possibility of modulating such interactions by means of external stimuli. The main issues addressed throughout the thesis are: (i) the analysis of the factors that can affect the architecture and perturb the stability of supramolecular systems; (ii) the possibility of controlling the direction of supramolecular motions by exploiting the molecular information content; (iii) the development of switchable supramolecular polymers starting from simple host-guest complexes; (iv) the capability of some molecular machines to process information at the molecular level, thus behaving as logic devices; (v) the behaviour of molecular machine components in a biological-type environment; (vi) the study of chemically functionalized metal nanoparticles by second harmonic generation spectroscopy.
Abstract:
This thesis studies local public policies, and in particular social policies, which since 2011 have become exclusively territorial policies. The objective is to verify whether the different political orientation of administrations generates different policies. To test the hypotheses, two municipalities were chosen that are similar in terms of socio-economic variables but governed by councils with different political orientations: the Municipality of Modena, led by the Partito Democratico, and the Municipality of Verona, with a Lega Nord mayor heading a centre-right council. The first part presents and analyses the main paradigms for the study of policies (rational choice, the Marxist paradigm, welfare economics, corporatism and pluralism, neo-institutionalism, and the relational paradigm) and introduces the paradigm that will be used for the policy analysis (the relational paradigm). The empirical part proceeded through in-depth interviews with the two Councillors for Social Policies and the two municipal Directors of the municipalities, and with 18 third-sector organizations engaged in the construction of the policies, selected through snowball sampling. The regulatory provisions on social policy are analysed, for both regional and municipal legislation. The data analysis confirmed the research hypothesis, in the sense that political orientation produces different policies as regards the relationship between public administration and the third sector. For Modena one can speak of a choice to outsource services, accompanied by a process of internalizing services through the ASPs; in Verona, at least in some policy areas (disability and the elderly), processes of subsidiarity and governance were implemented. In the programming phase, political orientation has less influence, and programming shows "top-down" characteristics.
Abstract:
MFA and LCA methodologies were applied to analyse the anthropogenic aluminium cycle in Italy, focusing on the historical evolution of stocks and flows of the metal, embodied GHG emissions, and recycling potentials, in order to provide Italy with key elements for prioritizing industrial policy toward low-carbon technologies and materials. Historical time series were collected from 1947 to 2009 and balanced with data from the production, manufacturing, and waste management of aluminium-containing products, using a 'top-down' approach to quantify the contemporary in-use stock of the metal and helping to identify 'applications where aluminium is not yet being recycled to its full potential and to identify present and future recycling flows'. The MFA results were used as a basis for the LCA, aimed at evaluating the evolution of the carbon footprint embodied in Italian aluminium, from primary and electrical energy, the smelting process, and transportation. The study also discusses how the main factors, according to the Kaya Identity equation, influenced the Italian GHG emissions pattern over time, and which levers can mitigate it. The contemporary anthropogenic reservoir of aluminium was estimated at about 320 kg per capita, mainly embedded within the transportation and the building and construction sectors. The cumulative in-use stock represents approximately 11 years of supply at current usage rates (about 20 Mt versus 1.7 Mt/year), and it would imply a potential of about 160 Mt of CO2eq emission savings. A discussion of the criticality of aluminium waste recovery from the transportation and the containers and packaging sectors is also included in the study, providing an example of how MFA and LCA may support decision-making at the sectoral or regional level. The research constitutes the first attempt at an integrated approach between MFA and LCA applied to the aluminium cycle in Italy.
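The 'top-down' stock accounting used in such an MFA rests on a simple mass balance: each year, the in-use stock grows by the inflow of metal entering use and shrinks by the outflow of products reaching end of life. A minimal sketch with hypothetical annual series (illustrative numbers, not the thesis data):

```python
def in_use_stock(inflows, outflows, initial_stock=0.0):
    """Top-down mass balance: stock(t) = stock(t-1) + inflow(t) - outflow(t)."""
    stock = initial_stock
    series = []
    for inflow, outflow in zip(inflows, outflows):
        stock += inflow - outflow
        series.append(stock)
    return series

# Hypothetical inflows into use and end-of-life outflows (Mt/year)
inflows = [1.0, 1.2, 1.5, 1.7, 1.7]
outflows = [0.3, 0.4, 0.5, 0.6, 0.7]
stocks = in_use_stock(inflows, outflows)
print(stocks[-1])               # cumulative in-use stock after the last year
print(stocks[-1] / inflows[-1])  # years of supply at the current usage rate
```

The same balance, run over the full 1947-2009 series with sector-specific lifetimes for the outflow term, yields the per-capita stock and years-of-supply figures reported in the abstract.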
Abstract:
People are daily faced with intertemporal choices, i.e., choices differing in the timing of their consequences, and frequently prefer smaller-sooner rewards over larger-delayed ones, reflecting temporal discounting of the value of future outcomes. This dissertation addresses two main goals. First, new evidence about the neural bases of intertemporal choice is provided. Following damage to either the medial orbitofrontal cortex or the insula, the willingness to wait for larger-delayed outcomes is affected in opposite directions, suggesting the causal involvement of these areas in regulating the value computation of rewards available at different timings. These findings are also supported by an imaging study reported here. Moreover, this dissertation provides new evidence about how temporal discounting can be modulated at the behavioral level through different manipulations, e.g., allowing individuals to think about the distant future, pairing rewards with aversive events, or changing their perceived spatial position. A relationship between intertemporal choice, moral judgements, and aging is also discussed. All these findings link together to support a unitary neural model of temporal discounting according to which signals coming from several cortical (i.e., medial orbitofrontal cortex, insula) and subcortical regions (i.e., amygdala, ventral striatum) are integrated to represent the subjective value of both earlier and later rewards, under the top-down regulation of the dorsolateral prefrontal cortex. The present findings also support the idea that the process of outcome evaluation is strictly related to the ability to pre-experience and envision future events through self-projection, to the anticipation of visceral feelings associated with receiving rewards, and to the psychological distance from rewards.
Furthermore, taking into account the emotions and the state of arousal at the time of decision seems necessary to understand impulsivity associated with preferring smaller-sooner goods in place of larger-later goods.
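Temporal discounting of this kind is commonly modeled with a hyperbolic function, V = A / (1 + kD), where A is the reward amount, D the delay, and k an individual impulsivity parameter; a smaller-sooner reward is chosen whenever its discounted value exceeds that of the larger-later one. A minimal sketch of the standard model (the k values are illustrative, not estimates from the dissertation):

```python
def discounted_value(amount, delay, k):
    """Hyperbolic discounting: subjective value of `amount` received after `delay`."""
    return amount / (1 + k * delay)

def prefers_sooner(small, d_small, large, d_large, k):
    """True if the smaller-sooner reward is subjectively worth more than the larger-later one."""
    return discounted_value(small, d_small, k) > discounted_value(large, d_large, k)

# An impulsive chooser (high k) takes 20 now over 50 in 30 days...
print(prefers_sooner(20, 0, 50, 30, k=0.2))   # True: 20 > 50/(1 + 0.2*30) ~= 7.1
# ...while a patient chooser (low k) waits for the larger reward.
print(prefers_sooner(20, 0, 50, 30, k=0.01))  # False: 20 < 50/(1 + 0.01*30) ~= 38.5
```

In this framing, the lesion and manipulation effects described above amount to shifting k up or down for the same choice set.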
Abstract:
One of the main problems recognized in the sustainable development goals and in sustainable agricultural objectives is climate change. Farming contributes significantly to the overall greenhouse gases (GHG) in the atmosphere, accounting for approximately 10-12 percent of total GHG emissions; when land-use change is also taken into consideration, including deforestation driven by agricultural expansion for food, fiber, and fuel, the figure rises to approximately 30 percent (Smith et al., 2007). There are two distinct methodological approaches to environmental impact assessment: Life Cycle Assessment (a bottom-up approach) and Input-Output Analysis (a top-down approach). The two methodologies differ significantly, but there is no immediate choice between them when the scope of the study is at a sectoral level. Instead, as an alternative, hybrid approaches combining the two have emerged. The aim of this study is to analyze in greater detail the agricultural sector's contribution to climate change caused by the consumption of food products: to identify the food products that have the greatest impact through their life cycle, identify their hotspots, and evaluate the mitigation possibilities for them, while at the same time evaluating the methodological possibilities and models to be applied for this purpose both at the EU level and at the country level (Italy).
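The top-down Input-Output side of that comparison attributes sectoral emissions to final consumption through the Leontief inverse: e = f (I - A)^(-1) y, where A is the inter-industry coefficient matrix, y the final demand vector, and f the emission intensity per unit output of each sector. A toy two-sector sketch (all coefficients hypothetical):

```python
def leontief_emissions(A, f, y):
    """Emissions embodied in final demand y: f * (I - A)^(-1) * y, for a 2x2 A."""
    # Invert (I - A) for the 2x2 case by hand
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    # Total output x = (I - A)^(-1) y, including all indirect requirements
    x = [inv[0][0] * y[0] + inv[0][1] * y[1],
         inv[1][0] * y[0] + inv[1][1] * y[1]]
    # Emission intensity times total output, summed over sectors
    return f[0] * x[0] + f[1] * x[1]

# Hypothetical 2-sector economy: agriculture and food processing
A = [[0.1, 0.3],   # agricultural inputs per unit output of each sector
     [0.0, 0.2]]   # processing inputs per unit output of each sector
f = [2.0, 0.5]     # tCO2eq per unit of sectoral output
y = [10.0, 20.0]   # final demand for each sector's products
print(leontief_emissions(A, f, y))
```

A hybrid approach of the kind mentioned above would replace selected coefficients of this top-down accounting with process-level LCA data for the food products of interest.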
Abstract:
The main areas of research of this thesis are interference management and link-level power efficiency for satellite communications. The thesis is divided into two parts. Part I tackles the problem of interference environments in satellite communications and of interference mitigation strategies, not just in terms of avoidance of the interferers, but also in terms of actually exploiting the interference present in the system as a useful signal. The analysis follows a top-down approach across different levels of investigation, starting from system-level considerations on interference management, down to link-level aspects and intra-receiver design. Interference management techniques are proposed at all levels of investigation, with interesting results. Part II is related to efficiency in the power domain, for instance in terms of the required input back-off at the power amplifiers, which can be an issue for waveforms based on linear modulations, due to their varying envelope. To cope with such aspects, an analysis is carried out to compare linear modulations with waveforms based on constant-envelope modulations. It is shown that in some scenarios constant-envelope waveforms, even at lower spectral efficiency, outperform linear modulation waveforms in terms of energy efficiency.
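The back-off issue in Part II stems from the peak-to-average power ratio (PAPR) of the transmitted waveform: a varying envelope forces the amplifier to operate backed off from saturation, wasting power. A minimal illustration comparing the symbol-level PAPR of a linear modulation (16-QAM) with a constant-envelope constellation (8-PSK); pulse-shaping effects, which raise the PAPR of real waveforms further, are ignored for simplicity:

```python
import math

def papr_db(symbols):
    """Peak-to-average power ratio of a set of complex constellation points, in dB."""
    powers = [abs(s) ** 2 for s in symbols]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

# 16-QAM: point amplitudes vary, so the envelope does too
qam16 = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]
# 8-PSK: all points on the unit circle (constant envelope at symbol level)
psk8 = [complex(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
        for k in range(8)]

print(round(papr_db(qam16), 2))  # ~2.55 dB of peak excursion: the amplifier must back off
print(round(papr_db(psk8), 2))   # ~0 dB: the amplifier can run near saturation
```

This gap is the mechanism behind the energy-efficiency advantage of constant-envelope waveforms noted in the abstract, despite their lower spectral efficiency.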
Abstract:
The close relationship between climate change and anthropogenic influence has long drawn attention to the greenhouse effect and global warming, as well as to the increase in atmospheric concentrations of climatically active gases, first of all CO2. Radiocarbon is currently the environmental tracer par excellence, able to provide, through a 'top-down' approach, a valid control tool to discriminate and quantify the fossil or biogenic origin of the carbon dioxide present in the atmosphere. Alongside the traditional application areas of 14C, such as archaeometric dating, new fields are thus emerging, linked on the one hand to the energy sector, concerning plant emissions, fuels, and the geological storage of CO2, and on the other to the fast-growing market of so-called bio-based products made from renewable raw materials. In this thesis, the world of radiocarbon was therefore explored both from a strictly technical and methodological point of view and from an applicative point of view, across these many and diversified fields of investigation. An analysis system based on the radiometric method, using direct absorption of CO2 and liquid scintillation counting, was built and validated, with technological improvements and procedural refinements aimed at improving the performance of the method in terms of simplicity, sensitivity, and reproducibility. Although the method generally represents a good compromise with respect to the methodologies traditionally used for 14C analysis, it is at present still inadequate for those application areas where very high precision is required, but it is competitive for the analysis of modern samples with a high concentration of 14C.
Finally, the experiments conducted on some ionic liquids, although preliminary and not conclusive, open new lines of research on the possibility of using this new class of compounds as media for CO2 capture and 14C analysis by LSC.
Abstract:
The present dissertation aims at analyzing the construction of American adolescent culture through teen-targeted television series and the shift in perception that occurs as a consequence of the translation process. In light of the recent changes in television production and consumption modes, largely caused by new technologies, this project explores the evolution of Italian audiences, focusing on fansubbing (freely distributed amateur subtitles made by fans for fan consumption) and social viewing (the re-aggregation of television consumption based on social networks and dedicated platforms, rather than on physical presence). These phenomena are symptoms of a sort of 'viewership 2.0' and of a new type of active viewing, which calls for a revision of traditional AVT strategies. Using a framework that combines television studies, new media studies, and fandom studies with an approach to AVT based on Descriptive Translation Studies (Toury 1995), this dissertation analyzes the non-Anglophone audience's growing need for participation in the global dialogue and in an appropriation process based on US scheduling and informed by the new paradigms of convergence culture, transmedia storytelling, and affective economics (Jenkins 2006 and 2007), as well as the constraints intrinsic to multimodal translation and the different types of linguistic and cultural adaptation performed through dubbing (which tends to be more domesticating; Venuti 1995) and fansubbing (typically more foreignizing).
The study analyzes a selection of episodes from six of the most popular teen television series broadcast between 1990 and 2013, divided into three ages based on the different modes of television consumption: top-down, pre-Internet consumption (Beverly Hills, 90210, 1990-2000); the emergence of audience participation (Buffy the Vampire Slayer, 1997-2003; Dawson's Creek, 1998-2003); and the age of convergence and Viewership 2.0 (Gossip Girl, 2007-2012; Glee, 2009-present; The Big Bang Theory, 2007-present).
Abstract:
This thesis is composed of a collection of works written in the period 2019-2022, whose aim is to find Artificial Intelligence (AI) and Machine Learning methodologies to detect and classify patterns and rules in argumentative and legal texts. We define our approach as "hybrid", since we aimed at designing hybrid combinations of symbolic and sub-symbolic AI, involving both "top-down" structured knowledge and "bottom-up" data-driven knowledge. A first group of works is dedicated to the classification of argumentative patterns. Following the Waltonian model of argument and the related theory of Argumentation Schemes, these works focused on the detection of argumentative support and opposition, showing that argumentative evidence can be classified at fine-grained levels without resorting to highly engineered features. To show this, our methods involved not only traditional approaches such as TF-IDF, but also some novel methods based on Tree Kernel algorithms. After the encouraging results of this first phase, we explored the use of some emerging methodologies, promoted by actors like Google, that have deeply changed NLP since 2018-19: Transfer Learning and language models. These new methodologies markedly improved our previous results, providing us with best-performing NLP tools. Using Transfer Learning, we also performed a Sequence Labelling task to recognize the exact span of argumentative components (i.e., claims and premises), thus connecting portions of natural language to portions of arguments (i.e., to the logical-inferential dimension). The last part of our work was dedicated to the use of Transfer Learning methods for the detection of rules and deontic modalities. In this case, we explored a hybrid approach that combines structured knowledge coming from two LegalXML formats (Akoma Ntoso and LegalRuleML) with sub-symbolic knowledge coming from pre-trained (and then fine-tuned) neural architectures.
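TF-IDF, the baseline representation mentioned above, weights a term by its frequency within a document against its rarity across the corpus; argument-mining pipelines typically feed such vectors to a linear classifier. A minimal sketch of the weighting itself, using a smoothed idf variant (toy sentences, not the thesis corpus):

```python
import math

def tfidf(docs):
    """Map each tokenized document to {term: tf * idf}, with smoothed idf."""
    n = len(docs)
    # Document frequency: in how many documents each term appears
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            idf = math.log((1 + n) / (1 + df[term])) + 1  # smoothed, always > 0
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

docs = [
    "the premise supports the claim".split(),
    "the evidence attacks the claim".split(),
    "we went for a walk".split(),
]
vecs = tfidf(docs)
# "claim" appears in two documents, "premise" in only one: within the same
# document, the rarer term receives the higher weight.
print(vecs[0]["premise"] > vecs[0]["claim"])  # True
```

Tree Kernel and Transfer Learning methods replace these sparse counts with structural and contextual representations, which is what drove the improvements the abstract reports.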
Abstract:
Salient stimuli, such as sudden changes in the environment or emotional stimuli, generate a priority signal that captures attention even when they are task-irrelevant. However, to achieve goal-driven behavior, we need to ignore them and avoid being distracted. It is generally agreed that top-down factors can help us filter out distractors. A fundamental question is how, and at which stage of processing, the rejection of distractors is achieved. Two circumstances under which the allocation of attention to distractors is thought to be prevented are when distractors occur at an unattended location (as determined by the deployment of endogenous spatial attention) and when the amount of visual working memory resources is reduced by an ongoing task. The present thesis focuses on the impact of these factors on three sources of distraction, namely auditory and visual onsets (Experiments 1 and 2, respectively) and pleasant scenes (Experiment 3). In the first two studies we recorded neural correlates of distractor processing (i.e., Event-Related Potentials), whereas in the last study we used interference effects on behavior (i.e., a slowing of response times on a simultaneous task) to index distraction. Endogenous spatial attention reduced distraction by auditory stimuli and eliminated distraction by visual onsets. In contrast, visual working memory load only affected the processing of visual onsets. Emotional interference persisted even when scenes always occurred at unattended locations and when visual working memory was loaded. Altogether, these findings indicate that the ability to detect the location of salient task-irrelevant sounds and to identify the affective significance of natural scenes is preserved even when the amount of visual working memory resources is reduced by an ongoing task and when endogenous attention is directed elsewhere.
However, these results also indicate that the processing of auditory and visual distractors is not entirely automatic.