20 results for "Performance evolution due time"
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Increasingly stringent exhaust emission limits and higher fuel economy targets are the main drivers of the engine development process. As a consequence, the complexity of propulsion units and their subsystems increases, due to the extensive use of sensors and actuators needed to obtain precise control over the combustion phase. Since the engine calibration process consumes most of the development time, new tools and methodologies are needed to shorten development time and increase the attainable performance. Real-time combustion analysis, based on the in-cylinder pressure signal, can significantly improve the calibration of engine control strategies and the development of new algorithms, giving instantaneous feedback on engine behavior. A complete combustion analysis and diagnosis system has been developed, capable of evaluating the most important indicators of the combustion process, such as indicated mean effective pressure, heat release, mass fraction burned and knock indexes. The tool is built on top of a flexible, modular and affordable hardware platform that satisfies the accuracy and precision requirements, while its small form factor also enables use directly on board the vehicle.
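For reference, the indicated mean effective pressure cited above is conventionally obtained as the cyclic integral of in-cylinder pressure over volume divided by the displaced volume. A minimal sketch, assuming a sampled, closed p-V loop per cycle (array and function names are illustrative, not those of the thesis tool):

```python
import numpy as np

def imep(pressure_pa, volume_m3, displacement_m3):
    """Indicated mean effective pressure (Pa) of one engine cycle.

    pressure_pa, volume_m3: in-cylinder traces sampled over a full
    cycle, with the last sample closing the loop back to the first.
    """
    # Trapezoidal approximation of the cyclic integral of p dV
    work_j = np.sum(0.5 * (pressure_pa[1:] + pressure_pa[:-1])
                    * np.diff(volume_m3))
    return work_j / displacement_m3
```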
Abstract:
The thesis analyses the relationships between agricultural development processes and the use of natural resources, particularly energy resources, at the international (developing and developed countries), national (Italy), regional (Emilia-Romagna) and farm level, with the aim of assessing the eco-efficiency of agricultural development processes, its evolution over time and its main dynamics, also in relation to the problems of dependence on fossil resources, food security, and the substitution between agricultural areas dedicated to human food and to animal feed. For the two case studies at the macroeconomic level, the methodology called SUMMA, SUstainability Multi-method, multi-scale Assessment (Ulgiati et al., 2006), was adopted; it integrates a series of impact categories from life-cycle assessment (LCA), cost-benefit evaluations and the global analytical perspective of emergy accounting. The large-scale analysis was further enriched by a local-scale case study of a farm producing milk and renewable electricity (photovoltaics and biogas). This study, conducted by means of LCA and contingent valuation, assessed the environmental, economic and social effects of scenarios for reducing dependence on fossil sources. The macroeconomic case studies show that, despite policies supporting increased efficiency and "green" forms of production, agriculture at the global level continues to evolve with growing dependence on fossil energy sources. The first effects of the EU's agricultural policies towards greater sustainability nevertheless seem to be emerging in the European countries. Overall, the energy footprint remains high, because the continuing mechanization of agricultural processes must necessarily draw on energy sources that substitute for human labour. Agricultural land is decreasing in the European countries analysed and in Italy, increasing the risks of food insecurity, since the national population is instead growing.
Abstract:
In this research work, the electrochemical synthesis of LDHs as catalytic precursors on FeCrAlY foams was optimized. Preliminary syntheses were performed on flat surfaces in order to easily characterize the deposited material. From the study of pH evolution vs. time at different cathodic potentials applied to a Pt electrode, the theoretically best working conditions for the synthesis of single hydroxides and LDH compounds were identified. In order to define the optimal potential for the synthesis of a particular LDH compound, the collected data were compared with the precipitation interval determined by titration with NaOH. However, the characterization of the material deposited on Pt surfaces did not confirm the deposition of a pure and homogeneous LDH phase during the synthesis. Instead, a sequential deposition linked to the precipitation pH of the elements involved was observed. The same behavior was observed during the synthesis of the RhMgAl LDH on FeCrAlY foam as a catalytic precursor. Several parameters were considered in order to optimize the synthesis. Electrochemical cells with different features, such as the counter-electrode dimensions or the contact between the foam and the potentiostat, were developed in order to obtain a better coating of the foam. The influence of the initial pH of the electrolyte solution, of the applied potential and of the composition of the electrolytic solution was investigated in order to achieve a better coating of the catalyst support. Catalytic tests for the CPO and SR reactions were performed after calcination of the coated foam, showing an improvement in performance along with the optimization of the precursor synthesis conditions.
Abstract:
The analysis of K*(892)0 resonance production in Pb–Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the LHC is presented. The analysis is motivated by the interest in measuring short-lived resonance production, which can provide insights into the properties of the medium produced in heavy-ion collisions during both its partonic (Quark-Gluon Plasma) and hadronic phases. This particular analysis exploits the particle identification of the ALICE Time-Of-Flight detector. The ALICE experiment is presented, with a focus on the performance of the Time-Of-Flight system. The aspects of calibration and data quality control are discussed in detail, illustrating the excellent and very stable performance of the system in different collision environments at the LHC. A full analysis of K*0 resonance production is presented: from the resonance reconstruction to the determination of the efficiency and the systematic uncertainty. The results show that the analysis strategy discussed is a valid tool to measure the K*0 up to intermediate momenta. Preliminary results on K*0 resonance production at the LHC are presented, confirming it to be a powerful tool for studying the physics of ultra-relativistic heavy-ion collisions.
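As background on the reconstruction step mentioned above, resonance analyses of this kind pair identified kaon and pion candidates and compute the pair's invariant mass, in which the K*(892)0 appears as a peak near 0.896 GeV/c² over a combinatorial background. A minimal sketch (inputs and names are illustrative):

```python
import numpy as np

M_K, M_PI = 0.493677, 0.139570  # charged kaon and pion masses, GeV/c^2

def invariant_mass(p_kaon, p_pion):
    """Invariant mass (GeV/c^2) of a K-pi pair from momentum 3-vectors (GeV/c)."""
    e_k = np.sqrt(M_K**2 + np.dot(p_kaon, p_kaon))
    e_pi = np.sqrt(M_PI**2 + np.dot(p_pion, p_pion))
    p_sum = np.asarray(p_kaon) + np.asarray(p_pion)
    return np.sqrt((e_k + e_pi)**2 - np.dot(p_sum, p_sum))
```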
Abstract:
In the first part of the thesis, we propose an exactly solvable one-dimensional model for fermions with long-range p-wave pairing decaying with distance as a power law. We studied the phase diagram by analyzing the critical lines, the decay of correlation functions and the scaling of the von Neumann entropy with the system size. We found two gapped regimes, in which correlation functions decay (i) exponentially at short range and algebraically at long range, or (ii) purely algebraically. In the latter, the entanglement entropy is found to diverge logarithmically. Most interestingly, along the critical lines, long-range pairing also breaks conformal symmetry. This can be detected via the dynamics of entanglement following a quench. In the second part of the thesis, we studied the time evolution of the entanglement entropy for the Ising model in a transverse field varying linearly in time at different velocities. We found different regimes: an adiabatic one (small velocities), in which the system evolves according to the instantaneous ground state; a sudden quench (large velocities), in which the system is essentially frozen in its initial state; and an intermediate one, in which the entropy starts growing linearly but then displays oscillations (also as a function of the velocity). Finally, we discussed the Kibble-Zurek mechanism for the transition between the paramagnetic and the ordered phase.
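To fix ideas, a Kitaev-type chain with power-law pairing of the kind described can be written schematically as follows (couplings, boundary conditions and normalization here are illustrative, not necessarily those adopted in the thesis):

```latex
H = -t \sum_{j} \left( c_j^{\dagger} c_{j+1} + \mathrm{h.c.} \right)
    - \mu \sum_{j} \left( c_j^{\dagger} c_j - \tfrac{1}{2} \right)
    + \frac{\Delta}{2} \sum_{j} \sum_{d \geq 1} \frac{1}{d^{\alpha}}
      \left( c_j c_{j+d} + c_{j+d}^{\dagger} c_j^{\dagger} \right)
```

In this notation the p-wave pairing between sites a distance d apart decays as d^(-α), the power law referred to above; the logarithmic divergence of the entanglement entropy mentioned for the second gapped regime takes the familiar form S(ℓ) ∝ log ℓ.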
Abstract:
In recent decades, Organic Thin Film Transistors (OTFTs) have attracted much interest due to their low cost, large-area processability and mechanical flexibility, which have led them to be considered the building blocks of future organic electronics. Experimentally, devices based on the same organic material deposited in different ways, i.e. by varying the deposition rate of the molecules, show different electrical performance. As predicted theoretically, this is due to the rate at which charge carriers are transported by hopping in organic thin films, a transport process that depends on the molecular arrangement. This strongly suggests a correlation between the morphology of the organic semiconductor and the performance of the OTFT, and hence motivated us to carry out an in-situ, real-time SPM study of organic semiconductor growth, an almost unprecedented experiment, with the aim of fully describing the morphological evolution of the ultra-thin film and finding the relevant morphological parameters affecting the OTFT electrical response. For the case of 6T on silicon oxide, we have shown that the growth mechanism is 2D+3D, with a roughening transition at the third layer followed by rapid roughening. Relevant morphological parameters have been extracted from the AFM images. We also developed an original mathematical model to estimate theoretically, and more accurately than before, the capacitance of an EFM tip in front of a metallic substrate. Finally, we obtained Ultra High Vacuum (UHV) AFM images of the 6T lying-molecules layer, both on silicon oxide and on top of 6T islands. Moreover, we performed ex-situ AFM imaging on a bilayer film composed of pentacene (a p-type semiconductor) and C60 (an n-type semiconductor).
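Among the morphological parameters routinely extracted from AFM topographies is the RMS roughness, whose evolution with coverage signals transitions such as the 2D+3D growth described above. A minimal sketch (a real analysis would also include plane subtraction and per-layer coverage statistics; names are illustrative):

```python
import numpy as np

def rms_roughness(height_map_nm):
    """RMS roughness (nm) of an AFM height map given as a 2D array."""
    h = height_map_nm - height_map_nm.mean()  # remove the mean height
    return np.sqrt(np.mean(h**2))
```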
Abstract:
Quasars and AGN play an important role in many aspects of modern cosmology. Of particular interest is the interplay between AGN activity and the formation and evolution of galaxies and structures. Studies of nearby galaxies revealed that most (and possibly all) galaxy nuclei contain a super-massive black hole (SMBH) and that between a third and a half of them show some evidence of activity (Kormendy and Richstone, 1995). The discovery of a tight relation between black-hole mass and the velocity dispersion of the host galaxy suggests that the growth of SMBHs and the evolution of their host galaxies are linked. In this context, studying the evolution of AGN through the luminosity function (LF) is fundamental to constrain theories of galaxy and SMBH formation and evolution. Recently, many theories have been developed to describe the physical processes possibly responsible for a common formation scenario for galaxies and their central black holes (Volonteri et al., 2003; Springel et al., 2005a; Vittorini et al., 2005; Hopkins et al., 2006a), and an increasing number of observations in different bands are focused on collecting larger and larger quasar samples. Many issues, however, remain not yet fully understood. In the context of the VVDS (VIMOS-VLT Deep Survey), we collected and studied an unbiased sample of spectroscopically selected faint type-1 AGN with a unique and straightforward selection function. Indeed, the VVDS is a large, purely magnitude-limited spectroscopic survey of faint objects, free of any morphological and/or color preselection. We studied the statistical properties of this sample and its evolution up to redshift z ∼ 4. Because of the contamination of the AGN light by their host galaxies at the faint magnitudes explored by our sample, we observed that a significant fraction of the AGN in our sample would be missed by the UV-excess and morphological criteria usually adopted for the pre-selection of optical QSO candidates. If not properly taken into account, this failure to select particular sub-classes of AGN could, in principle, affect some of the conclusions drawn from samples of AGN based on these selection criteria. The absence of any pre-selection in the VVDS gives us a very complete sample of AGN, including objects with unusual colors and continuum shapes. The VVDS AGN sample in fact shows redder colors than expected by comparison, for example, with the color track derived from the SDSS composite spectrum. In particular, the faintest objects have on average redder colors than the brightest ones. This can be attributed both to a large fraction of dust-reddened objects and to a significant contamination from the host galaxy. We tested these possibilities by examining the global spectral energy distribution of each object using, in addition to the U-, B-, V-, R- and I-band magnitudes, the UV GALEX and IR Spitzer bands, and fitting it with a combination of AGN and galaxy emission, also allowing for the possibility of extinction of the AGN flux. We found that for 44% of our objects the contamination from the host galaxy is not negligible, and this fraction decreases to 21% if we restrict the analysis to a bright subsample (M_1450 < −22.15). Our estimated integral surface density at I_AB < 24.0 is ∼500 AGN per square degree, which represents the highest surface density of a spectroscopically confirmed sample of optically selected AGN. We derived the B-band luminosity function for 1.0 < z < 3.6 using the 1/Vmax estimator.
Our data, more than one magnitude fainter than previous optical surveys, allow us to constrain the faint part of the luminosity function up to high redshift. A comparison of our data with the 2dF sample at low redshift (1 < z < 2.1) shows that the VVDS data cannot be well fitted with the pure luminosity evolution (PLE) models derived from previous optically selected samples. Qualitatively, this appears to be due to the fact that our data suggest an excess of faint objects at low redshift (1.0 < z < 1.5) with respect to these models. By combining our faint VVDS sample with the large sample of bright AGN extracted from the SDSS DR3 (Richards et al., 2006b) and testing a number of different evolutionary models, we find that the model which best represents the combined luminosity functions, over a wide range of redshift and luminosity, is a luminosity-dependent density evolution (LDDE) model, similar to those derived from the major X-ray surveys. Such a parameterization allows the redshift of the AGN density peak to change as a function of luminosity, thus fitting the excess of faint AGN that we find at 1.0 < z < 1.5. On the basis of this model we find, for the first time from the analysis of optically selected samples, that the peak of the AGN space density shifts significantly towards lower redshift for lower-luminosity objects. The position of this peak moves from z ∼ 2.0 for M_B < −26.0 to z ∼ 0.65 for −22 < M_B < −20. This result, already found in a number of X-ray selected samples of AGN, is consistent with a scenario of "AGN cosmic downsizing", in which the density of more luminous AGN, possibly associated with more massive black holes, peaks earlier in the history of the Universe (i.e. at higher redshift) than that of low-luminosity ones, which reach their maximum later (i.e. at lower redshift). This behavior has long been claimed to be present in elliptical galaxies and is not easy to reproduce in the hierarchical cosmogonic scenario, where more massive Dark Matter Halos (DMH) form on average later by merging of less massive halos.
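For reference, the 1/Vmax estimator used above (Schmidt, 1968) weights each object in a luminosity bin by the inverse of the maximum comoving volume within which it would still satisfy the survey limits. A minimal sketch, leaving the survey-specific Vmax computation as an input (names are illustrative):

```python
import numpy as np

def lf_one_over_vmax(abs_mag, v_max_mpc3, bin_edges):
    """Binned luminosity function Phi(M) via the 1/Vmax estimator.

    abs_mag:     absolute magnitudes of the sample objects
    v_max_mpc3:  maximum comoving volume in which each object would
                 still pass the survey flux/redshift limits
    Returns Phi in objects per Mpc^3 per magnitude.
    """
    phi, _ = np.histogram(abs_mag, bins=bin_edges,
                          weights=1.0 / np.asarray(v_max_mpc3))
    return phi / np.diff(bin_edges)
```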
Abstract:
This work is structured as follows. In Section 1 we discuss the clinical problem of heart failure. In particular, we present the phenomenon known as ventricular mechanical dyssynchrony: its impact on cardiac function, the therapy for its treatment and the methods for its quantification. Specifically, we describe the conductance catheter and its use for the measurement of dyssynchrony. At the end of Section 1, we propose a new set of indexes to quantify dyssynchrony, which are studied and validated thereafter. In Section 2 we describe the studies carried out in this work: we report the experimental protocols, and we present and discuss the results obtained. Finally, we report the overall conclusions drawn from this work and try to envisage future work and possible clinical applications of our results. Ancillary studies carried out during this work, mainly to investigate several aspects of cardiac resynchronization therapy (CRT), are mentioned in the Appendix. -------- Ventricular mechanical dyssynchrony plays a regulating role already in normal physiology but is especially important in pathological conditions, such as hypertrophy, ischemia, infarction, or heart failure (Chapters 1, 2). Several prospective randomized controlled trials supported the clinical efficacy and safety of cardiac resynchronization therapy (CRT) in patients with moderate or severe heart failure and ventricular dyssynchrony. CRT resynchronizes ventricular contraction by simultaneous pacing of both the left and right ventricle (biventricular pacing) (Chapter 1). The conductance catheter method has been used extensively to assess global systolic and diastolic ventricular function and, more recently, the ability of this instrument to pick up multiple segmental volume signals has been used to quantify mechanical ventricular dyssynchrony. Specifically, novel indexes based on volume signals acquired with the conductance catheter were introduced to quantify dyssynchrony (Chapters 3, 4). The present work aimed to describe the characteristics of the conductance-volume signals, to investigate the performance of the indexes of ventricular dyssynchrony described in the literature, and to introduce and validate improved dyssynchrony indexes. Moreover, using the conductance catheter method and the new indexes, the clinical problem of ventricular pacing site optimization was addressed and the measurement protocol to adopt for hemodynamic tests on cardiac pacing was investigated. In accordance with the aims of the work, in addition to the classical time-domain parameters, a new set of indexes was extracted, based on a coherent averaging procedure and on spectral and cross-spectral analysis (Chapter 4). Our analyses were carried out on patients with indications for electrophysiologic study or device implantation (Chapter 5). For the first time, besides patients with heart failure, indexes of mechanical dyssynchrony based on the conductance catheter were extracted and studied in a population of patients with preserved ventricular function, providing information on the normal range of such values. By performing a frequency-domain analysis and by applying an optimized coherent averaging procedure (Chapter 6.a.), we were able to describe some characteristics of the conductance-volume signals (Chapter 6.b.). We unmasked the presence of considerable beat-to-beat variations in dyssynchrony, which seemed more frequent in patients with ventricular dysfunction and appeared to play a role in discriminating between patients.
These non-recurrent mechanical ventricular non-uniformities are probably the expression of the substantial beat-to-beat hemodynamic variations often associated with heart failure and due to cardiopulmonary interaction and conduction disturbances. We investigated how the coherent averaging procedure may affect or refine the conductance-based indexes; in addition, we proposed and tested a new set of indexes which quantify the non-periodic components of the volume signals. Using the new set of indexes, we studied the acute effects of CRT and of right ventricular pacing in patients with heart failure and patients with preserved ventricular function. In the overall population we observed a correlation between the hemodynamic changes induced by the pacing and the indexes of dyssynchrony, which may have practical implications for hemodynamic-guided device implantation. The optimal ventricular pacing site for patients with conventional indications for pacing remains controversial. The majority of them do not meet current clinical indications for CRT pacing. Thus, we carried out an analysis to compare the impact of several ventricular pacing sites on global and regional ventricular function and dyssynchrony (Chapter 6.c.). We observed that right ventricular pacing worsens cardiac function in patients with and without ventricular dysfunction unless the pacing site is optimized. CRT preserves left ventricular function in patients with normal ejection fraction and improves function in patients with poor ejection fraction despite no clinical indication for CRT. Moreover, the analysis of the results obtained using the new indexes of regional dyssynchrony suggests that the pacing site may influence overall global ventricular function depending on its relative effects on regional function and synchrony. Another clinical problem investigated in this work is the optimal right ventricular lead location for CRT (Chapter 6.d.). Similarly to the previous analysis, using novel parameters describing local synchrony and efficiency, we tested the hypothesis, and demonstrated, that biventricular pacing with alternative right ventricular pacing sites produces an acute improvement of ventricular systolic function and improves mechanical synchrony when compared to standard right ventricular pacing. Although no specific right ventricular location was shown to be superior during CRT, the right ventricular pacing site that produced the optimal acute hemodynamic response varied between patients. Acute hemodynamic effects of cardiac pacing are conventionally evaluated after stabilization episodes. The duration of the stabilization periods applied in most cardiac pacing studies varies considerably. With an ad hoc protocol (Chapter 6.e.) and indexes of mechanical dyssynchrony derived from the conductance catheter, we demonstrated that the use of stabilization periods during the evaluation of cardiac pacing may mask early changes in systolic and diastolic intra-ventricular dyssynchrony. In fact, at the onset of ventricular pacing, the main dyssynchrony and ventricular performance changes occur within a 10 s time span, initiated by the changes in ventricular mechanical dyssynchrony induced by aberrant conduction and followed by a partial or even complete recovery. It has already been demonstrated in normal animals that ventricular mechanical dyssynchrony may act as a physiologic modulator of cardiac performance together with heart rate, contractile state, preload and afterload.
The present observation, which shows the compensatory mechanism of mechanical dyssynchrony, suggests that ventricular dyssynchrony may be regarded as an intrinsic cardiac property, with baseline dyssynchrony at an increased level in heart failure patients. To make available an independent system for cardiac output estimation, in order to confirm the results obtained with the conductance volume method, we developed and validated a novel technique to apply the Modelflow method (a method that derives an aortic flow waveform from arterial pressure by simulation of a non-linear three-element aortic input impedance model; Wesseling et al., 1993) to the left ventricular pressure signal, instead of the arterial pressure used in the classical approach (Chapter 7). The results confirmed that in patients without valve abnormalities undergoing conductance catheter evaluations, continuous monitoring of cardiac output using the intra-ventricular pressure signal is reliable. Thus, cardiac output can be monitored quantitatively and continuously with a simple and low-cost method. During this work, additional studies were carried out to investigate several areas of uncertainty in CRT. The results of these studies are briefly presented in the Appendix: the long-term survival of patients treated with CRT in clinical practice, the effects of CRT in patients with mild symptoms of heart failure and in very old patients, limited thoracotomy as a second-choice alternative to transvenous implant for CRT delivery, the evolution and prognostic significance of the diastolic filling pattern in CRT, and the selection of candidates for CRT with echocardiographic criteria and the prediction of response to the therapy.
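The coherent averaging referred to above aligns beats on a common fiducial point and averages them sample by sample, so that the periodic component of the volume signal is retained while the non-recurrent beat-to-beat variations appear in the residuals. A minimal sketch, assuming beats have already been detected and segmented (names are illustrative):

```python
import numpy as np

def coherent_average(signal, beat_onsets, beat_len):
    """Coherent average of beats aligned on their fiducial onsets.

    signal:      1D conductance-volume (or pressure) signal
    beat_onsets: sample index of each beat's fiducial point
    beat_len:    number of samples retained per beat
    """
    beats = np.array([signal[i:i + beat_len] for i in beat_onsets
                      if i + beat_len <= len(signal)])
    mean_beat = beats.mean(axis=0)   # periodic component
    residuals = beats - mean_beat    # non-recurrent beat-to-beat component
    return mean_beat, residuals
```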
Abstract:
The need for high bandwidth, due to the explosion of new multimedia-oriented IP-based services as well as increasing broadband access requirements, is leading to the need for flexible and highly reconfigurable optical networks. While transmission bandwidth does not represent a limit, thanks to the huge bandwidth provided by optical fibers and Dense Wavelength Division Multiplexing (DWDM) technology, the electronic switching nodes in the core of the network represent the bottleneck in terms of speed and capacity for the overall network. For this reason DWDM technology must be exploited not only for data transport but also for switching operations. In this Ph.D. thesis, solutions for photonic packet switches, a flexible alternative to circuit-switched optical networks, are proposed. In particular, solutions based on devices and components that are expected to mature in the near future are proposed, with the aim of limiting the use of complex components. The work presented here is the result of part of the research activities performed by the Networks Research Group at the Department of Electronics, Computer Science and Systems (DEIS) of the University of Bologna, Italy. In particular, the work on optical packet switching has been carried out within three relevant research projects: the e-Photon/ONe and e-Photon/ONe+ projects, funded by the European Union in the Sixth Framework Programme, and the national project OSATE, funded by the Italian Ministry of Education, University and Scientific Research. The rest of the work is organized as follows. Chapter 1 gives a brief introduction to the network context and to contention resolution in photonic packet switches. Chapter 2 presents different strategies for contention resolution in the wavelength domain. Chapter 3 illustrates a possible implementation of one of the schemes proposed in Chapter 2. Chapter 4 then presents multi-fiber switches, which jointly employ the wavelength and space domains to solve contention. Chapter 5 presents buffered switches, which solve contention in the time domain in addition to the wavelength domain. Finally, Chapter 6 presents a cost model to compare different switch architectures in terms of cost.
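To illustrate contention resolution in the wavelength domain (the subject of Chapter 2), consider packets arriving in the same time slot for the same output fiber: with full wavelength conversion, each packet can be assigned any free wavelength, and packets are lost only when all wavelengths are busy. A toy sketch of this principle (purely illustrative, not one of the architectures proposed in the thesis):

```python
def schedule_wavelengths(packets, n_wavelengths):
    """Toy assignment of packets contending for one output fiber.

    packets: packet identifiers arriving in the same time slot.
    Returns (assignments, dropped), assuming full wavelength conversion.
    """
    assignments, dropped = {}, []
    free = list(range(n_wavelengths))
    for pkt in packets:
        if free:
            assignments[pkt] = free.pop(0)  # convert to any free wavelength
        else:
            dropped.append(pkt)             # contention loss
    return assignments, dropped
```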
Abstract:
My project explores and compares different forms of gender performance in contemporary art and visual culture from a perspective centered on photography. Thanks to its attesting power, this medium can work as a ready-made. In fact, during the 20th century it played a key role in the cultural emancipation of the body, which (to use Michel Foucault's expression) has now become «the zero point of the world». Through performance the body proves to be a living material of expression and communication, while photography ensures the recording of any ephemeral event that happens in time and space. My questioning approach considers the gender-constructed imagery from the 1990s to the present in order to investigate how photography's strong aura of realism promotes and allows fantasies of transformation. The contemporary fascination with gender (especially in art and fashion) represents a crucial issue in the global context of postmodernity and is manifested in a variety of visual media, from photography to video and film. Moreover, the internet, along with its digital transmission of images, has deeply affected our world (from culture to everyday life), leading to a postmodern preference for performativity over the more traditional and linear forms of narrativity. As a consequence, individual borders get redefined by the skin itself, which (dissected through instant vision) turns into a ductile material of mutation and hybridization in the service of identity. My critical assumptions are drawn from the most relevant changes that occurred in philosophy during the last two decades as a result of the contributions of Jacques Lacan, Michel Foucault, Jacques Derrida and Gilles Deleuze, who developed a cross-disciplinary and comparative approach to interpret the crisis of modernity. They have profoundly influenced feminist studies, so that the category of gender has been reassessed in contrast with sex (as a biological connotation) and in relation to history, culture and society. The ideal starting point of my research is the year 1990. I chose it as the approximate historical moment when the intersection of race, class and gender was placed at the forefront of international artistic production concerned with identity, diversity and globalization. Such issues had been explored throughout the 1970s, but it was only from the mid-1980s onward that they began to be articulated more consistently. Published in 1990, the book "Gender Trouble: Feminism and the Subversion of Identity" by Judith Butler marked an important breakthrough by linking gender to performance as well as investigating the intricate connections between theory and practice, embodiment and representation. It inspired subsequent research in a variety of disciplines, art history included. In the same year Teresa de Lauretis launched the definition of queer theory to challenge the academic perspective in gay and lesbian studies. In the meantime, the rise of Third Wave Feminism in the US introduced a racially and sexually inclusive vision of the global situation in order to reflect on subjectivity, new technologies and popular culture in connection with gender representation. These conceptual tools have enabled prolific readings of contemporary cultural production, whether fine arts or mass media. After discussing the appropriate framework of my project and taking into account the postmodern globalization of the visual, I turned to photography to map gender representation both in art and in fashion.
I have therefore been creating an archive of images around specific topics. I decided to include fashion photography because in the 1990s this genre moved away from the paradigm of an idealized and classical beauty toward a new vernacular allied with lifestyles, art practices, and pop and youth culture; as one might expect, the dominant narrative modes in fashion photography are now mainly influenced by cinema and the snapshot. These strategies originate story lines and interrupted narratives, using models' performance to convey a particular imagery in which identity issues emerge as an essential part of the fashion spectacle. Focusing on the intersections of gender identities with socially and culturally produced identities, my approach intends to underline how the fashion world has turned to current trends in art photography and in some cases to the artists themselves. The growing fluidity of the categories that distinguish art from fashion photography represents a particularly fruitful moment of visual exchange. Though varying over time, the dialogue between these two fields has always been vital; nowadays it can be studied as a result of the close relationship between the contemporary art world and consumer culture. Due to the saturation of postmodern imagery, the feedback between art and fashion has become much more immediate and thus increasingly significant for anyone who wants to investigate the construction of gender identity through performance. In addition, many magazines founded in the 1990s bridged the worlds of art and fashion because some of their designers and even editors were art-school graduates who encouraged innovation. The inclusion of art within such magazines aimed at validating them as a form of art in themselves, supporting a dynamic intersection of music, fashion, design and youth culture: an intersection that also contributed to creating and spreading different gender stereotypes. This general interest in fashion produced many exhibitions of and about fashion itself at major international venues such as the Victoria and Albert Museum in London, and the Metropolitan Museum of Art and the Solomon R. Guggenheim Museum in New York. Since then, this celebrated success of fashion has been regarded as a typical element of postmodern culture. Accordingly, I have also based my analysis on some important exhibitions dealing with gender performance, such as "Féminin-Masculin" at the Centre Pompidou in Paris (1995), "Rrose is a Rrose is a Rrose. Gender Performance in Photography" at the Solomon R. Guggenheim Museum in New York (1997), "Global Feminisms" at the Brooklyn Museum (2007), and "Female Trouble" at the Pinakothek der Moderne in München, together with the workshops dedicated to "Performance: Gender and Identity" held in June 2005 at the Tate Modern in London. Since 2003 in Italy we have had Gender Bender, an international festival held annually in Bologna, to explore the gender imagery stemming from contemporary culture. Over a few days, this festival offers a series of events ranging from visual arts, performance, cinema and literature to conferences and music. Being aware that no method of research is either race or gender neutral, I have traced these critical paths to question gender identity in a multicultural perspective, taking account of the political implications too. In fact, if visibility may be equated with exposure, we can also read these images as points of intersection of visibility with social power.
Since gender assignations rely so heavily on the visual, the postmodern dismantling of gender certainty through performance has wide-ranging effects that need to be analyzed. In some sense this practice can even contest the dominance of the visual within postmodernism. My visual map of contemporary art and fashion photography includes artists such as Nan Goldin, Cindy Sherman, Hellen van Meene, Rineke Dijkstra, Ed Templeton, Ryan McGinley, Anne Daems, Miwa Yanagi, Tracey Moffat, Catherine Opie, Tomoko Sawada, Vanessa Beecroft, Yasumasa Morimura and Collier Schorr, among others.
Abstract:
The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of the pavement was compared to that of the same pavement structure with different kinds of asphalt concrete used as the surface layer. Three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed in comparison to a conventional asphalt concrete. The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced. The low-environmental-impact materials, Warm Mix Asphalt and Rubberized Asphalt Concrete, are described in detail. In addition, the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through an explanation of the different design approaches it provides, with a specific focus on the I-R procedure. In Chapter IV, the experimental program is presented, with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
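The incremental-recursive logic summarized above can be pictured as a loop in which the damage accumulated in one time increment degrades the layer moduli fed to the next. A schematic sketch with a toy damage law (the actual CalME models for fatigue and permanent shear strain are far more detailed; everything here is illustrative):

```python
def incremental_recursive(e0_mpa, loads, k=1e-4):
    """Schematic I-R simulation loop with a toy damage model.

    e0_mpa: undamaged surface-layer modulus
    loads:  traffic load repetitions applied in each time increment
    """
    damage, history = 0.0, []
    for n in loads:
        e_eff = e0_mpa * (1.0 - damage)  # modulus degraded by damage so far
        response = n / e_eff             # toy structural response
        damage = min(1.0, damage + k * response)
        history.append((e_eff, damage))  # this increment's output feeds the next
    return history
```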
Abstract:
Proper hazard identification has become progressively more difficult to achieve, as witnessed by several major accidents that took place in Europe, such as the ammonium nitrate explosion at Toulouse (2001) and the vapour cloud explosion at Buncefield (2005), whose accident scenarios had not been considered in the site safety cases. Furthermore, the rapid renewal of industrial technology has brought about the need to upgrade hazard identification methodologies. Accident scenarios of emerging technologies, which are not yet properly identified, may remain unidentified until they take place for the first time. The consideration of atypical scenarios deviating from normal expectations of unwanted events or from worst-case reference scenarios is thus extremely challenging. A specific method named Dynamic Procedure for Atypical Scenarios Identification (DyPASI) was developed as a complementary tool to bow-tie identification techniques. The main aim of the methodology is to provide an easier but comprehensive hazard identification of the industrial process analysed, by systematizing information from early signals of risk related to past events, near misses and inherent studies. DyPASI was validated on two examples of new and emerging technologies: Liquefied Natural Gas regasification and Carbon Capture and Storage. The study broadened the knowledge of the related emerging risks and, at the same time, demonstrated that DyPASI is a valuable tool for obtaining a complete and updated overview of potential hazards. Moreover, in order to tackle the underlying accident causes of atypical events, three methods for the development of early warning indicators were assessed: the Resilience-based Early Warning Indicator (REWI) method, the Dual Assurance method and the Emerging Risk Key Performance Indicator method. REWI was found to be the most complementary and effective of the three, demonstrating that its synergy with DyPASI would be an adequate strategy to improve hazard identification methodologies towards the capture of atypical accident scenarios.
Abstract:
The evolution of embedded electronics applications forces electronic systems designers to match ever-increasing requirements. This evolution pushes up the computational power required of digital signal processing systems, as well as the energy needed to accomplish the computations, due to the increasing mobility of such applications. Current approaches to matching these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility, which affects non-recurring engineering costs, time to market, and market volumes too. The state of the art mainly proposes two solutions to overcome these issues, with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both of these solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too large for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging the two computational paradigms, in order to address all the constraints introduced above. This thesis focuses on the exploration of the design and application spectrum of reconfigurable computing, exploited as application-specific accelerators for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators. In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.
Abstract:
This thesis deals with the development of the upcoming Aeronautical Mobile Airport Communications System (AeroMACS). We analyzed the performance of AeroMACS and investigated potential solutions for enhancing it. Since the most critical results correspond to the channel scenario with the least diversity, we tackled this problem by investigating potential solutions for increasing the diversity of the system and thereby improving its performance. We considered different forms of diversity, such as space diversity and time diversity. More specifically, space (antenna and cooperative) diversity and time diversity are analyzed as countermeasures for the harsh fading conditions that are typical of airport environments. Among the analyzed techniques, two novel concepts are introduced, namely unequal diversity coding and flexible packet-level codes. The proposed techniques have been analyzed on a novel airport channel model, derived from a measurement campaign at the airport of Munich (Germany). The introduced techniques largely improve the performance of the conventional AeroMACS link, thus representing appealing solutions for the long-term evolution of the system.
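Packet-level coding of the kind mentioned above adds redundancy packets to a block of data packets so that packets erased by the channel can be reconstructed at the receiver. As a toy illustration of the principle, a single XOR parity packet recovers one erasure per block (this is deliberately simplistic and is not the flexible packet-level code proposed in the thesis):

```python
def parity_packet(packets):
    """XOR parity over equal-length data packets (bytes-like)."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

# Any single lost packet equals the XOR of the parity with the
# surviving packets, so one erasure per block is recoverable:
data = [b"pkt-A", b"pkt-B", b"pkt-C"]
p = parity_packet(data)
assert parity_packet([p, data[0], data[2]]) == data[1]
```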