10 results for periodicity fluctuation
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This research proposes an analysis of textual and cultural forms in the American horror film (1968-1998) by defining its so-called postmodern characteristics. The term "postmodern" will not denote a period in the history of cinema, but a series of forms and strategies recognizable in many American films. From a bipolar, re-mediation and cognitive point of view, the postmodern phenomenon is considered a formal and epistemological re-configuration of the "modern" cultural system. The first section of the work examines theoretical problems around the "postmodern phenomenon" by defining its cultural and formal constants in different areas (epistemology, economy, mass media): convergence, fragmentation, manipulation and immersion constitute the former, while "excess" is the morphology of the change, realizing the "fluctuation" of the previously consolidated system. The second section classifies the textual and cultural forms of American postmodern film, generally non-horror. The "classic narrative" structure, a coherent and consequent chain of causal cues toward a conclusion, is scattered by the postmodern constant of "fragmentation". New textual models arise, fragmenting the narrative ones into aggregations of data without causal-temporal logic. Considering the processes of "transcoding" and "remediation" between media, and the principle of "convergence" in the phenomenon, the essay defines these structures in postmodern film as "database forms" and "navigable space forms". The third section applies this classification to the American horror film (1968-1998). The formal constant of "excess" in the horror genre works on the paradigm of "vision": if postmodern film shows a crisis of "truth" in vision, in horror movies the excess of vision becomes "hyper-vision", that is, the "multiplication" of visions of death/blood/torture, and "intra-vision", which shows the impossibility of distinguishing the "real" vision from the virtual/imaginary one. In this perspective, the textual and cultural forms and strategies of postmodern horror film are predominantly: the "database-accumulation" forms, where the events result from a very simple "remote cause" serving as a pretext (as in Night of the Living Dead); and the "database-catalogue" forms, where the events follow one another displaying a "central" character or theme. In the catalogue case, the syntagms are connected either by "consecutive" elements, building stories linked by the actions of a single character (usually the killer), or by non-consecutive episodes around a general theme: examples of the first kind are built on the model of The Wizard of Gore; the second, on films such as Mario Bava's I tre volti della paura. The "navigable space" forms are defined as: hyperlink a, where a single universe fluctuates between reality and dream, as in Rosemary's Baby; hyperlink b, where two non-hierarchical universes converge, one real and the other fictional, as in the Nightmare series; hyperlink c, where several worlds are separate but become contiguous in the last sequence, as in Targets; and the last form, navigable-loop, which includes a textual line that suddenly stops and starts again, reflecting the pattern of a "loop" (as in Lost Highway). The essay then analyses in detail the organization of "visual space" in the postmodern horror film by tracing representative patterns. It concludes by examining the "convergence" of the technologies and cognitive structures of cinema and new media.
Abstract:
This work focuses on magnetohydrodynamic (MHD) mixed convection flow of electrically conducting fluids enclosed in simple 1D and 2D geometries in the steady periodic regime. In particular, Chapter one gives a short overview of the history of MHD, with reference to papers available in the literature, and lists some of its most common technological applications, whereas Chapter two deals with the analytical formulation of the MHD problem, starting from the fluid dynamic and energy equations and adding the effects of an externally imposed magnetic field through Ohm's law and the definition of the Lorentz force. Moreover, a description of the various kinds of boundary conditions is given, with particular emphasis on their practical realization. Chapters three, four and five describe the solution procedure for mixed convective flows with MHD effects. In all cases a uniform magnetic field is assumed to be present throughout the fluid domain, transverse with respect to the velocity field. The steady periodic regime is analyzed, where the periodicity is induced by wall temperature boundary conditions which vary in time with a sinusoidal law. The local balance equations of momentum, energy and charge are solved analytically and numerically, using as parameters either geometrical ratios or material properties. In particular, Chapter three illustrates the solution method for mixed convective flow in a 1D vertical parallel channel with MHD effects. The influence of a transverse magnetic field is studied in the steady periodic regime induced by an oscillating wall temperature. Analytical and numerical solutions are provided in terms of velocity and temperature profiles, wall friction factors and average heat fluxes for several values of the governing parameters. In Chapter four the 2D problem of mixed convective flow in a vertical round pipe with MHD effects is analyzed. Again, a transverse magnetic field influences the steady periodic regime induced by the oscillating temperature of the pipe wall. A numerical solution obtained with a finite element approach is presented, and velocity and temperature profiles, wall friction factors and average heat fluxes are derived for several values of the Hartmann and Prandtl numbers. In Chapter five the 2D problem of mixed convective flow in a vertical rectangular duct with MHD effects is discussed. As in the previous chapters, a transverse magnetic field influences the steady periodic regime induced by the oscillating temperature of the four walls. The numerical solution obtained with a finite element approach is presented, and a collection of results, including velocity and temperature profiles, wall friction factors and average heat fluxes, is provided for several values of, among other parameters, the duct aspect ratio. A comparison with analytical solutions is also provided as proof of the validity of the numerical method. Chapter six concludes with some reflections on the MHD effects on mixed convection flow, in agreement with the experience and results gathered in the analyses of the previous chapters. The appendices report special auxiliary functions and FORTRAN program listings that support the formulations used in the solution chapters.
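For orientation, under the Boussinesq approximation and with a transverse imposed field B = B0 ŷ, the balance equations referred to above take a form along the following lines (a generic sketch in illustrative notation, not necessarily the thesis's exact formulation):

\[
\mathbf{J} = \sigma\left(\mathbf{E} + \mathbf{u}\times\mathbf{B}\right), \qquad
\mathbf{F}_L = \mathbf{J}\times\mathbf{B},
\]
\[
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
= -\nabla p + \mu\nabla^2\mathbf{u} + \rho g\beta\,(T - T_0)\,\hat{\mathbf{x}} + \mathbf{J}\times\mathbf{B},
\]
\[
\rho c_p\left(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\right) = k\,\nabla^2 T,
\qquad
T_w(t) = \overline{T}_w + \Delta T\,\sin(\omega t),
\]

with the Hartmann number $\mathrm{Ha} = B_0 L \sqrt{\sigma/\mu}$ measuring the strength of the magnetic damping relative to viscous forces.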
Abstract:
STUDY OBJECTIVE: Cyclic Alternating Pattern (CAP) is a fluctuation of the arousal level during NREM sleep and consists of the alternation between two phases: phase A (divided into three subtypes A1, A2, and A3) and phase B. A1 is thought to be generated by the frontal cortex and is characterized by the presence of K complexes or delta bursts; additionally, CAP A1 seems to play a role in the involvement of sleep slow wave activity in cognitive processing. Our hypothesis was that a high overall CAP rate would have a negative influence on cognitive performance due to excessive fluctuation of the arousal level during NREM sleep. However, we also predicted that CAP A1 would be positively correlated with cognitive functions, especially those related to frontal lobe functioning. For this reason, the objective of our study was to correlate objective sleep parameters with cognitive behavioral measures in normal healthy adults. METHODS: 8 subjects (4 males, 4 females; mean age 27.75 years, range 23-34) were recruited for this study. Two nocturnal polysomnographies (nights 2 and 3 = N2 and N3) were carried out after a night of adaptation. A series of neuropsychological tests was performed by the subjects in the morning and afternoon of the second day (D2am; D2pm) and in the morning of the third day (D3am). Raw scores from the neuropsychological tests were used as dependent variables in the statistical analysis of the results. RESULTS: We computed a series of partial correlations between sleep microstructure parameters (CAP, A1, A2 and A3 rates) and a number of indices of cognitive functioning. CAP rate was positively correlated with visuospatial working memory (Corsi block test), the Trail Making Test Part A (planning and motor sequencing) and the retention of words from the Hopkins Verbal Learning Test (HVLT). Conversely, CAP rate was negatively correlated with visuospatial fluency (Ruff Figural Fluency Test). CAP A1 rate was correlated with many of the tests of neuropsychological functioning, such as verbal fluency (as measured by the COWAT), working memory (as measured by the Digit Span Backward test), and both delayed recall and retention of the words from the HVLT. The same parameters were found to be negatively correlated with the CAP A2 subtype. CAP A3 rate was negatively correlated with the Trail Making Test Parts A and B. DISCUSSION: To our knowledge this is the first study indicating a role of CAP A1 and A2 in the behavioral cognitive performance of healthy adults. The results suggest that a high rate of CAP A1 might be related to an improvement, whereas a high rate of CAP A2 to a decline, of cognitive functions. Further studies need to be done to better determine the role of the overall CAP rate and of CAP A3 in cognitive behavioral performance.
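The partial correlations mentioned above can be computed by correlating the residuals of the two variables of interest after linearly regressing out the controlled covariate; a minimal sketch under that standard definition (the variable names and synthetic data below are illustrative, not the study's records):

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Pearson correlation between x and y after linearly
    regressing the covariate z out of both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return stats.pearsonr(rx, ry)

# hypothetical example: CAP A1 rate vs. HVLT retention, controlling for age
rng = np.random.default_rng(0)
age = rng.uniform(23, 34, 8)
cap_a1 = rng.normal(60, 10, 8)   # hypothetical CAP A1 rates (%)
hvlt = rng.normal(10, 2, 8)      # hypothetical HVLT retention scores
r, p = partial_corr(cap_a1, hvlt, age)
print(f"partial r = {r:.2f}, p = {p:.2f}")
```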
Abstract:
The aim of my dissertation is to provide new knowledge and applications of microfluidics in a variety of problems, from materials science and devices to biomedicine, where control of the fluid dynamics and of the local concentration of the solutions containing the relevant molecules (materials, precursors, or biomolecules) is crucial. The control of interfacial phenomena occurring in solutions at different length scales is compelling in nanotechnology for devising new sensors, molecular electronics devices, and memories. Microfluidic devices were fabricated and integrated with organic electronics devices. The transduction involves the species in the solution which infills the transistor channel and is confined by the microfluidic device. This device measures what happens on the surface, a few nanometers from the semiconductor channel. Soft lithography was adopted to fabricate platinum electrodes, starting from a platinum carbonyl precursor. I proposed a simple method to assemble these nanostructures in periodic arrays of microstripes and form conductive electrodes with a characteristic dimension of 600 nm. The conductivity of these sub-microwires is compared with the values reported in the literature and with bulk platinum. The process is suitable for fabricating thin conductive patterns for electronic devices or electrochemical cells, where the periodicity of the conductive pattern is comparable with the diffusion length of the molecules in solution. The ordering induced among artificial nanostructures is of particular interest in science. I show that large building blocks, like carbon nanotubes or core-shell nanoparticles, can be ordered and self-organised on a surface in patterns due to capillary forces. The effective probability of inducing order with microfluidic flow is modeled with finite element calculations on the real geometry of the microcapillaries in the soft-lithographic process. The oligomerization of the Aβ40 peptide in a microconfined environment represents a new investigation of the extensively studied peptide aggregation. The added value of the approach I devised is the precise control of the local concentration of peptides, together with the possibility to mimic cellular crowding. Four populations of oligomers were distinguished, with diameters ranging from 15 to 200 nm. These aggregates could not be addressed separately in fluorescence. The statistical analysis of the atomic force microscopy images, together with a model of growth, reveals new insights into the kinetics of amyloidogenesis and allows me to identify the minimum stable nucleus size. This is an important result owing to its implications for the understanding, early diagnosis and therapy of Alzheimer's disease.
Abstract:
Combustion control is one of the key factors for obtaining better performance and lower pollutant emissions from diesel, spark ignition and HCCI engines. An algorithm that allows estimating, for example, the mean indicated torque for each cylinder could easily be used in control strategies to carry out cylinder trade-off, control cycle-to-cycle variation, or detect misfires. A tool that allows evaluating the angle of 50% Mass Fraction Burned (MFB50), the net Cumulative Heat Release (CHRNET), or the peak value of the Rate of Heat Release (ROHR) could be used to optimize spark advance or detect knock in gasoline engines, and to optimize the injection pattern in diesel engines. Modern management systems are based on the control of the mean indicated torque produced by the engine: they need a real or virtual sensor in order to compare the measured value with the target one. Many studies have been performed in order to obtain a torque estimation that is accurate and remains reliable over time. The aim of this PhD activity was to develop two different algorithms. The first is based on the measurement of instantaneous engine speed fluctuations; the speed signal is picked up directly from the sensor facing the toothed wheel mounted on the engine for other control purposes, and the amplitudes of the speed fluctuations depend on the combustion and on the amount of torque delivered by each cylinder. The second algorithm processes in-cylinder pressure signals in the angular domain; in this case a crankshaft encoder is not necessary, because the angular reference can be obtained using a standard sensor wheel. The results obtained with the two methodologies are compared in order to evaluate which one is more suitable for on-board applications, depending on the accuracy required.
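For context, CHRNET, the ROHR and MFB50 all follow from the in-cylinder pressure trace via the standard single-zone heat-release relation dQ_net/dθ = γ/(γ-1)·p·dV/dθ + 1/(γ-1)·V·dp/dθ; the sketch below is a generic illustration of that relation, not the thesis's algorithm (the constant γ and the signal names are assumptions):

```python
import numpy as np

def heat_release(theta, p, V, gamma=1.35):
    """Single-zone net rate of heat release (ROHR) and its cumulative
    integral (CHRNET) from pressure p(theta) and cylinder volume V(theta),
    both sampled on the crank-angle grid theta [rad]."""
    dV = np.gradient(V, theta)
    dp = np.gradient(p, theta)
    rohr = gamma / (gamma - 1.0) * p * dV + 1.0 / (gamma - 1.0) * V * dp
    # trapezoidal integration of the ROHR over crank angle
    chr_net = np.concatenate(
        ([0.0], np.cumsum(0.5 * (rohr[1:] + rohr[:-1]) * np.diff(theta))))
    return rohr, chr_net

def mfb50(theta, chr_net):
    """Crank angle at which 50% of the net heat release is reached
    (assumes chr_net is monotonically increasing over the window)."""
    return np.interp(0.5 * chr_net[-1], chr_net, theta)
```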
Abstract:
Flicker is a power quality phenomenon consisting of cyclic instability of light intensity resulting from supply voltage fluctuation, which, in turn, can be caused by disturbances introduced during power generation, transmission or distribution. The standard EN 61000-4-15, recently adopted by the IEEE as IEEE Standard 1453, relies on the analysis of the supply voltage, which is processed according to a suitable model of the lamp, human eye and brain chain. As for the lamp, an incandescent 60 W, 230 V, 50 Hz source is assumed. As far as the human eye and brain model is concerned, it is represented by the so-called flicker curve. This curve was determined several years ago by statistically analyzing the results of tests in which people were subjected to flicker with different combinations of magnitude and frequency. The limitations of this standard approach to flicker evaluation are essentially two. First, the provided index of annoyance, Pst, can be related to an actual tiredness of the human visual system only if such an incandescent lamp is used. Moreover, the implemented response to flicker is "subjective", given that it relies on people's answers about their perceptions. In the last 15 years, many scientific contributions have tackled these issues by investigating the possibility of developing a novel model of the eye-brain response to flicker and overcoming the strict dependence of the standard on the kind of light source. In this light, this thesis presents a contribution toward a new flickermeter. An improved visual system model using a physiological parameter, namely the mean value of the pupil diameter, is presented, thus allowing a more "objective" representation of the response to flicker. The system used both to generate flicker and to measure the pupil diameter is illustrated, along with the results of several experiments performed on volunteers. The intent has been to demonstrate that the measurement of this geometrical parameter can give reliable information about the response of the human visual system to light flicker.
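The flicker tests mentioned above combine a fixed line voltage with a low-frequency sinusoidal amplitude modulation; a minimal sketch of such a test waveform (the modulation depth and frequency here are illustrative assumptions, not the thesis's test protocol):

```python
import numpy as np

def flicker_test_voltage(t, dv_over_v=0.01, f_mod=8.8, v_rms=230.0, f_line=50.0):
    """230 V / 50 Hz line voltage with a sinusoidal amplitude modulation
    of relative depth dv_over_v; 8.8 Hz lies near the maximum sensitivity
    of the standard flicker curve."""
    carrier = np.sqrt(2.0) * v_rms * np.sin(2.0 * np.pi * f_line * t)
    modulation = 1.0 + 0.5 * dv_over_v * np.sin(2.0 * np.pi * f_mod * t)
    return modulation * carrier

t = np.arange(0.0, 1.0, 1e-4)    # 1 s sampled at 10 kHz
v = flicker_test_voltage(t)      # stimulus fed to the lamp model
```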
Abstract:
Non-Equilibrium Statistical Mechanics is a broad subject. Roughly speaking, it deals with systems which have not yet relaxed to an equilibrium state, with systems which are in a steady non-equilibrium state, or with more general situations. Such systems are characterized by external forcing and internal fluxes, resulting in a net production of entropy which quantifies dissipation and the extent to which, by the Second Law of Thermodynamics, time-reversal invariance is broken. In this thesis we discuss some of the mathematical structures involved in generic discrete-state-space non-equilibrium systems, which we depict with networks fully analogous to electrical networks. We define suitable observables and derive their linear-regime relationships; we discuss a duality between external and internal observables that reverses the roles of the system and of the environment; and we show that network observables serve as constraints for a derivation of the minimum entropy production principle. We dwell on deep combinatorial aspects of linear response determinants, which are related to spanning tree polynomials in graph theory, and we give a geometrical interpretation of observables in terms of Wilson loops of a connection and gauge degrees of freedom. We specialize the formalism to continuous-time Markov chains, give a physical interpretation of observables in terms of locally detailed balanced rates, prove many variants of the fluctuation theorem, and show that a well-known expression for the entropy production due to Schnakenberg descends from considerations of gauge invariance, where the gauge symmetry is related to the freedom in the choice of a prior probability distribution. As an additional topic of geometrical flavor related to continuous-time Markov chains, we discuss the Fisher-Rao geometry of non-equilibrium decay modes, showing that the Fisher matrix contains information about many aspects of non-equilibrium behavior, including non-equilibrium phase transitions and superposition of modes. We establish a sort of statistical equivalence principle and discuss the behavior of the Fisher matrix under time reversal. To conclude, we propose that geometry and combinatorics might greatly increase our understanding of non-equilibrium phenomena.
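As a concrete anchor for the Markov-chain material, Schnakenberg's well-known expression for the entropy production rate of a continuous-time Markov chain with rates w_ij (jump j to i) and distribution p reads sigma = (1/2) sum_ij (w_ij p_j - w_ji p_i) ln[(w_ij p_j)/(w_ji p_i)]; a minimal sketch of that formula (the three-state rates below are illustrative, not taken from the thesis):

```python
import numpy as np

def entropy_production(W, p):
    """Schnakenberg entropy production rate: W[i, j] is the jump
    rate j -> i, p the state distribution."""
    sigma = 0.0
    n = len(p)
    for i in range(n):
        for j in range(n):
            if i != j and W[i, j] > 0 and W[j, i] > 0:
                flux = W[i, j] * p[j] - W[j, i] * p[i]
                force = np.log((W[i, j] * p[j]) / (W[j, i] * p[i]))
                sigma += 0.5 * flux * force   # each edge counted once
    return sigma

# three-state driven cycle (illustrative rates, zero diagonal)
W = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])
# stationary distribution: null vector of the generator L = W - diag(exit rates)
L = W - np.diag(W.sum(axis=0))
w, v = np.linalg.eig(L)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()
print(entropy_production(W, p))   # positive: the cycle is driven
```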
Abstract:
Modern internal combustion engines are becoming increasingly complex in terms of their control systems and strategies. The growth in the complexity of the algorithms results in a rise in the number of quantities to be evaluated on board for control purposes. In order to improve combustion efficiency and, simultaneously, limit the amount of pollutant emissions, the on-board evaluation of two quantities in particular has become essential: namely, the indicated torque produced by the engine and the angular position at which 50% of the fuel mass injected over an engine cycle is burned (MFB50). The above-mentioned quantities can be evaluated through the measurement of in-cylinder pressure. Nonetheless, at present, the installation of in-cylinder pressure sensors on vehicles is extremely uncommon, mainly because of measurement reliability and costs. This work illustrates a methodological approach for the estimation of indicated torque and MFB50 that is based on the measurement of engine speed fluctuations. The methodology is compatible with typical on-board application constraints. Moreover, it requires no additional costs, since speed can be measured using the system already mounted on the vehicle, which consists of a magnetic pick-up facing a toothed wheel. The estimation algorithm consists of two main parts: first, the evaluation of the indicated torque fluctuation based on the speed measurement, and second, the evaluation of the mean value of the indicated torque (over an engine cycle) and of MFB50 using the relationship between the indicated torque harmonic and other engine quantities. The procedure has been successfully applied to an L4 turbocharged diesel engine mounted on board a vehicle.
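The first step above works on individual harmonics of the speed signal in the crank-angle domain; a generic sketch of extracting one such harmonic by FFT (the engine order, sampling grid and synthetic signal are illustrative, not the thesis's calibration):

```python
import numpy as np

def cycle_harmonic(omega, k):
    """Complex amplitude of the k-th harmonic per engine cycle of the
    speed signal omega, sampled uniformly over exactly one engine cycle."""
    return 2.0 * np.fft.rfft(omega)[k] / len(omega)

# 4-cylinder, 4-stroke: 4 firings per engine cycle -> dominant harmonic k = 4
x = np.arange(720) / 720.0                             # crank angle as cycle fraction
omega = 150.0 + 3.0 * np.cos(2 * np.pi * 4 * x - 0.4)  # synthetic speed [rad/s]
amp = cycle_harmonic(omega, 4)
print(abs(amp), np.angle(amp))                         # ~3.0 and ~-0.4 rad
```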
Abstract:
The thesis takes as its object of inquiry the paths of political and civic participation of young people in contexts of transition to adult life, focusing on the influence of intergenerational relations on these expressions of involvement. The empirical part consists of a qualitative study conducted in the Navile district of Bologna in 2012. Based on the methodological approach of grounded theory, it involved a sample of young people and a sample of adults significant to them, through semi-structured interviews. The analysis reveals a marked youth disaffection with politics which, however, does not translate into a refusal of involvement, but into a "participation with reservations" expressed through attitudes toward formal politics that are anything but passive, based on logics of reform, resistance or rebellion, and through a strong investment in unconventional participatory activities (associational life and involvement). The backdrop to young people's participatory interest is a negative reading of their own present condition and a consequent, rather overt, intergenerational conflict, which is reflected in the very modes of activation. Politics, in its most strictly formal expressions, is interpreted as an "adult territory", managed according to logics that leave little room for young people, who for this reason choose to get involved in alternative ways in which the encounter with the other, when present, occurs mainly among peers or on bases perceived as more equal. Young people's distancing from formal politics thus reflects a parallel distancing from adults, who appear lost in performing their functions of role model and recognition. Adults' ambivalence toward the young, their continual oscillation between deep pessimism and blind optimism, between directive guidance and disengagement from responsibility, translates into a partial recognition of the real potential and needs of young people as citizens and emerging adults.
Abstract:
Virgin olive oil (VOO) is a product of high economic and nutritional value, because of its superior sensory characteristics and its content of minor compounds (phenols and tocopherols). Since the original quality of VOO may change during storage, this study aimed to investigate the influence of different storage and shipment conditions on VOO quality, examining different protective solutions such as filtration, dark storage and shipment inside insulated containers. Different analytical techniques were used to follow the quality changes during virgin olive oil storage and simulated shipments, in terms of basic quality parameters, sensory analysis and evaluation of minor components (phenolic compounds, diglycerides, volatile compounds). Four main research streams are presented in this PhD thesis. The results of the first experimental section revealed that the application of filtration and/or clarification can decrease the unavoidable quality loss of oil samples during storage, in comparison with unfiltered samples. The second section indicated that virgin olive oil freshness, evaluated by diglyceride content, is mainly affected by storage time and temperature. The third section revealed that temperature fluctuation during storage may adversely affect virgin olive oil quality, in terms of hydrolytic rancidity and oxidative status. The fourth section showed that virgin olive oil shipped inside insulated containers underwent less hydrolytic and oxidative degradation than oil shipped without an insulating cover. Overall, this PhD thesis highlighted that the application of an adequate treatment, such as filtration or clarification, in addition to good protection against other external variables, such as temperature and light, will improve the stability of virgin olive oil during storage.