904 results for Turn Around Time


Relevance: 30.00%

Abstract:

This study aims to analyse the degree of completeness of the world inventory of the mite family Phytoseiidae and the factors that might determine the process of species description. The world data set includes 2,122 valid species described from 1839 to 2010. Species accumulation curves were analysed, and the effect of location (latitude range) and body size on species description patterns over space and time was assessed. A low proportion of species seems to remain to be described, but this trend could be explained by a critical reduction in the number of specialists dedicated to the study of these mites. In addition, this trend applies to the areas of the world where phytoseiids have already been well studied, and it may change considerably if the study of these mites were intensified in other areas. The number of newly described species is lower near the tropics, and their body size is also smaller. Differences in body size were noted between the three sub-families of Phytoseiidae, the highest mean body length of adult females being observed for Amblyseiinae, the most diverse sub-family. Future collections will certainly have to take such conclusions into consideration, for instance by using more adequate optical equipment, especially for field collections. The decrease in the number of phytoseiid mites described was confirmed, and the factors that could explain such a trend are discussed. Information for improving further inventories is provided and discussed, especially in relation to sampling location and study methods.

Relevance: 30.00%

Abstract:

Two hundred and eighty-eight 32-wk-old Hisex White laying hens were used in this research over a 10-week period, in a 2 x 5 completely randomized factorial design with three replicates of eight birds per treatment. Two sources, fish oil (OP) and marine algae (AM), were combined with five DHA levels (120, 180, 240, 300 and 360 mg/100 g of diet), together with two control groups, birds fed a corn-soybean basal diet (CON) and a diet supplemented with AM (AM420), to study the effect of time (0, 2, 4, 6 and 8 weeks) on the efficiency of egg yolk fatty acid enrichment. The means varied (p<0.01) from 17.63% (OP360) to 22.08% (AM420) for total polyunsaturated fatty acids (PUFAs), and from 45.8 mg/g (OP360) and 40.37 mg/g (OP360, 4 wk) to 65.82 mg/g (AM420) and 68.79 mg/g of yolk (AM120, 8 wk) for n-6 PUFAs. Regarding the influence of sources and levels over time, the means of n-3 PUFAs increased from 5.58 mg/g (AM120, 2 wk) to 14.16 mg/g (OP360, 6 wk), compared with an average of 3.34 mg n-3 PUFAs/g of yolk (CON). In general, the mean DHA content per yolk also increased, from 22.34 mg (CON) to 176.53 mg (μ, OP360), 187.91 mg (OP360, 8 wk) and 192.96 mg (OP360, 6 wk), and to 134.18 mg (μ, OP360), 135.79 mg (AM420, 6 wk) and 149.75 mg DHA (AM420, 8 wk). The opposite was observed for the mean AA content: under the effect of sources, levels and times, it decreased (p<0.01) from 99.83 mg (CON) to 31.99 mg (OP360, 4 wk), 40.43 mg (μ, OP360), 61.21 mg (AM420) and 71.51 mg AA/yolk (μ, AM420). Variations in the average yolk weight, from 15.75 g (OP360) to 17.08 g (AM420), in total lipids, from 32.55% (AM420) to 34.08% (OP360), and in yolk fat, from 5.28 g (AM240) to 5.84 g (AM120), were not affected (p>0.05) by the treatments, sources, levels and times studied. Starting from week 2, the hens increased the level of n-3 PUFAs in the egg yolks, with a marked increase (p<0.01) until week 4, after which the n-3 PUFA levels tended to stabilize towards week 8 of the experiment, when saturation of the tissues and yolk was most effective.

Relevance: 30.00%

Abstract:

This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we brought the reader through the fundamental notions of probability and stochastic processes, as well as stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modelling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric estimation methods. We then introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modelling systems with long-memory properties. After introducing the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. We then focused on generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed order. In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), which is a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work we remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t); all these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space; however, we have been able to provide a characterization irrespective of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process, and in particular that it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduced and analysed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which has been made non-local in time by the introduction of a suitably chosen memory kernel K(t).
The resulting non-Markovian equation has been interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t). We developed several applications and derived exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
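
As a hedged illustration of the kind of non-Markovian Gaussian processes discussed above, the sketch below simulates fractional Gaussian noise (the increment process of fractional Brownian motion) from its exact autocovariance via a Cholesky factorization. The Hurst parameter, sample size and the use of the Cholesky method are illustrative assumptions for the example, not choices taken from the thesis.

```python
import numpy as np

def fgn_autocovariance(k, hurst):
    """Exact autocovariance of fractional Gaussian noise at lag k."""
    k = np.abs(k)
    return 0.5 * ((k + 1) ** (2 * hurst) - 2 * k ** (2 * hurst)
                  + np.abs(k - 1) ** (2 * hurst))

def simulate_fgn(n, hurst, rng=None):
    """Simulate n steps of fractional Gaussian noise via Cholesky factorization.

    For hurst > 0.5 the increments are positively correlated (long-range
    dependence); hurst = 0.5 recovers ordinary white Gaussian noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    lags = np.arange(n)
    cov = fgn_autocovariance(lags[None, :] - lags[:, None], hurst)
    L = np.linalg.cholesky(cov)          # O(n^3), fine for modest n
    return L @ rng.standard_normal(n)

if __name__ == "__main__":
    H = 0.75                              # illustrative Hurst exponent
    noise = simulate_fgn(2000, H)
    fbm = np.cumsum(noise)                # fractional Brownian motion path
    print("sample variance of increments:", noise.var())
```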

Relevance: 30.00%

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E > 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources with those of higher luminosity, and thus was also used to test the emission mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, and in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow Fe K line energy centroid) are similar in type I and type II objects, while the absorbing column and the iron line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM-Newton and X-CfA samples, I also deduced from the measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow Fe K line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration. These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using, for the first time, the 20-100 keV luminosity (EW proportional to L(20-100)^(-0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been found in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, emerges naturally if one supposes that the accretion disk penetrates the central corona to different depths depending on the accretion rate (Merloni et al. 2006): higher-accreting systems host disks extending down to the last stable orbit, while lower-accreting systems host truncated disks. On the other hand, the study of the well-defined X-CfA sample of Seyfert galaxies has shown that the intrinsic X-ray luminosity of nearby Seyfert galaxies spans values between 10^38 and 10^43 erg s^-1, i.e. it covers a huge range of accretion rates. The least efficient systems have been supposed to host ADAF systems without an accretion disk.
However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained when high-luminosity objects are considered. Thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is to assume that the ADAF and the two-phase mechanism co-exist, with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation). The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM-Newton. The results obtained show that the accretion flow can differ significantly between objects when it is analysed in the appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form, spiralling in, at the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recursive modulations have been measured both in the continuum emission and in the broad emission line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission line component. Blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter which is not confined to the accretion disk and which moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, a way to tackle the formation of ejecta/jets, and constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM. Future high-energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest-velocity outflows.
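
As a hedged aside on how a power-law relation such as the X-ray Baldwin effect quoted above (EW ∝ L^-0.22±0.05) is typically quantified, the sketch below fits a straight line in log-log space to recover the slope. The data are synthetic and the numbers are invented for the example; they are not the thesis measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (hypothetical) sample: log 20-100 keV luminosities and
# log Fe K equivalent widths following EW ~ L^-0.22 with scatter.
log_L = rng.uniform(41.0, 45.0, size=40)
true_slope = -0.22
log_EW = 2.0 + true_slope * (log_L - 43.0) + rng.normal(0.0, 0.1, size=40)

# A power law EW = A * L^alpha is a straight line in log-log space,
# so the slope alpha can be recovered by a simple linear fit.
slope, intercept = np.polyfit(log_L - 43.0, log_EW, 1)
print(f"fitted slope alpha = {slope:.3f} (input {true_slope})")
```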

Relevance: 30.00%

Abstract:

In territories where food production is mostly scattered across many small or medium-sized, or even domestic, farms, a large amount of heterogeneous residues is produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production processes carried out. Coupling high-efficiency micro-cogeneration units with easily handled biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community as well. Increasing the feedstock flexibility of gasification units is therefore seen as a further paramount step towards their wide adoption in rural areas and as a real necessity for their use at small scale. Two main research topics were considered of main concern for this purpose, and they are discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. Accordingly, the present work is divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then modelled analytically in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials. For this purpose, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. The differences in syngas production and working conditions (process temperatures, above all) among the considered fuels were related to biomass properties such as elemental composition and ash and water contents. The novelty of this analytical approach was the use of ratios of kinetic constants to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone, which also linked together the energy and mass balances involved in the process algorithm. Moreover, the main advantage of this analytical tool is the ease with which the input data for a particular biomass material can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion properties can be obtained, based mainly on its chemical composition. A good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition did not fit the model assumptions).
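
As a minimal sketch of the equilibrium step mentioned above, the snippet below adjusts a trial syngas composition so that the water-gas shift reaction (CO + H2O ⇌ CO2 + H2) satisfies its equilibrium constant at a given gasification temperature. The temperature correlation for the equilibrium constant is a simple expression often quoted in downdraft gasifier equilibrium models, and the inlet composition is purely illustrative; neither is taken from the thesis.

```python
import numpy as np

def wgs_equilibrium_constant(T_kelvin):
    """Water-gas shift equilibrium constant K(T).

    Simple correlation commonly used in equilibrium gasification models
    (assumed here for illustration): ln K = 4276/T - 3.961.
    """
    return np.exp(4276.0 / T_kelvin - 3.961)

def shift_to_equilibrium(n_co, n_h2o, n_co2, n_h2, T_kelvin):
    """Return molar amounts after letting CO + H2O <-> CO2 + H2 reach equilibrium.

    Solves K = (n_co2 + x)(n_h2 + x) / ((n_co - x)(n_h2o - x)) for the
    reaction extent x (total moles are conserved, so mole fractions cancel).
    """
    K = wgs_equilibrium_constant(T_kelvin)
    # Rearranged into a quadratic a*x^2 + b*x + c = 0
    a = 1.0 - K
    b = n_co2 + n_h2 + K * (n_co + n_h2o)
    c = n_co2 * n_h2 - K * n_co * n_h2o
    roots = np.roots([a, b, c])
    # Keep the physically admissible extent (real, within reactant limits)
    x = next(r.real for r in roots
             if abs(r.imag) < 1e-12
             and -min(n_co2, n_h2) <= r.real <= min(n_co, n_h2o))
    return n_co - x, n_h2o - x, n_co2 + x, n_h2 + x

if __name__ == "__main__":
    # Hypothetical raw composition (mol) leaving the oxidation/pyrolysis zones
    print(shift_to_equilibrium(n_co=0.30, n_h2o=0.20, n_co2=0.10,
                               n_h2=0.15, T_kelvin=1073.0))
```
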
Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to regulate the main solid conversion steps involved in the gasification process. Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was related to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that char gasification is kinetically limited for all the considered materials, so that temperature is the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, working temperature, particle size and the nature of the biomass itself (through its pyrolysis heat) all have comparable weights on the process development, so that the corresponding time may depend on any of these factors, according to the particular fuel being gasified and the conditions established inside the gasifier. The same analysis also led to an estimate of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit appears suitable for more than one biomass species. Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone, as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air-injection nozzles at different levels along the reactor, the gasification zone could be properly located according to the material being gasified at a given time. Finally, since gasification and pyrolysis times were found to change considerably even with small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would allow the complete conversion of the solids in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially when multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study of gas cleaning line design is carried out.
Differently from other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines able to remove several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding respectively to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements (paths), following technical constraints that were mainly determined from the same performance analysis of the cleaning units and from the likely synergic effects of contaminants on the proper working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in the path design was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. For this purpose, a catalytic tar cracking unit was identified as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, and a consequently relevant air consumption for this operation, was calculated in all cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines also proved not to be feasible for the two larger plant sizes. Indeed, since the use of temperature control devices was excluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the large increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements were finally designed for each considered plant size, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of defined operational parameters, among which total pressure drops, total energy losses, number of units and secondary material consumption.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss associated with the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy content of the respective gas streams, the latter based on the lower heating value of the gas only. This overall study of gas cleaning systems is thus proposed as an analytical tool with which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
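
To make the path comparison step concrete, the sketch below aggregates per-unit operating parameters along candidate cleaning paths and ranks them; the unit names, parameter values and the equal-weight ranking rule are hypothetical placeholders, not figures or criteria from the thesis.

```python
from dataclasses import dataclass

@dataclass
class CleaningUnit:
    name: str
    pressure_drop_mbar: float
    energy_loss_kw: float
    consumables_kg_per_h: float

def path_totals(path):
    """Sum the operational parameters of all units along a cleaning path."""
    return {
        "pressure_drop_mbar": sum(u.pressure_drop_mbar for u in path),
        "energy_loss_kw": sum(u.energy_loss_kw for u in path),
        "consumables_kg_per_h": sum(u.consumables_kg_per_h for u in path),
        "n_units": len(path),
    }

def rank_paths(paths):
    """Rank paths by a simple equal-weight normalized score (lower is better)."""
    totals = {name: path_totals(p) for name, p in paths.items()}
    keys = ["pressure_drop_mbar", "energy_loss_kw", "consumables_kg_per_h", "n_units"]
    maxima = {k: max(t[k] for t in totals.values()) or 1.0 for k in keys}
    scores = {name: sum(t[k] / maxima[k] for k in keys) for name, t in totals.items()}
    return sorted(scores.items(), key=lambda item: item[1])

if __name__ == "__main__":
    # Hypothetical units and paths for a small (8 Nm3/h) plant
    cyclone = CleaningUnit("cyclone", 5.0, 0.1, 0.0)
    ceramic_filter = CleaningUnit("ceramic filter", 20.0, 0.3, 0.2)
    tar_cracker = CleaningUnit("catalytic tar cracker", 10.0, 1.5, 0.5)
    water_scrubber = CleaningUnit("water scrubber", 15.0, 0.8, 4.0)
    activated_carbon = CleaningUnit("activated carbon bed", 12.0, 0.2, 0.6)

    paths = {
        "dry": [cyclone, tar_cracker, ceramic_filter, activated_carbon],
        "wet": [cyclone, tar_cracker, water_scrubber],
    }
    for name, score in rank_paths(paths):
        print(name, round(score, 3), path_totals(paths[name]))
```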

Relevance: 30.00%

Abstract:

The vertical profile of aerosol in the planetary boundary layer of the Milan urban area is studied, in terms of its development and chemical composition, in a high-resolution modelling framework. The period of study spans a week in the summer of 2007 (12-18 July), when continuous LIDAR measurements and a limited set of balloon profiles were collected in the frame of the ASI/QUITSAT project. LIDAR observations show the diurnal development of an aerosol plume that lifts early-morning surface emissions to the top of the boundary layer, reaching maximum concentration around midday. Mountain breezes from the Alps clean the bottom of the aerosol layer, typically leaving a residual layer at around 1500-2000 m which may survive for several days. During the last two days under analysis, a dust layer transported from the Sahara reaches the upper layers over the Milan area and affects the aerosol vertical distribution in the boundary layer. Simulations with the MM5/CHIMERE modelling system, carried out at 1 km horizontal resolution, qualitatively reproduce the general features of the Milan aerosol layer observed with the LIDAR, including the rise and fall of the aerosol plume, the residual layer aloft and the Saharan dust event. The simulations highlight the importance of nitrates and secondary organics in its composition. Several sensitivity tests showed that the main driving factors leading to the dominance of nitrates in the plume are temperature and the gas absorption process. The modelling study then turns to the analysis of the vertical aerosol profile distribution and the characterization of PM at a site near the city of Milan, using a model system composed of the meteorological model MM5 (V3-6), the mesoscale model from PSU/NCAR, and the chemical transport model (CTM) CHIMERE to simulate the vertical aerosol profile. Continuous LiDAR observations and balloon profiles collected during two intensive campaigns, in summer 2007 and winter 2008, in the frame of the ASI/QUITSAT project have been used for comparisons, in order to evaluate the ability of the aerosol chemistry-transport model CHIMERE to simulate aerosol dynamics and composition in this area. The comparison of model aerosols with measurements is carried out over the full period between 12 July 2007 and 18 July 2007. The comparisons demonstrate the ability of the model to reproduce correctly the aerosol vertical distributions and their temporal variability. As detected by the LiDAR, the model predicts, during the period considered, the diurnal development of a plume during the morning and a clearing during the afternoon; the plume typically reaches the top of the boundary layer around midday, at which time CHIMERE produces the highest concentrations in the upper levels, as detected by the LiDAR. The model can, moreover, reproduce the enhanced aerosol concentrations observed by the LiDAR above the boundary layer, attributing the phenomenon to a dust intrusion. Another important piece of information from the model analysis concerns composition: the model predicts that a large part of the plume is composed of nitrate, in particular during 13 and 16 July 2007, pointing to the model's tendency to overestimate the nitrate component in the vertical structure of the particulate matter.
The sensitivity studies carried out in this work show that a combination of different factors determines the predominantly nitrate composition of the observed plume; in particular, humidity, temperature and the absorption process are the main candidates to explain the principal differences in composition simulated over the period studied, with the CHIMERE model appearing most sensitive to the absorption process.
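
As a hedged illustration of how modelled and LiDAR-derived vertical profiles can be compared, the snippet below interpolates a model profile onto the LiDAR height levels and computes simple agreement metrics; the profiles, heights and variable names are invented for the example and are not data from the QUITSAT campaigns.

```python
import numpy as np

def compare_profiles(z_lidar, obs, z_model, model):
    """Interpolate a model profile onto LiDAR heights and return RMSE,
    mean bias and Pearson correlation."""
    model_on_lidar = np.interp(z_lidar, z_model, model)
    diff = model_on_lidar - obs
    rmse = np.sqrt(np.mean(diff ** 2))
    bias = np.mean(diff)
    corr = np.corrcoef(model_on_lidar, obs)[0, 1]
    return rmse, bias, corr

if __name__ == "__main__":
    # Invented example profiles (height in m, aerosol in arbitrary units)
    z_lidar = np.arange(300, 3000, 100)
    obs = np.exp(-z_lidar / 1500.0) + 0.1 * (np.abs(z_lidar - 1800) < 200)
    z_model = np.arange(0, 4000, 250)          # coarser model levels
    model = np.exp(-z_model / 1400.0)
    print(compare_profiles(z_lidar, obs, z_model, model))
```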

Relevance: 30.00%

Abstract:

While the use of distributed intelligence has been incrementally spreading in the design of a great number of intelligent systems, the field of Artificial Intelligence in Real-Time Strategy (RTS) games has remained a mostly centralized environment. Although turn-based games have attained AIs of world-class level, the fast-paced nature of RTS games has proven to be a significant obstacle to the quality of their AIs. Chapter 1 introduces RTS games, describing their characteristics, mechanics and elements. Chapter 2 introduces Multi-Agent Systems and the use of the Beliefs-Desires-Intentions abstraction, analysing the possibilities offered by self-computing properties. In Chapter 3 the current state of AI development in RTS games is analysed, highlighting the struggles of the gaming industry to produce valuable AIs: the focus on improving the multiplayer experience has impacted gravely on the quality of the AIs, leaving them with serious flaws that impair their ability to challenge and entertain players. Chapter 4 explores different aspects of AI development for RTS games, evaluating the potential strengths and weaknesses of an agent-based approach and analysing which aspects can benefit the most compared with centralized AIs. Chapter 5 describes a generic agent-based framework for RTS games where every game entity becomes an agent, each having its own knowledge and set of goals. Different aspects of the game, such as economy, exploration and warfare, are also analysed, and some agent-based solutions are outlined. The possible exploitation of self-computing properties to efficiently organize the agents' activity is then inspected. Chapter 6 presents the design and implementation of an AI for an existing open-source game in beta development stage: 0 a.d., a historical RTS game on ancient warfare which features a modern graphical engine and evolved mechanics. The entities in the conceptual framework are implemented in a new agent-based platform seamlessly nested inside the existing game engine, called ABot, described at length in Chapters 7, 8 and 9. Chapters 10 and 11 cover the design and realization of a new agent-based language useful for defining behavioural modules for the agents in ABot, paving the way for a wider spectrum of contributors. Chapter 12 concludes the work by analysing the outcome of tests meant to evaluate strategies, realism and pure performance, and conclusions and future work are finally drawn in Chapter 13.
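
To make the agent-based framing concrete, here is a minimal, hypothetical sketch of a BDI-style game-entity agent: beliefs are updated from the perceived game state, a desire is selected, and an intention (a concrete course of action) is committed to and executed each tick. Class and method names are illustrative only and do not correspond to ABot's actual API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UnitAgent:
    """Minimal BDI-style agent for a single RTS game entity (illustrative only)."""
    name: str
    beliefs: dict = field(default_factory=dict)   # what the agent thinks is true
    desires: list = field(default_factory=list)   # goals it would like to achieve
    intention: Optional[str] = None               # the goal it is committed to

    def perceive(self, game_state: dict) -> None:
        """Update beliefs from the (partial) game state visible to this unit."""
        self.beliefs.update(game_state)

    def deliberate(self) -> None:
        """Pick a desire to commit to, based on current beliefs."""
        if self.beliefs.get("enemy_nearby"):
            self.intention = "attack" if self.beliefs.get("healthy", True) else "retreat"
        elif "gather_resources" in self.desires:
            self.intention = "gather_resources"
        else:
            self.intention = "explore"

    def act(self) -> str:
        """Execute one step of the committed intention (here, just report it)."""
        return f"{self.name} -> {self.intention}"

if __name__ == "__main__":
    scout = UnitAgent("scout-1", desires=["gather_resources"])
    scout.perceive({"enemy_nearby": True, "healthy": False})
    scout.deliberate()
    print(scout.act())   # scout-1 -> retreat
```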

Relevance: 30.00%

Abstract:

This work focused mainly on two aspects of the kinetics of phase separation in binary mixtures. In the first part, we studied the interplay of hydrodynamics and the phase separation of binary mixtures. A considerably flat container (a laterally extended geometry), with an aspect ratio of 14:1 (diameter:height), was chosen so that any hydrodynamic instabilities, if they arose, could be tracked. Two binary mixtures were studied. One was a mixture of methanol and hexane, doped with 5% ethanol, which phase separated on cooling. The second was a mixture of butoxyethanol and water, doped with 2% decane, which phase separated on heating. The dopants were added to bring the phase transition temperature down to around room temperature.

Although much work has already been done on classical hydrodynamic instabilities, little has been done on understanding the coupling between phase separation and hydrodynamic instabilities. This work aimed at understanding the influence of phase separation in initiating a hydrodynamic instability, and also vice versa. Another aim was to understand the influence of the applied temperature protocol on the emergence of patterns characteristic of hydrodynamic instabilities.

On slowly and continuously cooling the system at specific cooling rates, patterns were observed in the first mixture at the start of phase separation. They resembled the patterns observed in the classical Rayleigh-Bénard instability, which arises when a liquid is continuously heated from below. To suppress this classical convection, the cooling setup was tuned such that the lower side of the sample always remained cooler by a few millikelvins relative to the top. We found that the nature of the patterns changed with the cooling rate, with stable patterns appearing for a specific cooling rate (1 K/h). On the basis of the cooling protocol, we estimated a modified Rayleigh number for our system. We found that the estimated modified Rayleigh number is near the critical value for instability for cooling rates between 0.5 K/h and 1 K/h, which is consistent with our experimental findings.

The origin of the patterns, in spite of the lower side being relatively colder than the top, points to two possible reasons. 1) During phase separation, droplets of either phase are formed, which releases latent heat. Our microcalorimetry measurements show that the rise in temperature during the first phase separation is of the order of 10-20 millikelvins, which in some cases is enough to reverse the applied temperature bias. Thus phase separation in itself initiates a hydrodynamic instability. 2) The second reason comes from the cooling protocol itself. The sample was cooled from above and below. At sufficiently high cooling rates there are situations where the interior of the sample is relatively hotter than both the top and the bottom of the sample. This is sufficient to create an instability within the cell. Our experiments at higher cooling rates (5 K/h and above) show complex patterns, which hints that there is enough convection even before phase separation occurs. In fact, theoretical work by Dr. Hayase shows that patterns can arise in a system without latent heat, with symmetrical cooling from top and bottom. The simulations also show that the patterns do not span the entire height of the sample cell, which is again consistent with the cell sizes measured in our experiment.
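
For orientation, the classical Rayleigh number against which such estimates are compared is Ra = g·β·ΔT·h³/(ν·κ). The sketch below simply evaluates this textbook expression for illustrative fluid parameters and layer height; these values are placeholders, not the properties of the mixtures studied here, and the thesis itself uses a modified Rayleigh number derived from the cooling protocol.

```python
def rayleigh_number(delta_t, height, beta, nu, kappa, g=9.81):
    """Classical Rayleigh number Ra = g * beta * delta_t * h^3 / (nu * kappa).

    delta_t : temperature difference across the layer (K)
    height  : layer height (m)
    beta    : thermal expansion coefficient (1/K)
    nu      : kinematic viscosity (m^2/s)
    kappa   : thermal diffusivity (m^2/s)
    """
    return g * beta * delta_t * height ** 3 / (nu * kappa)

if __name__ == "__main__":
    # Illustrative, order-of-magnitude values for a thin organic liquid layer
    ra = rayleigh_number(delta_t=0.01, height=2e-3, beta=1e-3,
                         nu=5e-7, kappa=8e-8)
    print(f"Ra = {ra:.0f} (critical value for rigid-rigid boundaries ~ 1708)")
```
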
The second mixture also showed patterns, at specific heating rates, when it was continuously heated to induce phase separation. In this case, though, the sample was turbid for a long time before patterns appeared, and a meniscus was most probably formed before they emerged. We attribute the patterns in this case to Marangoni convection, which is present in systems with an interface, where local differences in surface tension give rise to an instability. Our estimate of the Rayleigh number is also significantly lower than that required for a Rayleigh-Bénard-type instability.

In the first part of the work, therefore, we identify two different kinds of hydrodynamic instabilities in two different mixtures. Both are observed during, or after, the first phase separation. Our patterns compare with classical convection patterns, but here they originate from phase separation and the cooling protocol.

In the second part of the work, we focused on the kinetics of phase separation in a polymer solution (polystyrene and methylcyclohexane), which is cooled continuously far into the two-phase region. Oscillations in turbidity, denoting material exchange between the phases, are seen. Three processes contribute to the phase separation: nucleation of droplets, their growth and coalescence, and their subsequent sedimentation. Experiments in low-molecular binary mixtures had led to models of oscillation [43] which considered sedimentation time scales much faster than the time scales of nucleation and growth. The size and shape of the sample therefore did not matter in such situations: the oscillations in turbidity were volume-dominated. The present work aimed at understanding the influence of sedimentation time scales in polymer mixtures. Three sample heights with the same composition were studied side by side. We found that the periods increased with sample height, showing that the sedimentation time determines the period of oscillations in the polymer solutions. We experimented with different cooling rates and different compositions of the mixture, and found that the periods are still determined by the sample height, and therefore by the sedimentation time.

We also see that turbidity emerges in two ways: either from the interface, or throughout the sample. We suggest that oscillations starting from the interface are due to satellite droplets that are formed on droplet coalescence at the interface. These satellite droplets are then advected to the top of the sample, where they grow, coalesce and sediment. This type of oscillation would not require the system to pass the energy barrier required for homogeneous nucleation throughout the sample. This mechanism works best in samples where the droplets can be effectively advected throughout the sample. In our experiments, we see more interface-dominated oscillations in the smaller cells and at lower cooling rates, where droplet advection is favourable. In larger samples and at higher cooling rates, we mostly see the whole sample becoming turbid homogeneously, which requires the system to pass the energy barrier for homogeneous nucleation.

Oscillations, in principle, occur because the system needs to pass an energy barrier for nucleation. The height of the barrier decreases with increasing supersaturation, which in turn results from the applied temperature ramp. This gives rise to a period during which the system is clear, in between the turbid periods.
At certain specific cooling rates, the system can follow a path such that the start of a turbid period coincides with the vanishing of the last one, thus eliminating the clear periods. This means the suppression of oscillations altogether. In fact, we experimentally present a case where, at a certain cooling rate, the oscillations indeed vanish.

Thus we find through this work that the kinetics of phase separation in a polymer solution differs from that of a low-molecular system: sedimentation time scales become relevant, and therefore so do the shape and size of the sample. The role of the interface in initiating turbid periods also becomes much more prominent in this system compared with low-molecular mixtures.

In summary, some fundamental properties of the kinetics of phase separation in binary mixtures were studied. While the first part of the work described the close interplay of the first phase separation with hydrodynamic instabilities, the second part investigated the nature and determining factors of the oscillations observed when the system is cooled deep into the two-phase region. Both cases show how the geometry of the cell can affect the kinetics of phase separation. This study leads to a further fundamental understanding of the factors contributing to the kinetics of phase separation, and of what can be controlled and tuned in practical cases.

Relevance: 30.00%

Abstract:

Cytochrome c oxidase (CcO), complex IV of the respiratory chain, is one of the haem-copper oxidases and has an important function in cell metabolism. The enzyme contains four prosthetic groups and is located in the inner membrane of mitochondria and in the cell membrane of some aerobic bacteria. CcO catalyses the electron transfer (ET) from cytochrome c to O2, with the actual reaction taking place at the binuclear centre (CuB-haem a3). The reduction of O2 to two H2O consumes four protons. In addition, four protons are transported across the membrane, establishing an electrochemical potential difference of these ions between the matrix and the intermembrane space. Despite their importance, membrane proteins such as CcO are still poorly studied, which is why the mechanism of the respiratory chain has not yet been fully elucidated. The aim of this work is to contribute to the understanding of the function of CcO. To this end, CcO from Rhodobacter sphaeroides was bound in a defined orientation to a functionalized metal electrode via a His-tag attached to the C-terminus of subunit II. The first electron acceptor, CuA, thereby lies closest to the metal surface. A lipid bilayer was then inserted in situ between the bound proteins, resulting in a so-called protein-tethered bilayer lipid membrane (ptBLM). The optimal surface concentration of bound protein had to be determined. Electrochemical impedance spectroscopy (EIS), surface plasmon resonance spectroscopy (SPR) and cyclic voltammetry (CV) were used to characterize the activity of CcO as a function of packing density. The main part of the work concerns the investigation of direct ET to CcO under anaerobic conditions. The combination of time-resolved surface-enhanced infrared absorption spectroscopy (tr-SEIRAS) and electrochemistry proved particularly suitable for this purpose. In a first study, ET was investigated by means of fast-scan CV, recording CVs of non-activated as well as activated CcO at different scan rates. The activated form was obtained after catalytic turnover of the protein in the presence of O2. A four-ET model was developed to analyse the CVs. The method makes it possible to distinguish between the mechanisms of sequential and independent ET to the four centres CuA, haem a, haem a3 and CuB, and to determine the standard redox potentials and the kinetic coefficients of the ET. In a second study, tr-SEIRAS was applied in step-scan mode. Square-wave potential pulses were applied to the CcO, and SEIRAS in ATR mode was used to record spectra at defined time slices. From these spectra, individual bands were isolated that show changes in the vibrational modes of amino acids and peptide groups as a function of the redox state of the centres. On the basis of assignments from the literature, obtained by potentiometric titration of CcO, the bands could be tentatively assigned to the redox centres. The band areas plotted against time then reflect the redox kinetics of the centres and were again evaluated with the four-ET model. The results of both studies lead to the conclusion that ET to CcO in a ptBLM most probably follows the sequential mechanism, which corresponds to the natural ET from cytochrome c to CcO.
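
As a hedged illustration of what a sequential electron-transfer scheme looks like kinetically, the sketch below integrates first-order rate equations for an electron hopping along a linear chain electrode → CuA → haem a → haem a3 → CuB. The rate constants and initial conditions are arbitrary placeholders, and the model is deliberately simplified (single electron, irreversible steps); it is not the four-ET model developed in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Occupation probabilities of a single electron along the chain
# electrode -> CuA -> haem a -> haem a3 -> CuB (irreversible, first order).
# Rate constants (1/s) are arbitrary placeholders.
k = [200.0, 150.0, 100.0, 50.0]

def chain_kinetics(t, p):
    p_el, p_cua, p_a, p_a3, p_cub = p
    return [
        -k[0] * p_el,
        k[0] * p_el - k[1] * p_cua,
        k[1] * p_cua - k[2] * p_a,
        k[2] * p_a - k[3] * p_a3,
        k[3] * p_a3,
    ]

# The electron starts at the electrode
sol = solve_ivp(chain_kinetics, (0.0, 0.1), [1.0, 0.0, 0.0, 0.0, 0.0],
                dense_output=True)

t = np.linspace(0.0, 0.1, 6)
for ti, (p_el, p_cua, p_a, p_a3, p_cub) in zip(t, sol.sol(t).T):
    print(f"t={ti:.3f}s  CuA={p_cua:.3f}  haem a={p_a:.3f}  "
          f"haem a3={p_a3:.3f}  CuB={p_cub:.3f}")
```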

Relevance: 30.00%

Abstract:

The aim of this work was to investigate the fault distribution and fault kinematics associated with the uplift of the rift shoulders of the Rwenzori Mountains. The Rwenzori Mountains are located in the NNE-SSW to N-S trending Albertine Rift, the northernmost segment of the western branch of the East African Rift System. The Albertine Rift consists of basins of different elevation, which contain Lake Albert, Lake Edward, Lake George and Lake Kivu. The Rwenzori horst separates the Lake Albert and Lake Edward basins. It extends 120 km in the N-S direction and 40-50 km in the E-W direction; its highest point lies at 5111 m above sea level. This study examines a section of the rift between about 1°N and 0°30'S latitude and 29°30' and 30°30' eastern longitude, on which the field work was also concentrated.

The main purpose of the study was to test the following hypothesis: 'If substantial changes in fault kinematics indeed occurred over time, then the strong uplift of the rift flanks in the Rwenzori area cannot be explained simply by movement along the main rift faults. Rather, it is the result of the interplay of several tectonic processes that influence the stress field and thereby cause changes in kinematics.' The study therefore concentrated primarily on fault analysis.

Knowledge of regional changes in the extension direction is crucial for understanding complex rift systems such as the East African Rift. The core of the investigation therefore consisted of mapping faults and analysing their kinematics. The collection of structural data concentrated on the Ugandan side of the rift, and palaeostresses were reconstructed from fault-slip data by stress inversion. The different orientations of brittle structures in the field, the geometric analysis of the geological structures and the results from microstructures in thin section (Chapter 4) point to different stress fields, indicating possible changes in the extension direction. The results of the stress inversion indicate normal, thrust and strike-slip faulting as well as oblique thrusting (Chapter 5). Two different extension directions emerge from the orientation of the normal faults: essentially NW-SE extension in almost all areas, and NNE-SSW extension in the eastern central part. The analysis of strike-slip faults yielded three different stress states. First, NNW-SSE to N-S compression combined with ENE-WSW to E-W extension was identified for the northern and central Rwenzoris. A second stress state, with WNW-ESE compression and NNE-SSW extension, affected the central Rwenzoris. A third stress state, with NNW-SSE extension, affected the eastern central part of the Rwenzoris. Oblique thrusts are characterized by obliquely oriented axes indicating N-S to NNW-SSE compression and occur exclusively in the eastern central section. Thrust faults, which occur mainly in the central and eastern Rwenzoris, indicate NE-SW oriented σ2 axes and NW-SE extension.

Three different stress regimes could be identified: the collision-related formation of a thrust system was followed by intra-cratonic compression and finally by extension-controlled rifting.
The transition between the latter two stress states occurred gradually and probably produced locally confined transpression and transtension. At present, the fault kinematics of the region is governed by a tensile stress regime oriented NW-SE to N-S.

Local stress variations are mainly caused by the interference of the regional stress field with major local faults. Other factors that can lead to local changes in the stress field are differing uplift rates, block rotation, or the interaction of rift segments. To determine the influence of pre-existing structures and other boundary conditions on the uplift of the Rwenzoris, the rifting process was reconstructed with an analogue sandbox model (Chapter 6). Since the Moho discontinuity in the study area lies at a depth of 25 km, while active faults can only be observed down to a depth of about 20 km (Koehn et al. 2008), only the upper 25 km were reproduced in the model. Both the order in which rift segments develop and the patterns that form during the nucleation and growth of these segments were examined and compared with field observations. The main focus was placed on the development of the two sub-segments containing Lake Albert and Lake Edward/Lake George, respectively, and on the Rwenzori Mountains lying between them. The aim of the investigation was to find out how the southward-propagating Lake Albert sub-segment interacts with the sinistrally offset, northward-propagating Lake Edward/Lake George sub-segment.

Of particular interest was the way in which the structures inside and outside the Rwenzoris were influenced by the interaction of these rift segments.

Three experimental series with different boundary conditions were compared. Depending on the dominant deformation type in the transfer zone, the series were characterized as 'shear-dominated', 'extension-dominated' and 'rotation-dominated'. The three-dimensional structural development of the rift segments was tracked by combining top views of the model with cross-sections. Of the three series, the 'rotation-dominated' one developed a rhomb-shaped block in the transfer zone between the two rift segments, which rotated clockwise by 5-20°. This angle is in the range of the presumed rotation angle of the Rwenzori block (5°). In summary, the sandbox experiments examine the influence of pre-existing structures and of the overlap or intersection of two interacting rift segments on the development of the rift system. They also address the question of how block formation and block rotation affect the local stress field.

Relevance: 30.00%

Abstract:

The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favours incremental development and contains costs. Yet achieving incrementality in the timing behaviour is a much harder problem: complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behaviour that is highly dependent on execution history, which wrecks time composability and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on how software is deployed to it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, an absolute premiere in the landscape of real-world kernels. Our implementation allows resource sharing to co-exist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from real-world case studies from the avionics industry.
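
As a hedged aside on the kind of timing analysis such kernels must support, the sketch below implements the classic response-time analysis for fixed-priority, fully preemptive scheduling on a single core. It is a textbook test, not the RUN or SPRINT algorithm discussed above, and the task parameters are invented.

```python
import math

def response_time(tasks):
    """Classic response-time analysis for fixed-priority preemptive scheduling.

    tasks: list of (C, T) pairs (worst-case execution time, period or minimum
    inter-arrival time), sorted by decreasing priority. Deadlines are assumed
    equal to periods. Returns each task's worst-case response time, or None
    if the task can miss its deadline.
    """
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            r_next = c_i + interference
            if r_next == r:          # fixed point reached
                results.append(r)
                break
            if r_next > t_i:         # exceeds its deadline
                results.append(None)
                break
            r = r_next
    return results

if __name__ == "__main__":
    # Invented task set: (WCET, period) in milliseconds, highest priority first
    task_set = [(1, 4), (2, 6), (3, 12)]
    print(response_time(task_set))   # prints [1, 3, 10]: all deadlines are met
```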

Relevance: 30.00%

Abstract:

This research, divided into two parts, focuses on the issues connected with the regulation of women's lives and bodies in the age of biotechnology. Part I is a philosophical-political genealogy that retraces the analytical and conceptual stages of the debate on biopolitics and technoscience, starting from the theoretical contributions of poststructuralism and neo-materialist feminism, and answers the questions: What have bodies become in today's bio-info-modified society? What is the role of the sciences in the metamorphoses affecting subjectivity and power relations? Part II is a cartography of the ways in which the biotechnologies concerning women's bodies have developed and spread. It investigates, in a transdisciplinary way, how the techniques of voluntary termination of pregnancy and in vitro fertilization have been regulated in Italy and, more generally, in Europe. Ample space is devoted to the ways in which the actors of bioethics, institutional and otherwise, physicians, both secular and Catholic, and pharmaceutical companies have dealt with these issues and with that of male hormonal contraception. Reproductive and regenerative medicine are always discussed in relation to the regulatory framework, in order to show how it affects women's access to the rights to health and self-determination. The regulatory framework is analysed, in turn, in the light of the most relevant historical events and the most widespread cultures. The aim of the research is twofold: on the one hand, to show how women's bodies, and their generative power, have become a fundamental node in the articulation of biocontrol and in the opening of the new markets linked to reproductive and regenerative medicine; on the other, to argue that a flexible biolaw, with historically variable content, shared and participatory, is a practicable and virtuous hypothesis, useful for eliminating the gender gap that still exists in matters of reproductive and sexual rights.

Relevance: 30.00%

Abstract:

The objective was to study changes in plasma leptin concentration in parallel with changes in the gene expression of lipogenic- and lipolytic-related genes in adipose tissue of dairy cows around parturition. Subcutaneous fat biopsies were taken from 27 dairy cows in week 8 antepartum (a.p.), on day 1 postpartum (p.p.) and in week 5 p.p. Blood samples were assayed for concentrations of leptin and non-esterified fatty acids (NEFA). Subcutaneous adipose tissue was analysed by real-time qRT-PCR for the mRNA abundance of genes encoding leptin, adiponectin receptor 1 (AdipoR1), adiponectin receptor 2 (AdipoR2), hormone-sensitive lipase (HSL), perilipin (PLIN), lipoprotein lipase (LPL), acyl-CoA synthetase long-chain family member 1 (ACSL1), acetyl-CoA carboxylase (ACC), fatty acid synthase (FASN) and glycerol-3-phosphate dehydrogenase 2 (GPD2). Body weight and body condition score of the cows were lower after parturition than before parturition. The calculated energy balance was negative in weeks 1 and 5 p.p., and more strongly negative in week 1 p.p. than in week 5 p.p. On day 1 p.p., the highest concentrations of NEFA (353.3 μmol/l) were detected, compared with the other biopsy time-points (210.6 and 107.7 μmol/l in week 8 a.p. and week 5 p.p., respectively). Reduced plasma concentrations of leptin p.p. compared with a.p. would favour increased metabolic efficiency and energy conservation for mammary function and the reconstitution of body reserves. The lower mRNA abundance of ACC and FASN on day 1 p.p. compared with the other biopsy time-points suggests an attenuation of fatty acid synthesis in subcutaneous adipose tissue shortly after parturition. Gene expression of AdipoR1, AdipoR2, HSL, PLIN, LPL, ACSL1 and GPD2 was unchanged over time.

Relevance: 30.00%

Abstract:

Mr. Michl posed the question of how the institutional framework that the former communist regime set up around art production contributed to the success of Czech applied arts. In his theoretical review of the question he discussed the reasons for the lack of success of socialist industrial design, as opposed to what he terms pre-industrial arts (such as art glass), and also for the current lack of interest in the art institutions of the past regime. His findings in the second, historical section of his work were based largely on interviews with artists and other insiders, as an initial attempt to use questionnaires was unsuccessful. His original assumption that the institutional framework was imposed on artists against their will in fact proved mistaken, as it turned out to have been proposed by the artists themselves. The basic blueprint for communist art institutions was the Memorandum document published on behalf of Czechoslovak visual artists in March 1947, i.e. before the communist coup of February 1948. Thus, while the communist state provided a beneficial institutional framework for artists' work, it was the artists themselves who designed this framework. Mr. Michl concludes that the text of the Memorandum appealed to the general left-wing and anti-market sentiments of the immediate post-war period, and that in this way, and by later working through the administrative channels of the new state, the artists succeeded in gaining all of their demands over the next 15 years. The one exception was artistic freedom, although this they came to enjoy, if only by default and for a short time, during the ideological thaw of the 1960s. Mr. Michl also examined the art-related legislative framework in detail and looked at the main features of key art institutions in the field, such as the Czech Fund for Visual Arts and the 1960s art export enterprise Art Centrum, which opened the doors to foreign markets for artists.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This project intertwines philosophical and historico-literary themes, taking as its starting point the concept of tragic consciousness inherent in the epoch of classicism. The research makes use of ontological categories to describe the underlying principles of the image of the world created in the philosophical and scientific theories of the 17th century as well as in contemporary drama. Using these categories brought Mr. Vilk to the conclusion that the classical picture of the world implied a certain dualism: not the Manichaean division between light and darkness, but the discrimination between nature and absolute being, i.e. God. Mr. Vilk begins with an examination of the philosophical essence of French classical theatre of the 17th and 18th centuries. The history of French classical tragedy can be divided into three periods: from the mid 17th to the early 19th century, when it triumphed all over France and exerted a powerful influence over almost all European countries; the period of its rejection by the Romantics, who declared classicism to be "artificial and rational"; and finally our own century, which has taken a more moderate line. Nevertheless, French classical tragedy has never fully recovered its status. Instead, it is ancient tragedy and the works of Shakespeare that are regarded as the most adequate embodiment of the tragic, and consequently they still provoke a great number of new interpretations, ranging from specialised literary criticism to more philosophical rumination. An important feature of classical tragedy is a system of rules and unities which reveals a hidden ontological structure of the world. The ontological picture of the dramatic world can be described in categories worked out by medieval philosophy - being, essence and existence. The first category is to be understood as a tendency toward permanency and stability (within eternity) connected with this or that fragment of dramatic reality. The second implies a certain set of permanent elements that make up that reality. The third - existence - should be understood as "an act of being", as a realisation of the permanently renewed processes of life. All of these categories can be found in every artistic reality, but the accents placed on one or another, and their interrelations, create different ontological perspectives. Mr. Vilk plots the movement of thought, expressed in both philosophical and scientific discourses, away from Aristotle's essential forms and towards a prioritising of existence, and shows how new forms of literature and drama structured the world according to these evolving requirements. At the same time, the world created in classical tragedy fully preserves another ontological paradigm - being - as a fundamental permanence. As far as the tragic hero's motivations are concerned, this paradigm is revealed in the dedication of his whole self to some cause and in his oath of fidelity, attitudes which shape his behaviour. The cause may be the idea of the State, or personal honour, or something borrowed from the emotional sphere, such as passionate love. Mr. Vilk views the conflicting ambivalence of existence and being, of duty as responsibility and duty as fidelity, as underlying the main conflict of classical tragedy of the 17th century. Having plotted the movement of the being/existence duality through its manifestations in 17th-century tragedy, Mr. Vilk moves to the 18th century, when tragedy took a philosophical turn.
A dualistic view of the world was supplanted by the Enlightenment idea of a natural law rooted in nature. The main point of tragedy now was to reveal that such conflicts as might take place had an anti-rational nature, arising from a kind of superstition with social causes. These themes Mr. Vilk pursues through the Russian dramatists of the 18th and early 19th centuries. He begins with Sumarokov, whose philosophical thought has a religious bias. According to Sumarokov, the dualism of the divineness and naturalness of man is on the one hand an eternal paradox and on the other a moral challenge for humans to try to unite the two opposites. His early tragedies are concerned not with social evils or the triumph of natural feelings and human reason, but rather with the tragic disharmony in the nature of man and the world. Mr. Vilk turns next to the work of Kniazhnin. He is particularly keen to rescue the playwright's reputation from the judgements of critics who accuse him of being imitative, and to this end analyses in detail the tragedy "Dido", in which Kniazhnin attempts to revive the image of great heroes and city-founders. Aeneas represents the idea of the "being" of Troy; his destiny is the re-establishment of the city (the future Rome). The moral aspect behind this idea is faithfulness: he devotes himself to the gods. Dido is also the creator of a city, endowed with "natural powers" and abilities, but her creation lacks the internal stability grounded in "being". The unity of the two motives is achieved only through Dido's sacrifice of herself and her city to Aeneas. Mr. Vilk's next subject is Kheraskov, whose peculiarity lies in the influence of freemasonic mysticism on his work. This section deals with one of the most important philosophical assumptions contained in the freemasonic literature of the time - the idea of the trinitarian hierarchy inherent in man and the world: body - soul - spirit, and nature - law - grace. Finally, Mr. Vilk assesses the work of Ozerov, the last major Russian tragedian. The tragedies which earned him fame, "Oedipus in Athens", "Fingal" and "Dmitri Donskoi", present a compromise between the Enlightenment's emphasis on harmony and ontological tragic conflict. But it is in "Polixene" that a real meeting of the Russian tradition with the age-old history of the genre takes place. The male and female characters of "Polixene" distinctly express the elements of "being" and "existence". Each participant in the conflict possesses some dominant characteristic personifying a certain indispensable part of the moral world, a certain "virtue". But their independent efforts are unable to overcome the ontological gap separating them. The end of the tragedy - Polixene's sacrificial self-immolation - paradoxically combines the glorification of each party involved in the conflict with their condemnation. The final part of Mr. Vilk's research deals with the influence of "Polixene" upon subsequent dramatic art. In this respect it is important to mention Katenin's "Andromacha", which was inspired by "Polixene". In "Andromacha" a decisive divergence from the principles of the philosophical tragedy of Russian classicism and from the ontology of classicism occurs: a new character appears as an independent personality, directed by his private interest. It was Katenin who was to become the intermediary between Pushkin and classical tragedy.