45 results for multiplicative inverse
Abstract:
We present a search for standard model Higgs boson production in association with a W boson in proton-antiproton collisions at a center of mass energy of 1.96 TeV. The search employs data collected with the CDF II detector that correspond to an integrated luminosity of approximately 1.9 inverse fb. We select events consistent with a signature of a single charged lepton, missing transverse energy, and two jets. Jets corresponding to bottom quarks are identified with a secondary vertex tagging method, a jet probability tagging method, and a neural network filter. We use kinematic information in an artificial neural network to improve discrimination between signal and background compared to previous analyses. The observed number of events and the neural network output distributions are consistent with the standard model background expectations, and we set 95% confidence level upper limits on the production cross section times branching fraction ranging from 1.2 to 1.1 pb or 7.5 to 102 times the standard model expectation for Higgs boson masses from 110 to 150 GeV/c^2, respectively.
Abstract:
We present the first observation in hadronic collisions of the electroweak production of vector boson pairs (VV, V=W, Z) where one boson decays to a dijet final state. The data correspond to 3.5 fb-1 of integrated luminosity of pp̅ collisions at √s=1.96 TeV collected by the CDF II detector at the Fermilab Tevatron. We observe 1516±239(stat)±144(syst) diboson candidate events and measure a cross section σ(pp̅ →VV+X) of 18.0±2.8(stat)±2.4(syst)±1.1(lumi) pb, in agreement with the expectations of the standard model.
New Method for Delexicalization and its Application to Prosodic Tagging for Text-to-Speech Synthesis
Abstract:
This paper describes a new flexible delexicalization method based on a glottal-excited parametric speech synthesis scheme. The system utilizes inverse-filtered glottal flow and all-pole modelling of the vocal tract. The method makes it possible to retain and manipulate all relevant prosodic features of any kind of speech. Most importantly, the features include voice quality, which has not been properly modeled in earlier delexicalization methods. The functionality of the new method was tested in a prosodic tagging experiment aimed at providing word prominence data for a text-to-speech synthesis system. The experiment confirmed the usefulness of the method and further corroborated earlier evidence that linguistic factors influence the perception of prosodic prominence.
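As an aside, the core operation described here, estimating an all-pole vocal-tract filter and inverse filtering the speech to recover the glottal source, can be sketched generically as follows. This is a minimal LPC-based illustration under assumed settings (file name, sampling rate and model order are placeholders), not the authors' actual analysis pipeline.

```python
# Generic LPC-based inverse filtering sketch (assumptions: "speech.wav",
# 16 kHz sampling, LPC order 20); illustrative only, not the paper's system.
import librosa
import numpy as np
from scipy.signal import lfilter

y, sr = librosa.load("speech.wav", sr=16000)   # hypothetical input file

# All-pole (LPC) model of the vocal tract: V(z) ~ 1 / A(z)
a = librosa.lpc(y, order=20)

# Inverse filtering: applying A(z) to the speech removes the vocal-tract
# resonances and leaves an estimate of the glottal source signal.
glottal_estimate = lfilter(a, [1.0], y)

# Prosodic manipulation (e.g. flattening F0 for delexicalization) would operate
# on this source signal before re-synthesis through 1/A(z).
resynth = lfilter([1.0], a, glottal_estimate)
print(np.max(np.abs(resynth - y)))  # near-perfect reconstruction when unedited
```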
Abstract:
One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling and prediction of risk have been, and still are, one of the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices. Later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. Lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with 9 different error distributions on Standard and Poor's 500 Index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly as well as monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. In Essay 2, using 20 years of daily Standard and Poor's 500 index returns, it is found that density forecasts are much improved by allowing for constant excess kurtosis but not by allowing for skewness. By allowing the kurtosis and skewness to be time varying, the density forecasts are not further improved but on the contrary made slightly worse. In Essay 3, a new model incorporating conditional variance, skewness and kurtosis based on the Normal Inverse Gaussian (NIG) distribution is proposed. The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts for both 1% and 5% VaR. Taken together, the results of the thesis show that kurtosis appears not to exhibit predictable time variation, whereas some predictability is found in the skewness. However, the dynamic properties of the skewness are not completely captured by any of the models.
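To make the modelling setup of Essay 1 concrete, the sketch below fits a GARCH(1,1) model under a few alternative error distributions. The `arch` package, the simulated stand-in return series and the particular distributions shown are assumptions for illustration, not the estimation code used in the thesis.

```python
# Hedged sketch: GARCH(1,1) variance forecasts under different error
# distributions, in the spirit of Essay 1 (the thesis compares 9 distributions).
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(0)
returns = pd.Series(rng.standard_t(df=5, size=2000))  # stand-in for index futures returns

for dist in ("normal", "t", "skewt"):
    am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist=dist)
    res = am.fit(disp="off")
    fcast = res.forecast(horizon=1)
    print(dist, float(fcast.variance.iloc[-1, 0]), res.loglikelihood)
```

Comparing the one-step-ahead variance forecasts against a realized-variance proxy built from intra-day data is then what distinguishes the distributions out of sample.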
Abstract:
In this paper I investigate the exercise policy of executive stock option holders in Finland, and the market reaction to it. The empirical tests are conducted with aggregated firm-level data from 34 firms and 41 stock option programs. I find some evidence of an inverse relation between the exercise intensity of the option holders and the future abnormal return of the company share price. This finding is supported by the view that information about future company prospects seems to be the only theoretical attribute that could delay the exercise of the options. Moreover, a high concentration of exercises in the beginning of the exercise window is predicted, and the market is expected to react to deviations from this. The empirical findings, however, show that the market does not react homogeneously to the information revealed by the late exercises.
Abstract:
Ecology and evolutionary biology is the study of life on this planet. One of the many methods applied to answering the great diversity of questions regarding the lives and characteristics of individual organisms is the use of mathematical models. Such models are used in a wide variety of ways. Some help us to reason, functioning as aids to, or substitutes for, our own fallible logic, thus making argumentation and thinking clearer. Models which help our reasoning can lead to conceptual clarification; by expressing ideas in algebraic terms, the relationships between different concepts become clearer. Other mathematical models are used to better understand yet more complicated models, or to develop mathematical tools for their analysis. Though they help us to reason and serve as tools in the craftsmanship of science, many models do not tell us much about the real biological phenomena we are, at least initially, interested in. The main reason for this is that any mathematical model is a simplification of the real world, reducing the complexity and variety of interactions and idiosyncrasies of individual organisms. What such models can tell us, however, both is and has been very valuable throughout the history of ecology and evolution. Minimally, a model simplifying the complex world can tell us that the patterns produced in the model could, in principle, also be produced in the real world. We can never know how different a simplified mathematical representation is from the real world, but the similarity that models strive for gives us confidence that their results could apply. This thesis deals with a variety of different models, used for different purposes. One model deals with how one can measure and analyse invasions, the expanding phase of invasive species. Earlier analyses claim to have shown that such invasions can be a regulated phenomenon, in that higher invasion speeds at a given point in time will lead to a reduction in speed. Two simple mathematical models show that analyses of this particular measure of invasion speed need not be evidence of regulation. In the context of dispersal evolution, two models acting as proofs of principle are presented. Parent-offspring conflict emerges when there are different evolutionary optima for adaptive behavior for parents and offspring. We show that the evolution of dispersal distances can entail such a conflict, and that under parental control of dispersal (as, for example, in higher plants) wider dispersal kernels are optimal. We also show that dispersal homeostasis can be optimal: in a setting where dispersal decisions (to leave or stay in a natal patch) are made, strategies that divide their seeds or eggs into fixed fractions that disperse or stay, as opposed to randomizing the decision for each seed, can prevail. We also present a model of the evolution of bet-hedging strategies, evolutionary adaptations that occur despite their fitness, on average, being lower than that of a competing strategy. Such strategies can win in the long run because fitness is multiplicative across generations and therefore sensitive to variability, so a reduced variance in fitness can compensate for a reduction in mean fitness. This model is used for conceptual clarification: by developing a population genetic model with uncertain fitness and expressing genotypic variance in fitness as a product of individual-level variance and correlations between individuals of a genotype, we arrive at expressions that intuitively reflect two of the main categorizations of bet-hedging strategies, conservative vs. diversifying and within- vs. between-generation bet hedging. In addition, the model shows that these divisions are in fact false dichotomies.
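The multiplicative-fitness argument behind bet hedging can be illustrated with a minimal simulation; the numbers below are arbitrary assumptions chosen for the example and are not taken from the thesis.

```python
# Illustration: a bet-hedger with lower arithmetic-mean fitness can still win
# when fitness multiplies across generations (geometric mean governs growth).
import numpy as np

rng = np.random.default_rng(1)
T = 100_000  # generations

risky = rng.choice([1.6, 0.6], size=T)     # higher mean (1.10), high variance
hedger = rng.choice([1.15, 0.95], size=T)  # lower mean (1.05), low variance

for name, w in (("risky", risky), ("hedger", hedger)):
    print(name,
          "arithmetic mean:", round(w.mean(), 3),
          "long-run growth rate:", round(np.exp(np.log(w).mean()), 3))
# The hedger's geometric mean (~1.05) exceeds the risky strategy's (~0.98),
# so the hedger dominates in the long run despite its lower arithmetic mean.
```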
Abstract:
The aim of the dissertation is to explore the idea of philosophy as a path to happiness in classical Arabic philosophy. The starting point is a comparison of two distinct currents between the 10th and early 11th centuries: Peripatetic philosophy, represented by al-Fārābī and Ibn Sīnā, and Ismaili philosophy, represented by al-Kirmānī and the Brethren of Purity. They initially offer two contrasting views of philosophy, in that the attitude of the Peripatetics is rationalistic and secular in spirit, whereas for the Ismailis philosophy represents the esoteric truth behind revelation. Still, they converge in their view that the ultimate purpose of philosophy lies in its ability to lead man towards happiness. Moreover, they share a common concept of happiness as a contemplative ideal of human perfection, which refers primarily to an otherworldly state of the soul's ascent to the spiritual world. For both, the way to happiness consists of two parts: theory and practice. The practical part manifests itself in the idea of the purification of the rational soul from its bodily attachments in order for it to direct its attention fully to the contemplative life. Hence, there appears an ideal of philosophical life with the goal of relative detachment from worldly life. The regulations of the religious law in this context appear as the primary means for the soul's purification, but for all but al-Kirmānī they are complemented by auxiliary philosophical practices. The ascent to happiness, however, takes place primarily through the acquisition of theoretical knowledge. The saving knowledge consists primarily of the conception of the hierarchy of physical and metaphysical reality, but all of philosophy forms a curriculum through which the soul gradually ascends towards a spiritual state of being, along an order that is inverse to the Neoplatonic emanationist hierarchy of creation. For Ismaili philosophy the ascent takes place from the exoteric religious sciences towards the esoteric philosophical knowledge. For the Peripatetic philosophers, logic performs the function of an instrument enabling the ascent, mathematics is treated either as propaedeutic to philosophy or as a mediator between physical and metaphysical knowledge, whereas physics and metaphysics provide the core of knowledge necessary for the attainment of happiness.
Abstract:
Nanomaterials with a hexagonally ordered atomic structure, e.g., graphene, carbon and boron nitride nanotubes, and white graphene (a monolayer of hexagonal boron nitride) possess many impressive properties. For example, the mechanical stiffness and strength of these materials are unprecedented. Also, the extraordinary electronic properties of graphene and carbon nanotubes suggest that these materials may serve as building blocks of next generation electronics. However, the properties of pristine materials are not always what is needed in applications, but careful manipulation of their atomic structure, e.g., via particle irradiation, can be used to tailor the properties. On the other hand, inadvertently introduced defects can deteriorate the useful properties of these materials in radiation-hostile environments, such as outer space. In this thesis, defect production via energetic particle bombardment in the aforementioned materials is investigated. The effects of ion irradiation on multi-walled carbon and boron nitride nanotubes are studied experimentally by first conducting controlled irradiation treatments of the samples using an ion accelerator and subsequently characterizing the induced changes by transmission electron microscopy and Raman spectroscopy. The usefulness of the characterization methods is critically evaluated and a damage grading scale is proposed, based on transmission electron microscopy images. Theoretical predictions are made on defect production in graphene and white graphene under particle bombardment. A stochastic model based on first-principles molecular dynamics simulations is used together with electron irradiation experiments for understanding the formation of peculiar triangular defect structures in white graphene. An extensive set of classical molecular dynamics simulations is conducted in order to study defect production under ion irradiation in graphene and white graphene. In the experimental studies the response of carbon and boron nitride multi-walled nanotubes to irradiation with a wide range of ion types, energies and fluences is explored. The stabilities of these structures under ion irradiation are investigated, as well as the issue of how the mechanism of energy transfer affects the irradiation-induced damage. An irradiation fluence of 5.5x10^15 ions/cm^2 with 40 keV Ar+ ions is established to be sufficient to amorphize a multi-walled nanotube. In the case of 350 keV He+ ion irradiation, where most of the energy transfer happens through inelastic collisions between the ion and the target electrons, an irradiation fluence of 1.4x10^17 ions/cm^2 heavily damages carbon nanotubes, whereas a larger irradiation fluence of 1.2x10^18 ions/cm^2 leaves a boron nitride nanotube in much better condition, indicating that carbon nanotubes might be more susceptible to damage via electronic excitations than their boron nitride counterparts. An elevated temperature was discovered to considerably reduce the accumulated damage created by energetic ions in both carbon and boron nitride nanotubes, attributed to enhanced defect mobility and efficient recombination at high temperatures. Additionally, cobalt nanorods encapsulated inside multi-walled carbon nanotubes were observed to transform into spherical nanoparticles after ion irradiation at an elevated temperature, which can be explained by the inverse Ostwald ripening effect.
The simulation studies on ion irradiation of the hexagonal monolayers yielded quantitative estimates of the types and abundances of defects produced within a large range of irradiation parameters. He, Ne, Ar, Kr, Xe, and Ga ions were considered in the simulations, with kinetic energies ranging from 35 eV to 10 MeV, and the role of the angle of incidence of the ions was studied in detail. A stochastic model was developed for utilizing the large amount of data produced by the molecular dynamics simulations. It was discovered that a high degree of selectivity over the types and abundances of defects can be achieved by carefully selecting the irradiation parameters, which can be of great use when precise patterning of graphene or white graphene using focused ion beams is planned.
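As a rough illustration of how such a stochastic model can consume MD-derived data, the sketch below samples one defect outcome per ion impact from an assumed probability table for a single irradiation condition. The defect categories and probabilities are invented placeholders, not the thesis results.

```python
# Hedged sketch of a stochastic defect-production model: per-impact defect
# probabilities are assumed to come from MD simulations, tabulated per
# (ion species, energy, incidence angle). All numbers are placeholders.
import numpy as np

defect_probs = {          # P(defect type | single ion impact), hypothetical
    "none": 0.55,
    "single_vacancy": 0.25,
    "double_vacancy": 0.12,
    "complex": 0.08,
}

def irradiate(n_impacts: int, probs: dict, seed: int = 0) -> dict:
    """Draw one defect outcome per ion impact and count the totals."""
    rng = np.random.default_rng(seed)
    types = list(probs)
    counts = dict.fromkeys(types, 0)
    for outcome in rng.choice(types, size=n_impacts, p=list(probs.values())):
        counts[outcome] += 1
    return counts

print(irradiate(10_000, defect_probs))
```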
Abstract:
The study presents a theory of utility models based on aspiration levels, as well as the application of this theory to the planning of timber flow economics. The first part of the study comprises a derivation of the utility-theoretic basis for the application of aspiration levels. Two basic models are dealt with: the additive and the multiplicative. Applied here solely to partial utility functions, aspiration and reservation levels are interpreted as defining piecewise linear functions. The standpoint of the decision-maker's choices is emphasized by the use of indifference curves. The second part of the study introduces a model for the management of timber flows. The model is based on the assumption that the decision-maker is willing to specify a shape of income flow which differs from that of the capital-theoretic optimum. The utility model comprises four aspiration-based compound utility functions. The theory and the flow model are tested numerically by computations covering three forest holdings. The results show that the additive model is sensitive even to slight changes in relative importances and aspiration levels. This applies particularly to nearly linear production possibility boundaries of monetary variables. The multiplicative model, on the other hand, is stable because it generates strictly convex indifference curves. Due to a higher marginal rate of substitution, the multiplicative model implies a stronger dependence on forest management than the additive function. For income trajectory optimization, a method utilizing an income trajectory index is more efficient than one based on the use of aspiration levels per management period. Smooth trajectories can be attained by squaring the deviations of the feasible trajectories from the desired one.
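Schematically, the two compound utility forms and the aspiration/reservation-based partial utilities discussed above can be written as follows; this is a generic rendering with weights w_i, and the study's exact specification and normalisation may differ.

```latex
% Generic forms of the additive and multiplicative compound utility models
% (weights w_i, partial utilities u_i); the study's exact specification may differ.
\begin{align*}
  U_{\mathrm{add}}(x)  &= \sum_{i} w_i\, u_i(x_i), &
  U_{\mathrm{mult}}(x) &= \prod_{i} u_i(x_i)^{\,w_i},
\end{align*}
% with each partial utility piecewise linear in the attribute x_i, anchored at a
% reservation level r_i (u_i = 0) and an aspiration level a_i (u_i = 1):
\[
  u_i(x_i) =
  \begin{cases}
    0, & x_i \le r_i,\\
    \frac{x_i - r_i}{a_i - r_i}, & r_i < x_i < a_i,\\
    1, & x_i \ge a_i.
  \end{cases}
\]
```

The strict convexity of the multiplicative form's indifference curves is what underlies the stability result quoted above.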
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census had developed, in the 1940s, a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem, both published in a memoir in 1774 that is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which was depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. At the 1894 meeting of the International Statistical Institute, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples, the idea being that the sample would be a miniature of the population; it is still prevailing. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and carried out several surveys in the UK in the beginning of the 20th century. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are to draw samples repeatedly from the same population and to assume that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design of the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.
Abstract:
Hypertension is one of the major risk factors for cardiovascular morbidity. The advantages of antihypertensive therapy have been clearly demonstrated, but only about 30% of hypertensive patients have their blood pressure (BP) controlled by such treatment. One of the reasons for this poor BP control may lie in the difficulty of predicting BP response to antihypertensive treatment. The average BP reduction achieved is similar for each drug in the main classes of antihypertensive agents, but there is marked individual variation in BP responses to any given drug. The purpose of the present study was to examine BP responses to four different antihypertensive monotherapies with regard to demographic characteristics, laboratory test results and common genetic polymorphisms. The subjects of the present study are participants in the pharmacogenetic GENRES Study. A total of 208 subjects completed the whole study protocol, including four drug treatment periods of four weeks separated by four-week placebo periods. The study drugs were amlodipine, bisoprolol, hydrochlorothiazide and losartan. Both office (OBP) and 24-hour ambulatory blood pressure (ABP) measurements were carried out. BP responses to the study drugs were related to basic clinical characteristics, pretreatment laboratory test results and common polymorphisms in genes coding for components of the renin-angiotensin system, alpha-adducin (ADD1), the beta1-adrenergic receptor (ADRB1) and the beta2-adrenergic receptor (ADRB2). Age was positively correlated with BP responses to amlodipine and with OBP and systolic ABP responses to hydrochlorothiazide, while body mass index was negatively correlated with ABP responses to amlodipine. Of the laboratory test results, plasma renin activity (PRA) correlated positively with BP responses to losartan and with ABP responses to bisoprolol, and negatively with ABP responses to hydrochlorothiazide. Uniquely to this study, it was found that serum total calcium level was negatively correlated with BP responses to amlodipine, whilst serum total cholesterol level was negatively correlated with ABP responses to amlodipine. There were no significant associations of the angiotensin II type I receptor 1166A/C, angiotensin converting enzyme I/D, angiotensinogen Met235Thr, ADD1 Gly460Trp, ADRB1 Ser49Gly and Gly389Arg, and ADRB2 Arg16Gly and Gln27Glu polymorphisms with BP responses to the study drugs. In conclusion, this study confirmed the relationship between pretreatment PRA levels and responses to three classes of antihypertensive drugs. This study is the first to note a significant inverse relation between serum calcium level and responsiveness to a calcium channel blocker. However, this study could not replicate the observations that common polymorphisms in the angiotensin II type I receptor, angiotensin converting enzyme, angiotensinogen, ADD1, ADRB1, or ADRB2 genes can predict BP response to antihypertensive drugs.
Abstract:
Cow's milk allergy (CMA) affects about 2-6% of infants and young children. Environmental factors during early life are suggested to play a role in the development of allergic diseases. One of these factors is likely to be maternal diet during pregnancy and lactation. The association between maternal diet and the development of CMA in offspring is not well known, but the diet could contain factors that facilitate the development of tolerance. After an established food allergy, another issue is gaining tolerance towards an antigen that causes symptoms. The strictness of the elimination depends on the individual level of tolerance. This study aimed at validating a questionnaire used to inquire about food allergies in children, at researching associations between maternal diet during pregnancy and lactation and subsequent development of cow's milk allergy in the offspring, and at evaluating the degree of adherence to a therapeutic elimination diet of children with CMA and the factors associated with adherence and the age of recovery. These research questions were addressed in a prospective birth cohort born between 1997 and 2004 at the Tampere and Oulu University Hospitals. Altogether 6753 children of the Diabetes Prediction and Prevention (DIPP) Nutrition cohort were investigated. Questionnaires regarding allergic diseases are often used in studies without validation, so high-quality, valid tools are needed. Two validation studies were conducted here: one comparing parentally reported food allergies with information gathered from the patient records of 1122 children, and the other comparing parentally reported CMA with information in the reimbursement records of special infant formulae in the registers of the Social Insurance Institution for 6753 children. Both of these studies showed that the questionnaire works well and is a valid tool for measuring food allergies in children. In the first validation study, Cohen's kappa values were within 0.71-0.88 for CMA, 0.74-0.82 for cereal allergy, and 0.66-0.86 for any reported food allergy. In the second validation study, the kappa value was 0.79, sensitivity 0.958, and specificity 0.965 for reported and diagnosed CMA. To investigate the associations between maternal diet during pregnancy and lactation and CMA in offspring, 6288 children were studied. Maternal diet during pregnancy (8th month) and lactation (3rd month) was assessed by a validated, 181-item semi-quantitative food frequency questionnaire (FFQ), and as an endpoint, register-based information on diagnosed CMA was obtained from the Social Insurance Institution and complemented with parental reports of CMA in their children. The associations between maternal food consumption and CMA in offspring were analyzed by logistic regression comparing the highest and lowest quarters with the two middle quarters of consumption, adjusted for several potential confounding factors. High maternal intake of milk products (OR 0.56, 95% CI 0.37-0.86, p = 0.002) was associated with a lower risk of CMA in offspring. When stratified according to maternal allergic rhinitis or asthma, a protective association of high use of milk products with CMA was seen in children of allergy-free mothers (OR 0.30, 95% CI 0.13-0.69, p < 0.001), but not in children of allergic mothers. Moreover, low maternal consumption of fish during pregnancy was associated with a higher risk of CMA in children of mothers with allergic rhinitis or asthma (OR 1.47, 95% CI 0.96-2.27 for the lowest quarter, p = 0.043).
In children of nonallergic mothers, this association was not seen. Maternal diet during lactation was not associated with CMA in offspring, apart from an inverse association between citrus and kiwi fruit consumption and CMA. These results imply that maternal diet during pregnancy may contain factors protective against CMA in offspring, more so than maternal diet during lactation. These results need to be confirmed in other studies before recommendations are given to the public. To evaluate the degree of adherence to a therapeutic elimination diet in children with diagnosed CMA, the food records of 267 children were studied. Subsequent food records were examined to assess the age at reintroduction of milk products to the child's diet. Nine of ten families adhered to the elimination diet of the child with extreme accuracy. Older and monosensitized children more often had small amounts of cow's milk protein in their diet (p < 0.001 for both). Adherence to the diet was not related to any other sociodemographic factor studied or to the age at reintroduction of milk products to the diet. Low intakes of vitamin D, calcium, and riboflavin are of concern in children following a cow's milk-free diet. In summary, we found that the questionnaires used in the DIPP study are valid for investigating CMA in young children; that there are associations between maternal diet during pregnancy and lactation and the development of CMA in offspring; and that the therapeutic elimination diet in children with diagnosed CMA is rigorously adhered to.
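For reference, the agreement and validity measures quoted above (Cohen's kappa, sensitivity, specificity) follow the standard definitions, stated here generically rather than reproduced from the thesis:

```latex
% Standard definitions of the validity measures quoted above: observed agreement
% p_o, chance agreement p_e, and true/false positives and negatives TP, FP, TN, FN.
\[
  \kappa = \frac{p_o - p_e}{1 - p_e}, \qquad
  \text{sensitivity} = \frac{TP}{TP + FN}, \qquad
  \text{specificity} = \frac{TN}{TN + FP}.
\]
```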
Tiedostumaton nykytaiteessa : Katse, ääni ja aika vuosituhannen taitteen suomalaisessa nykytaiteessa
Abstract:
Leevi Haapala explores moving image works, sculptures and installations from a psychoanalytic perspective in his study The Unconscious in Contemporary Art. The Gaze, Voice and Time in Finnish Contemporary Art at the Turn of the Millennium. The artists included in the study are Eija-Liisa Ahtila, Hans-Christian Berg, Markus Copper, Liisa Lounila and Salla Tykkä. The theoretical framework includes different psychoanalytic readings of the concepts of the gaze, voice and temporality. The installations are based on spatiality and temporality, and their detailed reading emphasizes the medium-specific features of the works as well as their fragmentary nature, heterogeneity and affectivity. The study is cross-disciplinary in that it connects perspectives from visual culture and new art history and theory to the interpretation of contemporary art. The most important concepts from psychoanalysis, affect theory and trauma discourse used in the study include affect, the object a (objet petit a) as articulated by Jacques Lacan, Sigmund Freud's uncanny (das Unheimliche) and trauma. Das Unheimliche has been translated as uncanny in art history under the influence of Rosalind Krauss. The object of the study, the unconscious in contemporary art, is approached through these concepts. The study focuses on Lacan's additions to the list of partial drives: the gaze and voice as scopic and invocative drives and their interpretations in studies of the moving image. The texts by the American film theorist and art historian Kaja Silverman play a crucial role. The study locates contemporary art as part of trauma culture, which has a tendency to define individual and historical experiences through trauma. Some of the art works point towards trauma, which may appear as a theoretical or fictitious construction. The study presents a comprehensive collection of different kinds of trauma discourse in the field of art research through the texts of Hal Foster, Cathy Caruth, Ruth Leys and Shoshana Felman. The study connects trauma theory with the theoretical analysis of the interference and discontinuity of the moving image in the readings of Susan Buck-Morss, Mary Ann Doane and Peter Osborn, among others. The analysis emphasizes different ways of seeing and multisensoriality in the reception of contemporary art. With their reflections and inverse projections, the surprising mechanisms of Hans-Christian Berg's sculptures are connected with Lacan's views on the early mirroring and imitation attempts of the individual's body image. Salla Tykkä's film trilogy Cave invites one to contemplate the Lacanian theory of the gaze in relation to the experiences of being seen. The three oceanic sculpture installations by Markus Copper are studied through the vocality they create, often through an aggressive way of acting, as well as from the point of view of the functioning of an invocative drive. The study compares works of fiction and Freud's texts on paranoia and psychosis with Eija-Liisa Ahtila's manuscripts and moving image installations on the same topic. The cinematic time in Liisa Lounila's time-slice video installations is approached through the theoretical study of the unconscious temporal structure. The viewer of the moving image is inside the work in an in-between state: in a space produced by the contents of the work and its technology. The installations of the moving image enable us to inhabit different kinds of virtual bodies or spaces, which do not correspond with our everyday experiences.
Nevertheless, the works of art often try to deconstruct the identification with what is shown on screen. In this way, the viewer's attention can be fixed on his own unconscious experiences in parallel with the work's deconstructed nature as representation. The study shows that contemporary art is a central cultural practice, which allows us to discuss the unconscious in a meaningful way. The study suggests that the agency that is discursively diffuse and consists of several different praxes should be called the unconscious. The emergence of the unconscious can happen in two areas: in contemporary art, through different senses and discursive elements, and in the study of contemporary art, which, being a linguistic activity, is sensitive to the movements of the unconscious. One of the missions of art research is to build different kinds of articulated constructs and to open an interpretative space for the nature of art as an event.
Abstract:
Context. Turbulent fluxes of angular momentum and heat due to rotationally affected convection play a key role in determining differential rotation of stars. Aims. We compute turbulent angular momentum and heat transport as functions of the rotation rate from stratified convection. We compare results from spherical and Cartesian models in the same parameter regime in order to study whether restricted geometry introduces artefacts into the results. Methods. We employ direct numerical simulations of turbulent convection in spherical and Cartesian geometries. In order to alleviate the computational cost in the spherical runs and to reach as high spatial resolution as possible, we model only parts of the latitude and longitude. The rotational influence, measured by the Coriolis number or inverse Rossby number, is varied from zero to roughly seven, which is the regime that is likely to be realised in the solar convection zone. Cartesian simulations are performed in overlapping parameter regimes. Results. For slow rotation we find that the radial and latitudinal turbulent angular momentum fluxes are directed inward and equatorward, respectively. In the rapid rotation regime the radial flux changes sign in accordance with earlier numerical results, but in contradiction with theory. The latitudinal flux remains mostly equatorward and develops a maximum close to the equator. In Cartesian simulations this peak can be explained by the strong 'banana cells'. Their effect in the spherical case does not appear to be as large. The latitudinal heat flux is mostly equatorward for slow rotation but changes sign for rapid rotation. Longitudinal heat flux is always in the retrograde direction. The rotation profiles vary from anti-solar (slow equator) for slow and intermediate rotation to solar-like (fast equator) for rapid rotation. The solar-like profiles are dominated by the Taylor-Proudman balance.
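For orientation, the Coriolis (inverse Rossby) number quoted above is commonly defined in such convection simulations as shown below; the exact normalisation used in the paper is an assumption here and may differ by constant factors.

```latex
% A common convention for the Coriolis number quantifying rotational influence:
% Omega_0 is the rotation rate, u_rms the rms convective velocity, and k_f the
% wavenumber of the energy-carrying eddies. The paper's normalisation may differ.
\[
  \mathrm{Co} \;=\; \frac{2\,\Omega_0}{u_{\mathrm{rms}}\,k_{\mathrm{f}}}
  \;\sim\; \mathrm{Ro}^{-1}.
\]
```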