977 results for Time variable gravity


Relevance: 30.00%

Abstract:

The present manuscript represents the completion of a research path carried out during my doctoral studies at the University of Turku. It presents my scientific contributions to the field of open quantum systems, accomplished in collaboration with other scientists. The main subject investigated in the thesis is the non-Markovian dynamics of open quantum systems, with a focus on continuous variable quantum channels, e.g. quantum Brownian motion models. Non-Markovianity is here interpreted as a manifestation of a flow of information exchanged between the system and the environment during the dynamical evolution. While in Markovian systems the flow is unidirectional, i.e. from the system to the environment, in non-Markovian systems there are time windows in which the flow is reversed and the quantum state of the system may regain coherence and correlations previously lost. Signatures of non-Markovian behavior have been studied in connection with the dynamics of quantum correlations such as entanglement and quantum discord. Moreover, in an attempt to establish non-Markovianity as a resource for quantum technologies, it is proposed, for the first time, to consider its effects in practical quantum key distribution protocols. It is proven that the security of coherent-state protocols can be enhanced using the non-Markovian properties of the transmission channels. The thesis is divided into two parts: in the first part I introduce the reader to the world of continuous variable open quantum systems and non-Markovian dynamics. The second part consists of a collection of five publications on the topic.
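As context for the information-flow picture above: a standard witness of non-Markovianity (due to Breuer, Laine and Piilo) flags any interval in which the trace distance between two evolving states increases. Below is a minimal Python sketch of that witness for a toy qubit dephasing channel; the decoherence factor q(t) and its parameters are illustrative assumptions, not taken from the thesis.

import numpy as np

def trace_distance(rho1, rho2):
    # D = 0.5 * Tr|rho1 - rho2|, from the eigenvalues of the Hermitian
    # difference matrix
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho1 - rho2)))

def dephase(rho, q):
    # pure-dephasing channel: off-diagonal elements are scaled by q(t)
    out = rho.copy()
    out[0, 1] *= q
    out[1, 0] *= q
    return out

# a maximally distinguishable pair of equatorial qubit states
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)

# toy non-Markovian decoherence factor: a damped oscillation, so coherence
# (and hence distinguishability) partially revives -- assumed parameters
t = np.linspace(0.0, 10.0, 1000)
q = np.exp(-0.3 * t) * np.cos(2.0 * t)

D = np.array([trace_distance(dephase(plus, qi), dephase(minus, qi)) for qi in q])

# BLP witness: any interval with dD/dt > 0 signals information backflow
print(f"fraction of steps with backflow: {(np.diff(D) > 0).mean():.2f}")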

Relevance: 30.00%

Abstract:

Chaotic dynamical systems exhibit trajectories in phase space that converge to a strange attractor. The strangeness of a chaotic attractor is associated with its dimension, which is noninteger. This contribution presents an overview of the main definitions of dimension, discussing their evaluation from time series by means of the correlation dimension and the generalized dimension. The investigation is applied to the nonlinear pendulum, where signals are generated by numerical integration of the mathematical model, selecting a single variable of the system as a time series. In order to simulate experimental data sets, random noise is introduced into the time series. State space reconstruction and the determination of attractor dimensions are carried out for both periodic and chaotic signals. Results obtained from the time series analyses are compared with a reference value obtained from the analysis of the mathematical model, allowing the sensitivity to noise to be estimated. This procedure allows one to identify the best techniques to be applied in the analysis of experimental data.
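For reference, the correlation dimension mentioned above is usually estimated with the Grassberger-Procaccia algorithm: delay-embed the scalar series, compute the correlation sum C(r), and fit the slope of log C(r) versus log r in the scaling region. A minimal Python sketch follows; the file name, embedding dimension, delay, and radius range are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.spatial.distance import pdist

def delay_embed(x, dim, tau):
    # time-delay (Takens) embedding of a scalar series into R^dim
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sum(X, r):
    # Grassberger-Procaccia C(r): fraction of point pairs closer than r
    return np.mean(pdist(X) < r)

x = np.loadtxt("pendulum_series.txt")          # hypothetical data file
X = delay_embed(x, dim=4, tau=10)
radii = np.logspace(-2, 0, 20)
C = np.array([correlation_sum(X, r) for r in radii])

# correlation dimension = slope of log C(r) vs log r in the scaling region
mask = C > 0
slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
print(f"correlation dimension estimate: {slope:.2f}")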

Relevance: 30.00%

Abstract:

Pumping processes requiring a wide flow range are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, variable-speed control allows the required process output to be delivered with a varying number of operated pump units and selected rotational speed references. However, optimizing parallel-connected, rotational-speed-controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system under varying operating conditions. The information available for system modelling in typical parallel pumping applications, such as waste water treatment and various cooling and water delivery tasks, can be limited, and the lack of real-time operation point monitoring often limits accurate energy-efficiency optimization. Hence, easily implementable control strategies that can be adopted with minimal system data are needed. This doctoral thesis concentrates on methods that allow the energy-efficient use of variable-speed-controlled parallel pumps in systems in which each parallel pump unit consists of a centrifugal pump, an electric motor, and a frequency converter. Firstly, the operating conditions suitable for variable-speed-controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic-curve-based operation point estimation with a frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operation point estimation and the sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased without installing new, more efficient components, simply by adopting suitable control strategies. An easily implementable and adaptive control strategy for variable-speed-controlled parallel pumping systems can be created by utilizing the pump operation point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements, and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining, for each parallel pump unit, a speed reference that yields energy-efficient operation of the pumping system.
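To illustrate the characteristic-curve-based operation point estimation the thesis builds on: a frequency converter already provides rotational speed and shaft power estimates, so the flow can be inferred by scaling the pump's nominal QP curve to the current speed with the affinity laws and inverting it. A minimal Python sketch under those assumptions follows; the curve points and nominal speed are illustrative, not taken from the thesis.

import numpy as np

# nominal-speed characteristic curve (shaft power vs. flow), e.g. digitized
# from a manufacturer's data sheet -- illustrative numbers
Q_nom = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # flow (l/s) at n_nom
P_nom = np.array([2.0, 2.8, 3.5, 4.1, 4.6])       # shaft power (kW) at n_nom
n_nom = 1450.0                                    # nominal speed (rpm)

def estimate_flow(n, P_shaft):
    # scale the nominal QP curve to the current speed with the affinity
    # laws (Q ~ n, P ~ n^3), then invert it at the estimated shaft power
    k = n / n_nom
    return np.interp(P_shaft, P_nom * k**3, Q_nom * k)

# speed and shaft power come from the frequency converter's own estimates,
# so no extra sensors are needed (example values)
print(f"estimated flow: {estimate_flow(n=1200.0, P_shaft=2.4):.1f} l/s")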

Relevance: 30.00%

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 30.00%

Abstract:

Early stimulation has been shown to produce long-lasting effects in many species. Prenatal exposure to strong stressors may affect development of the nervous system, leading to behavioral impairment in adult life. The purpose of the present work was to study the postnatal harmful effects of exposure to variable mild stresses in rats during pregnancy. Female Holtzman rats were submitted daily to one session of chronic variable stress (CVS) during pregnancy (prenatal stress; PS group). Control pregnant rats (C group) were left undisturbed. The pups of PS and C dams were weighed and separated into two groups 48 h after delivery. One group was kept with their own dams (PS group, N = 70; C group, N = 36), while the other PS pups were cross-fostered to C dams (PSF group, N = 47) and the other C pups were cross-fostered to PS dams (CF group, N = 58). Pups were left undisturbed until weaning (postnatal day 28). The male offspring underwent motor activity tests (day 28), enriched environment tests (day 37) and social interaction tests (day 42) in an animal activity monitor. Body weight was recorded on days 2, 28 and 60. The PS pups showed lower birth weight than C pups (Duncan's test, P<0.05). The PS pups suckling from their stressed mothers displayed greater preweaning mortality (C: 23%, PS: 60%; χ² test, P<0.05) and lower body weight than controls on days 28 and 60 (Duncan's test, P<0.05 and P<0.01, respectively). The PS, PSF and CF groups showed lower motor activity scores than controls when tested on day 28 (Duncan's test, P<0.01 for the PS group and P<0.05 for the CF and PSF groups). In the enriched environment test performed on day 37, between-group differences in total motor activity were not detected; however, the PS, CF and PSF groups displayed less exploration time than controls (Duncan's test, P<0.05). Only the PS group showed impaired motor activity and impaired social behavior on day 42 (Duncan's test, P<0.05). Thus, CVS treatment during gestation combined with suckling from a previously stressed mother caused long-lasting physical and behavioral changes in rats. Cross-fostering PS-exposed pups to a dam that had not been subjected to stress counteracted most of the harmful effects of the treatment. It is probable that prenatal stress plus suckling from a previously stressed mother induces long-lasting changes in the neurotransmitter systems involved in emotional regulation. Further experiments using neurochemical and pharmacological approaches would be of interest in this model.

Relevance: 30.00%

Abstract:

Studies have shown an effect of time of day of training on long-term explicit memory, with a greater effect in the afternoon than in the morning. However, these studies did not control for the chronotype variable. Therefore, the purpose of this study was to assess whether the time-of-day effect on explicit memory would persist if this variable were controlled, in addition to identifying the occurrence of a possible synchrony effect. A total of 68 undergraduates were classified as morning, intermediate, or afternoon types. The subjects listened to a list of 10 words during the training phase and immediately performed a recognition task, a procedure which they repeated twice. One week later, they underwent an unannounced recognition test. The target list and the distractor words were the same in all series. The subjects were allocated to two groups according to acquisition time: a morning group (N = 32) and an afternoon group (N = 36). One week later, some of the subjects in each of these groups were tested in the morning (N = 35) or in the afternoon (N = 33). The groups had similar chronotypes. Long-term explicit memory performance was not affected by test time of day or by chronotype. However, there was a training time-of-day effect (F(1,56) = 53.667; P = 0.009), with better performance for those who trained in the afternoon. Our data indicate that the advantage of afternoon training for long-term memory performance does not depend on chronotype, and that this performance is not affected by a synchrony effect.

Relevance: 30.00%

Abstract:

Laser scribing is currently a growing material processing method in industry. The benefits of laser scribing technology are being studied, for example, for improving the efficiency of solar cells. Due to the high quality requirements of the fast scribing process, it is important to monitor the process in real time to detect possible defects as they occur. However, there is a lack of studies on real-time monitoring of laser scribing. Monitoring methods developed for other laser processes, such as laser welding, are too slow, and existing applications cannot be transferred to fast laser scribing monitoring. The aim of this thesis is to find a method for monitoring laser scribing with a high-speed camera and to evaluate the reliability and performance of the developed monitoring system experimentally. The laser used in the experiments is an IPG ytterbium pulsed fiber laser with a maximum average power of 20 W, and the scan head optics are Scanlab's Hurryscan 14 II with an f100 telecentric lens. The camera was connected to the laser scanner with a camera adapter to follow the laser process. A powerful, fully programmable industrial computer was chosen to execute the image processing and analysis. Algorithms for defect analysis, based on particle analysis, were developed using LabVIEW system design software. The performance of the algorithms was assessed by analyzing a static image of the scribing line with a resolution of 960x20 pixels. As a result, the maximum analysis speed was 560 frames per second. The reliability of the algorithm was evaluated by imaging a scribing path containing a variable number of defects at 2000 mm/s with the laser turned off; the image analysis speed was then 430 frames per second. The experiment was successful: the algorithms detected all defects on the scribing path. The final monitoring experiment was performed during a laser process. However, it was challenging to make the active laser illumination work with the laser scanner due to the physical dimensions of the laser lens and the scanner. For reliable defect detection, the illumination system needs to be replaced.
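The thesis implements its particle-analysis defect detection in LabVIEW; as a rough analogue, the same idea (binarize the line image, then treat connected components above a minimum area as defects) can be sketched in Python with OpenCV. The threshold direction and minimum area below are assumptions for illustration, not values from the thesis.

import cv2
import numpy as np

def find_defects(frame, min_area=5):
    # particle analysis: binarize with Otsu's threshold, then extract
    # connected components and keep those whose pixel area exceeds a minimum
    _, binary = cv2.threshold(frame, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return [(tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA]))
            for i in range(1, n)                    # row 0 is background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

# hypothetical 960x20 grayscale frame, matching the resolution used above
frame = np.random.randint(0, 256, (20, 960), dtype=np.uint8)
for centroid, area in find_defects(frame):
    print(f"defect at {centroid}, area {area} px")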

Relevance: 30.00%

Abstract:

Fluid handling systems account for a significant share of the global consumption of electrical energy. They also suffer from problems that reduce their energy efficiency and increase life-cycle costs. Detecting or predicting these problems in time can make fluid handling systems more environmentally and economically sustainable to operate. In this Master's thesis, significant problems in fluid systems were studied and the possibilities of developing variable-speed-drive-based detection methods for them were discussed. A literature review was conducted to find significant problems occurring in fluid handling systems containing pumps, fans and compressors. To find case examples for evaluating the feasibility of variable-speed-drive-based methods, queries were sent to industrial companies. As a result, the possibility of detecting heat exchanger fouling with a variable-speed drive was analysed with data from three industrial cases. It was found that a mass flow rate estimate, which can be generated with a variable-speed drive, can be used together with temperature measurements to monitor a heat exchanger's thermal performance. Secondly, it was found that the fouling-related increase in the pressure drop across a heat exchanger can be monitored with a variable-speed drive. Lastly, for systems where the flow device is speed-controlled based on a pressure measurement, it was concluded that an increasing rotational speed can be interpreted as progressive fouling of the heat exchanger.
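As a concrete reading of the first finding: with a drive-generated mass flow estimate and four temperature measurements, the heat exchanger's UA value can be trended over time, and a sustained drop indicates fouling. A minimal Python sketch under a counter-flow LMTD assumption follows; all numerical values are illustrative, not from the thesis cases.

import numpy as np

CP_WATER = 4186.0  # specific heat of water, J/(kg K)

def ua_estimate(m_dot, t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    # heat duty from the hot side, then UA = Q / LMTD, with the
    # counter-flow log-mean temperature difference
    q = m_dot * CP_WATER * (t_hot_in - t_hot_out)
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    lmtd = dt1 if dt1 == dt2 else (dt1 - dt2) / np.log(dt1 / dt2)
    return q / lmtd

# trend UA against a clean baseline: a sustained drop suggests fouling
ua_clean = ua_estimate(12.0, 80.0, 60.0, 20.0, 45.0)
ua_now = ua_estimate(12.0, 80.0, 65.0, 20.0, 41.0)
print(f"UA relative to clean baseline: {ua_now / ua_clean:.2f}")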

Relevance: 30.00%

Abstract:

Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structure, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of state variables which summarize their dynamics. In beta pricing models, it is often said that only factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, conditional independence between the contemporaneous returns of a large number of assets given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specification of dynamic asset pricing models, covering the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role in the validity of standard CAPM-like stock pricing and preference-free option pricing.
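In standard notation (a textbook rendering, not quoted from the paper), the relations the abstract refers to can be written in LaTeX as

E_t\!\left[\, m_{t+1}\, R_{i,t+1} \right] = 1, \qquad
m_{t+1} = \sum_{k=1}^{K} \lambda_k(s_t)\, F_{k,t+1},

i.e. the SDF m_{t+1} prices every gross return R_{i,t+1} and is spanned by the K factors F_{k,t+1}, with coefficients \lambda_k that are deterministic functions of the state variables s_t; substituting the second equation into the first yields the conditional beta pricing relation for expected returns.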

Relevance: 30.00%

Abstract:

This paper studies tests of joint hypotheses in time series regression with a unit root, in which weakly dependent and heterogeneously distributed innovations are allowed. We consider two types of regression: one with a constant and a lagged dependent variable, and the other with a trend added. The statistics studied are the regression "F-tests" originally analysed by Dickey and Fuller (1981) in a less general framework. The limiting distributions are found using functional central limit theory. New test statistics are proposed which require only already tabulated critical values but which are valid in a quite general framework (including finite-order ARMA models generated by Gaussian errors). This study extends the results on single coefficients derived in Phillips (1986a) and Phillips and Perron (1986).
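For concreteness, the two regressions and joint nulls in question are, in a standard rendering of the Dickey and Fuller (1981) notation (not quoted from the paper):

y_t = \mu + \alpha y_{t-1} + u_t, \qquad H_0 : (\mu, \alpha) = (0, 1) \quad (\Phi_1),

y_t = \mu + \beta t + \alpha y_{t-1} + u_t, \qquad H_0 : (\beta, \alpha) = (0, 1) \quad (\Phi_3),

where the F-type statistics \Phi_1 and \Phi_3 test the unit root jointly with the absence of drift and of trend, respectively.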

Relevance: 30.00%

Abstract:

To meet the federal government's requirements on wait times for knee and hip replacement surgery, Canadian institutions have adopted wait-list management strategies with variable levels of success. Our research question aimed to understand which factors made it possible to sustain, for at least 6-12 months, a wait time meeting the federal government's requirements. We developed a model with four factors, inspired by Parsons' (1977) model, to analyse governance, culture, resources and tools. Three case studies were conducted. In short, the first case was able to meet the requirements for six months but unable to sustain them, the second case was able to sustain the requirements for more than 18 months, and the third case was unable to reach the targets. Documents were collected and interviews were conducted with the people involved in the strategy. The results indicate that the hospital that was able to sustain the wait time has certain characteristics: it performs hip and knee replacement surgery exclusively, and it has motivated staff, undistracted by other concerns, with a strong team spirit. The two other cases had to contend with a medical culture that was less homogeneous and less focused on meeting targets, dispersed resources, and an imprecise intra-institutional policy. The "factory hospital" model is attractive for highly specialized surgery. However, patients are selected for simple surgeries with a low risk of complications, so it cannot be regarded as the sustainable model par excellence.

Relevance: 30.00%

Abstract:

White dwarf stars represent the endpoint of the evolution of 97% of the stars in our Galaxy, including our Sun. The study of the global properties of these stars (temperature distribution, mass distribution, luminosity function, etc.) requires statistically complete and well-defined samples. Although several white dwarf surveys exist in the literature, most suffer from significant statistical biases for this kind of analysis. The most representative sample of the white dwarf population remains, to this day, the one defined in a complete volume restricted to the immediate solar neighbourhood, within 20 pc (~65 light-years) of the Sun. Unfortunately, since white dwarfs are intrinsically faint stars, this sample contains only ~130 objects, compromising any significant statistical study. The goal of our study is to survey the white dwarf population in the solar neighbourhood out to a distance of 40 pc, a volume eight times larger. We therefore undertook to identify all white dwarfs within 40 pc of the Sun from SUPERBLINK, a vast catalogue containing proper motions and photometric data for more than 2 million stars. Our approach is based on the reduced proper motion method, which separates white dwarfs from other stellar populations. The distances of all white dwarf candidates are estimated using theoretical colour-magnitude relations in order to identify objects within 40 pc of the Sun in the northern hemisphere. Spectroscopic confirmation of the white dwarf status of our ~1100 candidates then required 15 observing runs on three large telescopes at Kitt Peak, Arizona, as well as some sixty hours allocated on the 8 m telescopes of the Gemini North and South observatories. We thus discovered 322 new white dwarfs of several different spectral types, of which 173 lie within 40 pc, a 40% increase in the number of known white dwarfs within this volume. Among these new white dwarfs, 4 probably lie within 20 pc of the Sun. Moreover, we demonstrate that our technique is very efficient at identifying white dwarfs in the densely populated region of the Galactic plane. We then present a detailed spectroscopic and photometric analysis of our sample using model atmospheres to determine the physical properties of these stars, in particular their temperature, surface gravity and chemical composition. Our statistical analysis of these properties, based on a sample nearly three times larger than the 20 pc one, reveals that we have successfully identified the most massive, and hence least luminous, stars of this population, which are often missing from most published surveys. We have also identified several very cool, and hence potentially very old, white dwarfs, which allow us to better define the cool end of the luminosity function and, eventually, the age of the Galactic disk. Finally, we also discovered several objects of astrophysical interest, including two new ZZ Ceti variable white dwarfs, several magnetic white dwarfs, and numerous unresolved binary systems.
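For reference, the reduced proper motion on which the candidate selection is based is defined, in its standard form, as

H = m + 5\log_{10}\mu + 5 = M + 5\log_{10} v_{\mathrm{tan}} - 3.38,

where m is the apparent magnitude, \mu the proper motion in arcsec/yr, M the absolute magnitude and v_{tan} the tangential velocity in km/s; because white dwarfs are far less luminous than main-sequence stars of the same colour, they separate cleanly in a reduced proper motion diagram.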

Relevance: 30.00%

Abstract:

This work aims to study the variation in subduction zone geometry along and across the arc and the fault pattern within the subducting plate. The depth of penetration as well as the dip of the Benioff zone vary considerably along the arc, corresponding to the curvature of the fold-thrust belt, which varies from concave to convex in different sectors of the arc. The entire arc is divided into 27 segments, and the depth sections thus prepared are used to investigate the average dip of the Benioff zone in different parts of the arc, the penetration depth of the subducting lithosphere, the subduction zone geometry underlying the trench, the arc-trench gap, etc. The study also describes how the different seismogenic sources in the region are identified, the estimation of the moment release rate, and the deformation pattern. The region is divided into broad seismogenic belts. Based on previous studies and the seismicity pattern, we identified several broad, distinct seismogenic belts/sources: 1) the outer arc region consisting of the Andaman-Nicobar islands; 2) the back-arc Andaman Sea; 3) the Sumatran fault zone (SFZ); 4) the Java onshore region, termed the Java fault zone (JFZ); 5) the Sumatran fore-arc sliver plate containing the Mentawai fault (MFZ); 6) the offshore Java fore-arc region; and 7) the Sunda Strait region. As the seismicity is variable, it is difficult to demarcate individual seismogenic sources. Hence, we employed a moving-window method with a window length of 3-4° and 50% overlap, proceeding from one end of the arc to the other. We defined 4 sources each in the Andaman fore-arc and back-arc regions, 9 such sources (moving windows) in the Sumatran fault zone (SFZ), 9 sources in the offshore SFZ region and 7 sources in the offshore Java region. Because of the low seismicity along the JFZ, it is separated into three seismogenic sources, namely West Java, Central Java and East Java. The Sunda Strait is considered a single seismogenic source. The deformation rates for each of the seismogenic zones have been computed. A detailed error analysis of the velocity tensors using the Monte Carlo simulation method has been carried out in order to obtain uncertainties. The eigenvalues and the respective eigenvectors of the velocity tensor are computed to analyse the actual deformation pattern in the different zones. The results obtained are discussed in the light of regional tectonics, and their implications for geodynamics are enumerated. In the light of the recent major earthquakes (the 26 December 2004 and 28 March 2005 events) and the ongoing seismic activity, we have recalculated the variation in crustal deformation rates before and after these earthquakes in the Andaman-Sumatra region, including data up to 2005, and the significant results are presented. Finally, the downgoing lithosphere along the subduction zone is modelled using free-air gravity data, taking into consideration the thickness of the crustal layer, the thickness of the subducting slab, the sediment thickness, the presence of volcanism, the proximity of the continental crust, etc. A systematic and detailed gravity interpretation, constrained by seismicity and seismic data in the Andaman arc and Andaman Sea region, is presented in order to delineate the crustal structure and density heterogeneities along and across the arc and their correlation with seismogenic behaviour.
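The moving-window segmentation described above is straightforward to express in code; a minimal Python sketch follows, with the arc bounds, window width and event catalogue all illustrative assumptions rather than values from the study.

import numpy as np

def moving_windows(start_deg, end_deg, width_deg=3.5, overlap=0.5):
    # latitude windows of fixed width with fractional overlap (50% here),
    # marching from one end of the arc to the other
    step = width_deg * (1.0 - overlap)
    lo, windows = start_deg, []
    while lo + width_deg <= end_deg + 1e-9:
        windows.append((lo, lo + width_deg))
        lo += step
    return windows

# hypothetical catalogue latitudes for an arc spanning roughly 6-14 deg N
lats = np.random.uniform(6.0, 14.0, 500)
for lo, hi in moving_windows(6.0, 14.0):
    n_events = ((lats >= lo) & (lats < hi)).sum()
    print(f"window {lo:.1f}-{hi:.1f} deg: {n_events} events")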

Relevance: 30.00%

Abstract:

This thesis is entitled "Studies on quasinormal modes and late-time tails in black hole spacetimes". In this thesis, the signatures of these new theories are probed through the evolution of field perturbations on the black hole spacetimes of the theory. Chapter 1 gives a general introduction to black holes and their perturbation formalism; the various concepts in the area covered by the thesis are also elucidated in this chapter. Chapter 2 describes the evolution of massive, charged scalar field perturbations around a Reissner-Nordstrom black hole surrounded by a static and spherically symmetric quintessence. Chapter 3 covers the evolution of massless scalar, electromagnetic and gravitational fields around a spherically symmetric black hole whose asymptotics are defined by the quintessence, with special interest in the late-time behavior. Chapter 4 examines the evolution of the Dirac field around a Schwarzschild black hole surrounded by quintessence; detailed numerical simulations are carried out to analyse the nature of the field on different surfaces of constant radius. Chapter 5 is dedicated to the study of the evolution of massless fields around the black hole geometry in HL gravity.

Relevance: 30.00%

Abstract:

The reliability of an equipment or device is often meant to indicate the probability that it carries out the functions expected of it adequately, without failure and within specified performance limits, at a given age, for a desired mission time, under the designated application and operating environmental stresses. The approaches employed in reliability studies can be broadly classified as probabilistic and deterministic: the main interest in the former is to devise tools and methods to identify the random mechanism governing the failure process through a proper statistical framework, while the latter addresses the question of finding the causes of failure and the steps to reduce individual failures, thereby enhancing reliability. In the probabilistic approach, to which the present study subscribes, the concept of the life distribution, a mathematical idealisation that describes the failure times, is fundamental, and a basic question a reliability analyst has to settle is the form of the life distribution. It is for this reason that a major share of the literature on the mathematical theory of reliability focuses on methods of arriving at reasonable models of failure times and on characterizing the failure patterns that induce such models. The application of the methodology of lifetime distributions is not confined to the assessment of the endurance of equipment and systems; it ranges over a wide variety of scientific investigations in which "lifetime" may not refer to the length of life in the literal sense but can be conceived, in its most general form, as a non-negative random variable. Thus the tools developed in connection with modelling lifetime data have found applications in other areas of research such as actuarial science, engineering, the biomedical sciences, economics, extreme value theory, etc.
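In standard notation (a textbook summary, not specific to this thesis), a life distribution for a non-negative lifetime T is characterized equivalently by its distribution function F, survival function R, or hazard rate h:

R(t) = P(T > t) = 1 - F(t), \qquad
h(t) = \frac{f(t)}{R(t)} = -\frac{d}{dt}\ln R(t), \qquad
R(t) = \exp\!\left(-\int_0^t h(u)\,du\right),

where f is the density of T; the form of the hazard rate (increasing, decreasing, bathtub-shaped) encodes the failure pattern and thereby motivates the choice of the lifetime model.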