882 results for Direction of time
Abstract:
Two hundred eighty-eight 32-wk-old Hisex White laying hens were used in this research over a 10-week period, arranged in a 2 x 5 completely randomized factorial design with three replicates of eight birds per treatment. Two sources, fish oil (OP) and marine algae (AM), were assigned at five DHA levels (120, 180, 240, 300 and 360 mg/100 g diet), together with two control groups, one fed a corn-soybean basal diet (CON) and one fed a diet supplemented with AM (AM420), to study the effect of time (0, 2, 4, 6 and 8 weeks, wk) on the efficiency of egg yolk fatty acid enrichment. The means of total polyunsaturated fatty acids (PUFAs) varied (p<0.01) from 17.63% (OP360) to 22.08% (AM420), and n-6 PUFAs ranged from 45.8 mg/g (OP360) and 40.37 mg/g (OP360, 4 wk) to 65.82 mg/g (AM420) and 68.79 mg/g of yolk (AM120, 8 wk). Regarding the influence of sources and levels over time, the means of n-3 PUFAs increased from 5.58 mg/g (AM120, 2 wk) to 14.16 mg/g (OP360, 6 wk), compared with an average of 3.34 mg of n-3 PUFAs/g of yolk (CON). Mean DHA likewise increased from 22.34 mg (CON) to 176.53 mg (mean, OP360), 187.91 mg (OP360, 8 wk) and 192.96 mg (OP360, 6 wk), and to 134.18 mg (mean, OP360), 135.79 mg (AM420, 6 wk) and 149.75 mg of DHA (AM420, 8 wk) per yolk. The opposite was observed for mean AA: under the effect of sources, levels and times, it decreased (p<0.01) from 99.83 mg (CON) to 31.99 mg (OP360, 4 wk), and from 40.43 mg (mean, OP360) to 61.21 mg (AM420) and 71.51 mg of AA per yolk (mean, AM420). Variations in mean yolk weight from 15.75 g (OP360) to 17.08 g (AM420), in total yolk lipids from 32.55% (AM420) to 34.08% (OP360), and in yolk fat from 5.28 g (AM240) to 5.84 g (AM120) were not affected (p>0.05) by the treatments, sources, levels or times studied.
Starting from week 2, the hens increased the level of n-3 PUFAs in the egg yolks; the increase was pronounced (p<0.01) until week 4, after which the n-3 PUFA levels tended to stabilize, around week 8 of the experiment, when saturation of the tissues and yolk was most complete.
Abstract:
In this work we introduce an analytical approach to the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided, and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency warping transform is computed by considering two additive operators: the first represents its nonuniform Fourier transform approximation, and the second suppresses aliasing. The first operator is analytically characterized and can be computed quickly by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
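As a rough illustration of the first of the two operators, the nonuniform Fourier transform approximation can be sketched as a resampling of the DFT spectrum at warped frequency positions. The following is a minimal sketch, not the authors' algorithm: it uses plain linear interpolation and omits the aliasing-suppression operator entirely.

```python
import numpy as np

def frequency_warp(x, warp):
    """Crude frequency-warping approximation: resample the DFT of x at
    warped normalized frequencies by linear interpolation.
    Covers only the nonuniform-Fourier-transform part of the transform;
    the aliasing-suppression operator described above is omitted."""
    X = np.fft.rfft(x)
    n_bins = len(X)
    k = np.linspace(0.0, 1.0, n_bins)      # uniform normalized frequency grid
    wk = np.clip(warp(k), 0.0, 1.0)        # warped positions, kept in [0, 1]
    pos = wk * (n_bins - 1)
    # interpolate real and imaginary parts of the spectrum separately
    Xw = (np.interp(pos, np.arange(n_bins), X.real)
          + 1j * np.interp(pos, np.arange(n_bins), X.imag))
    return np.fft.irfft(Xw, n=len(x))
```

With the identity map `warp = lambda k: k` the sketch reduces to a round trip through the FFT and returns the input signal, which is a quick sanity check of the resampling logic.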
Abstract:
Nowadays, it is clear that creating a sustainable future for the next generations requires re-thinking the industrial application of chemistry. It is also evident that more sustainable chemical processes may be economically convenient compared with conventional ones, because fewer by-products mean lower costs for raw materials, separation and disposal treatments; they also imply an increase in productivity and, as a consequence, smaller reactors can be used. In addition, an indirect gain could derive from the better public image of a company marketing sustainable products or processes. In this context, oxidation reactions play a major role, being the tool for the production of huge quantities of chemical intermediates and specialties. Potentially, the impact of these productions on the environment could have been much worse than it is, had continuous efforts not been spent on improving the technologies employed. Substantial technological innovations have driven the development of new catalytic systems and the improvement of reaction and process technologies, helping to move the chemical industry towards a more sustainable and ecological approach. The roadmap for the application of these concepts includes new synthetic strategies, alternative reactants, catalyst heterogenisation, and innovative reactor configurations and process design. In order to implement all these ideas in real projects, the development of more efficient reactions is a primary target. Yield, selectivity and space-time yield are the right metrics for evaluating reaction efficiency. In the case of catalytic selective oxidation, the control of selectivity has always been the principal issue, because the formation of total oxidation products (carbon oxides) is thermodynamically more favoured than the formation of the desired, partially oxidized compound.
As a matter of fact, only a few oxidation reactions achieve total, or close to total, conversion, and selectivity is usually limited by the formation of by-products or co-products, which often implies unfavourable process economics; moreover, the cost of the oxidant sometimes further penalizes the process. During my PhD work, I investigated four reactions that are emblematic of the new approaches used in the chemical industry. In Part A of my thesis, a new process aimed at a more sustainable production of menadione (vitamin K3) is described. The "greener" approach includes the use of hydrogen peroxide in place of chromate (moving from a stoichiometric oxidation to a catalytic one), thereby also avoiding the production of dangerous waste. Moreover, I studied the possibility of using a heterogeneous catalytic system able to efficiently activate hydrogen peroxide. The overall process would be carried out in two steps: the first is the methylation of 1-naphthol with methanol to yield 2-methyl-1-naphthol; the second is the oxidation of the latter compound to menadione. The catalyst for this latter step, the reaction that was the object of my investigation, consists of Nb2O5-SiO2 prepared by the sol-gel technique. The catalytic tests were first carried out under conditions that simulate the in-situ generation of hydrogen peroxide, i.e., using a low concentration of the oxidant. Then, experiments were carried out using higher hydrogen peroxide concentrations. The study of the reaction mechanism was fundamental to obtaining indications about the best operating conditions and improving the selectivity to menadione. In Part B, I explored the direct oxidation of benzene to phenol with hydrogen peroxide. The current industrial process for phenol is the oxidation of cumene with oxygen, which also co-produces acetone.
This can be considered a case of how economics can drive the sustainability issue; in fact, the new process, which yields phenol directly, avoids the co-production of acetone (a burden for phenol, because the market requirements for the two products are quite different) and might be economically convenient with respect to the conventional process, if a high selectivity to phenol were obtained. Titanium silicalite-1 (TS-1) is the catalyst chosen for this reaction. By comparing the reactivity results obtained with TS-1 samples having different chemical-physical properties, and analyzing in detail the effect of the most important reaction parameters, we could formulate some hypotheses concerning the reaction network and mechanism. Part C of my thesis deals with the hydroxylation of phenol to hydroquinone and catechol. This reaction is already applied industrially but, for economic reasons, an improvement of the selectivity to the para di-hydroxylated compound and a decrease of the selectivity to the ortho isomer would be desirable. In this case too, the catalyst used was TS-1. The aim of my research was to find a method to control the selectivity ratio between the two isomers and, finally, to make the industrial process more flexible, so that its performance can be adapted to fluctuations in market requirements. The reaction was carried out both in a stirred batch reactor and in a re-circulating fixed-bed reactor. In the first system, the effect of various reaction parameters on catalytic behaviour was investigated: the type of solvent or co-solvent, and the particle size. With the second reactor type, I investigated the possibility of using a continuous system, with the catalyst shaped into extrudates (instead of powder), in order to avoid the catalyst filtration step. Finally, Part D deals with the study of a new process for the valorisation of glycerol by means of its transformation into valuable chemicals.
This molecule is nowadays produced in large amounts as a co-product of biodiesel synthesis; it is therefore considered a raw material from renewable resources (a bio-platform molecule). Initially, we tested the oxidation of glycerol in the liquid phase, with hydrogen peroxide and TS-1; however, the results achieved were not satisfactory. We then investigated the gas-phase transformation of glycerol into acrylic acid, with the intermediate formation of acrolein: the latter can be obtained by dehydration of glycerol and then oxidized to acrylic acid. The oxidation step from acrolein to acrylic acid is already optimized at the industrial level; therefore, we decided to investigate the first step of the process in depth. I studied the reactivity of heterogeneous acid catalysts based on sulphated zirconia. Tests were carried out under both aerobic and anaerobic conditions, in order to investigate the effect of oxygen on the catalyst deactivation rate (a main problem usually encountered in glycerol dehydration). Finally, I studied the reactivity of bifunctional systems made of Keggin-type polyoxometalates, either alone or supported on sulphated zirconia, thereby combining the acid functionality (necessary for the dehydration step) with the redox one (necessary for the oxidation step). In conclusion, during my PhD work I investigated reactions that apply "green chemistry" rules and strategies; in particular, I studied new, greener approaches for the synthesis of chemicals (Parts A and B), the optimisation of reaction parameters to make an oxidation process more flexible (Part C), and the use of a bio-platform molecule for the synthesis of a chemical intermediate (Part D).
Abstract:
In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effect model. The main drawback of the hedonic approach is the large number of parameters, since art data generally include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped in time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the E-M algorithm. We test the finite-sample properties of the estimators and the validity of our own R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
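The two-level structure described above (items as level-1 units nested in auction dates as level-2 units) can be illustrated with a small simulation. This is only a sketch of the data structure with assumed parameter values, using a simple within-group estimator rather than the authors' time-dependent random-effects model or their E-M implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dates, n_items = 50, 40                # level-2 units (auction dates) x items per date
beta, sigma_u, sigma_e = 2.0, 1.0, 0.5   # assumed hedonic slope and variance components

x = rng.normal(size=(n_dates, n_items))             # item-level covariate (e.g. log size)
u = rng.normal(scale=sigma_u, size=(n_dates, 1))    # date-level random effect
y = beta * x + u + rng.normal(scale=sigma_e, size=x.shape)  # log hammer price

# Within-group (date-demeaned) estimator of beta: removing each date's mean
# eliminates the date-level random intercepts u before regressing
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_hat = float((xd * yd).sum() / (xd ** 2).sum())
```

Demeaning within each auction date recovers the item-level slope even though the date effects are unobserved, which is the intuition behind treating time points as level-2 units.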
Abstract:
The topic of this thesis is the feedback stabilization of the attitude of magnetically actuated spacecraft. The use of magnetic coils is an attractive solution for the generation of control torques on small satellites flying inclined low Earth orbits, since magnetic control systems are characterized by reduced weight and cost and higher reliability, and require less power than other kinds of actuators. At the same time, the possibility of smooth modulation of the control torques reduces coupling of the attitude control system with flexible modes, thus preserving pointing precision with respect to the case in which pulse-modulated thrusters are used. The actuation principle, based on the interaction between the Earth's magnetic field and the magnetic field generated by the set of coils, introduces an inherent nonlinearity, because control torques can be delivered only in the plane orthogonal to the direction of the geomagnetic field vector. In other words, the system is underactuated: the rotational degrees of freedom of the spacecraft, modeled as a rigid body, exceed the number of independent control actions. The solution of the control problem for underactuated spacecraft is also interesting in the case of actuator failure, e.g. after the loss of a reaction wheel in a three-axis stabilized spacecraft with no redundancy. Well-known control strategies are no longer applicable in this case, for both regulation and tracking, so new methods have been suggested for tackling this particular problem. The main contribution of this thesis is to propose continuous time-varying controllers that globally stabilize the attitude of a spacecraft, when magnetic torquers alone are used and when a momentum wheel supports magnetic control in order to overcome the inherent underactuation.
A kinematic maneuver planning scheme, stability analyses, and detailed simulation results are also provided, with new theoretical developments and particular attention toward application considerations.
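The underactuation described above follows directly from the form of the magnetic torque, tau = m x B: whatever dipole moment m the coils command, the resulting torque has no component along the local geomagnetic field. A minimal numerical check, with illustrative values not taken from the thesis:

```python
import numpy as np

# Illustrative values, not from the thesis
B = np.array([1.5e-5, -2.0e-5, 3.5e-5])   # geomagnetic field in body axes [T]
m = np.array([5.0, -3.0, 1.0])            # commanded coil dipole moment [A m^2]

tau = np.cross(m, B)                      # magnetic control torque tau = m x B
# The torque always lies in the plane orthogonal to B: its component along B
# vanishes identically, which is exactly the underactuation discussed above.
along_B = float(np.dot(tau, B))
```

No choice of m can change `along_B` from zero, so the rotational degree of freedom about the field direction is momentarily uncontrollable, and time-varying control must exploit the variation of B along the orbit.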
Abstract:
Diamond is the hardest mineral, and a gemstone as well, crystallizing under extreme pressure and high temperature in deep continental regions of the Earth. Mineral inclusions in diamonds are protected by the physical stability and chemical durability of the surrounding (in fact metastable) diamond phase. Owing to the coexisting phase combination, they make it possible to study the mineral evolution during which the inclusions and the diamonds crystallized. The phase combinations of diamond with chrome pyrope, chrome diopside, chromite, olivine, graphite, and enstatite coexisting (partly in touching contact) with chrome-pyrope inclusions were identified and characterized in twenty-nine diamond samples from six localities in South Africa (the Premier, Koffiefontein, De Beers Pool, Finsch, Venetia and Koingnaas mines) and from Udachnaya (Siberia/Russia). Some of the mineral inclusions show cubo-octahedral morphology, which can develop independently of their own crystal systems. This means that they are syngenetic inclusions whose morphology was imposed by the very high form energy of the surrounding diamond. From two-dimensional measurements of the first-order characteristic Raman bands, relative residual pressures between diamond and inclusion mineral can be obtained; they have characteristic values of about 0.4 to 0.9 GPa around chrome-pyrope inclusions, 0.6 to 2.0 GPa around chrome-diopside inclusions, 0.3 to 1.2 GPa around olivine inclusions, 0.2 to 1.0 GPa around chromite inclusions, and 0.5 GPa around graphite inclusions.
The crystallographic relationships between the diamonds and their monomineralic inclusions were investigated by quantifying the angular correlations between the [111] direction of the diamonds and specifically selected directions of their mineral inclusions. The angular correlations between diamond [111] and chrome pyrope [111] or chromite [111] show the smallest deviations, from 2.2 to 3.4. The chrome-diopside and olivine inclusions show misorientation values relative to diamond [111] of up to 10.2 and 12.9 for chrome diopside [010] and olivine [100], respectively. The chemical compositions of nine inclusions exposed by oriented polishing (three chrome-pyrope inclusions from the Koffiefontein, Finsch and Venetia mines, two of which coexist in contact with enstatite; one chromite from Udachnaya (Siberia/Russia); three chrome diopsides from Koffiefontein, Koingnaas and Udachnaya (Siberia/Russia); and two olivine inclusions from De Beers Pool and Koingnaas) were analyzed by EPMA and LA-ICP-MS. On the basis of their chemical compositions, the mineral inclusions in the diamonds studied in this work can be assigned to the peridotitic suite. Geothermobarometric investigations were possible owing to the touching coexistence of chrome pyrope and enstatite in individual diamonds. Average temperatures and pressures of formation are interpreted as about 1087 (± 15) °C and 5.2 (± 0.1) GPa for diamond DHK6.2 from the Koffiefontein mine, and about 1041 (± 5) °C and 5.0 (± 0.1) GPa for diamond DHF10.2 from the Finsch mine, respectively.
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cut of thin single and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous works with the more accurate calculation of optical absorption and shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper are found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. 
An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
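The one-dimensional steady-state model mentioned above rests on a power balance: the absorbed beam power must supply the enthalpy of the material removed per unit time. A minimal single-layer sketch with assumed aluminium properties (illustrative values, not the thesis data, and ignoring conduction losses and multi-layer optics) is:

```python
# Minimal single-layer power-balance sketch (assumed values, not the thesis data):
# absorbed beam power heats and vaporises the material swept out of the kerf.
eta, P = 0.4, 50.0          # absorptivity and beam power [W]
v, w = 0.5, 100e-6          # scan speed [m/s] and kerf width [m]
rho, cp = 2700.0, 900.0     # aluminium density [kg/m^3] and specific heat [J/(kg K)]
dT, Lv = 2200.0, 1.05e7     # temperature rise to vaporisation [K], latent heat [J/kg]

# Power balance: eta*P = rho * v * w * d * (cp*dT + Lv)  =>  solve for depth d
d = eta * P / (v * w * rho * (cp * dT + Lv))
print(f"steady-state incision depth ~ {d * 1e6:.1f} um")
```

The full model in the thesis extends this balance with layer-dependent optical absorption and heat flow in the direction of laser motion, so the sketch only conveys the scaling: depth is proportional to absorbed power and inversely proportional to scan speed.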
Abstract:
A two-dimensional model to analyze the distribution of magnetic fields in the airgap of PM electrical machines is studied. A numerical algorithm for the non-linear magnetic analysis of multiphase surface-mounted PM machines with semi-closed slots is developed, based on the equivalent magnetic circuit method. A modular geometry, whose basic element can be duplicated, makes it possible to model any winding distribution. Compared with FEA, the method reduces computing time and allows parameter values to be changed directly in a user interface, without re-designing the model. The output torque and the radial forces acting on the moving part of the machine can be calculated. In addition, an analytical model for the calculation of radial forces in multiphase bearingless surface-mounted permanent magnet synchronous motors (SPMSM) is presented. It predicts the amplitude and direction of the force as functions of the torque current, the levitation current and the rotor position. It is based on the space-vector method, which also allows the machine to be analyzed during transients. The calculations are carried out by expanding the analytical functions in Fourier series, taking all possible interactions between stator and rotor mmf harmonic components into account; since the model is parametrized, the effects of the electrical and geometrical quantities of the machine can be analyzed. The model is implemented in the design of a control system for bearingless machines, as an accurate electromagnetic model integrated in a three-dimensional mechanical model in which one end of the motor shaft is constrained, to simulate the presence of a mechanical bearing, while the other end is free, supported only by the radial forces developed by the interaction between the magnetic fields, thus realizing a bearingless system with three degrees of freedom. The complete model represents the design of the experimental system to be realized in the laboratory.
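The radial-force mechanism in a bearingless machine can be illustrated numerically: a net radial force appears when the airgap flux density contains harmonics whose pole-pair numbers differ by one, as happens when a levitation winding adds a p±1 field to the main p-pole field. The following sketch uses illustrative amplitudes and dimensions (not from the thesis) and integrates the radial Maxwell stress over the rotor surface.

```python
import numpy as np

MU0 = 4e-7 * np.pi
theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)

# Airgap flux density as a truncated Fourier series (illustrative amplitudes):
# a main p=2 field plus a small p+1=3 levitation harmonic.
B = 0.9 * np.cos(2 * theta) + 0.05 * np.cos(3 * theta)

# Radial Maxwell stress integrated over the rotor surface (radius r, stack length L)
r, L = 0.05, 0.10
sigma = B ** 2 / (2.0 * MU0)
Fx = float((sigma * np.cos(theta)).mean() * 2.0 * np.pi * r * L)
Fy = float((sigma * np.sin(theta)).mean() * 2.0 * np.pi * r * L)
# Only the cross term between the p and p+1 harmonics survives the integration,
# producing a net force along x; each harmonic alone integrates to zero.
```

Shifting the phase of the levitation harmonic rotates the force direction, which is why amplitude and direction can be commanded through the levitation current, as the analytical model above does via Fourier-series expansion of the mmf interactions.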
Abstract:
During the last decades, magnetic circular dichroism (MCD) has attracted much interest and evolved into various experimental methods for the investigation of magnetic thin films. For example, synchrotron-based X-ray magnetic circular dichroism (XMCD) provides the absolute values of spin and orbital magnetic moments. It thereby benefits from large asymmetry values of more than 30% due to the excitation of atomic core levels. Similarly large values are also expected for threshold photoemission magnetic circular dichroism (TPMCD). Using lasers with photon energies in the range of the sample work function, this method gives access to the occupied electronic structure close to the Fermi level. However, except for the case of Ni(001), only a few studies on TPMCD exist, which moreover reveal much smaller asymmetries than XMCD measurements. The basic physical mechanisms of TPMCD are also not satisfactorily understood. In this work we therefore investigate TPMCD in one- and two-photon photoemission (1PPE and 2PPE) for ferromagnetic Heusler alloys and ultrathin Co films using ultrashort pulsed laser light. The observed dichroism is explained by a non-conventional photoemission model using spin-resolved band-structure calculations and linear response theory. For the two Heusler alloys Ni2MnGa and Co2FeSi we give first evidence of TPMCD in the regime of two-photon photoemission. Systematic investigations of the general properties of TPMCD in 1PPE and 2PPE are carried out on ultrathin Co films grown on Pt(111). Here, photon-energy dependent measurements reveal asymmetries of 1.9% in 1PPE and 11.7% in 2PPE. TPMCD measurements at decreased work function yield even larger asymmetries of 6.2% (1PPE) and 17% (2PPE), respectively. This demonstrates that enlarged asymmetries are also attainable for the TPMCD effect on Co(111). Furthermore, we find that the TPMCD asymmetry is bulk-sensitive for both 1PPE and 2PPE.
This means that the basic mechanism leading to the observed dichroism must be connected to Co bulk properties; surface effects do not play a crucial role. Finally, the enhanced TPMCD asymmetries in 2PPE compared to the 1PPE case are traced back to the dominant influence of the first excitation step and the existence of a real intermediate state. The observed TPMCD asymmetries cannot be interpreted by conventional photoemission theory which only considers direct interband transitions in the direction of observation (Γ-L). For Co(111), these transitions lead to evanescent final states. The excitation to such states, however, is incompatible with the measured bulk-sensitivity of the asymmetry. Therefore, we generalize this model by proposing the TPMCD signal to arise mostly from direct interband transitions in crystallographic directions other than (Γ-L). The necessary additional momentum transfer to the excited electrons is most probably provided by electron-phonon or -magnon scattering processes. Corresponding calculations on the basis of this model are in reasonable agreement with the experimental results so that this approach represents a promising tool for a quantitative description of the TPMCD effect. The present findings encourage an implementation of our experimental technique to time- and spatially-resolved photoemission electron microscopy, thereby enabling a real time imaging of magnetization dynamics of single excited states in a ferromagnetic material on a femtosecond timescale.
Abstract:
This thesis reports on the experimental realization of nanofiber-based spectroscopy of organic molecules. The light guided by subwavelength-diameter optical nanofibers exhibits a pronounced evanescent field surrounding the fiber, which yields high excitation and emission-collection efficiencies for molecules on or near the fiber surface. The optical nanofibers used for the experiments presented in this thesis are realized as the sub-wavelength diameter waist of a tapered optical fiber (TOF). The efficient transfer of the light from the nanofiber waist to the unprocessed part of the TOF depends critically on the geometric shape of the TOF transitions, which represent a nonuniformity of the TOF. This nonuniformity can cause losses due to coupling of the fundamental guided mode to other modes which are not guided by the taper over its whole length. In order to quantify the loss from the fundamental mode due to tapering, I have solved the coupled local mode equations in the approximation of weak guidance for the three-layer system consisting of the fiber core and cladding as well as the surrounding vacuum or air, assuming the taper shape of the TOFs used for the experiments presented in this thesis. Moreover, I have empirically studied the influence of the TOF geometry on its transmission spectra and, based on the results, I have designed a nanofiber-waist TOF with broadband transmission for experiments with organic molecules. As an experimental demonstration of the high sensitivity of nanofiber-based surface spectroscopy, I have performed various absorption and fluorescence spectroscopy measurements on the model system 3,4,9,10-perylene-tetracarboxylic dianhydride (PTCDA). The measured homogeneous and inhomogeneous broadening of the spectra due to the interaction of the dielectric surface of the nanofiber with the surface-adsorbed molecules agrees well with the values theoretically expected and typical for molecules on surfaces.
Furthermore, the self-absorption effects due to reabsorption of the emitted fluorescence light by circumjacent surface-adsorbed molecules distributed along the fiber waist have been analyzed and quantified. With time-resolved measurements, the reorganization of PTCDA molecules into crystalline films and excimers can be observed and is shown to be strongly catalyzed by the presence of water on the nanofiber surface. Moreover, the formation of charge-transfer complexes due to the interaction with localized surface defects has been studied. The collection efficiency of the molecular emission by the guided fiber mode has been determined, by interlaced measurements of absorption and fluorescence spectra, to be about 10% in one direction of the fiber. The high emission-collection efficiency makes optical nanofibers a well-suited tool for experiments with dye molecules embedded in small organic crystals. As a first experimental realization of this approach, terrylene-doped para-terphenyl crystals attached to the nanofiber waist of a TOF have been studied at cryogenic temperatures via fluorescence and fluorescence-excitation spectroscopy. The statistical fine structure of the fluorescence-excitation spectrum for a specific sample has been observed and used to estimate that, for large detunings from resonance, on average as few as 9 molecules have center frequencies within one homogeneous width of the laser wavelength. The homogeneous linewidth of the transition could be estimated to be about 190 MHz at 4.5 K.
Abstract:
Saccadic performance depends on the requirements of the current trial, but also may be influenced by other trials in the same experiment. This effect of trial context has been investigated most for saccadic error rate and reaction time but seldom for the positional accuracy of saccadic landing points. We investigated whether the direction of saccades towards one goal is affected by the location of a second goal used in other trials in the same experimental block. In our first experiment, landing points ('endpoints') of antisaccades but not prosaccades were shifted towards the location of the alternate goal. This spatial bias decreased with increasing angular separation between the current and alternative goals. In a second experiment, we explored whether expectancy about the goal location was responsible for the biasing of the saccadic endpoint. For this, we used a condition where the saccadic goal randomly changed from one trial to the next between locations on, above or below the horizontal meridian. We modulated the prior probability of the alternate-goal location by showing cues prior to stimulus onset. The results showed that expectation about the possible positions of the saccadic goal is sufficient to bias saccadic endpoints and can account for at least part of this phenomenon of 'alternate-goal bias'.
Abstract:
Neural dynamic processes correlated over several time scales are found in vivo, in stimulus-evoked as well as spontaneous activity, and are thought to affect the way sensory stimulation is processed. Despite their potential computational consequences, a systematic description of the presence of multiple time scales in single cortical neurons is lacking. In this study, we injected fast spiking and pyramidal (PYR) neurons in vitro with long-lasting episodes of step-like and noisy, in-vivo-like current. Several processes shaped the time course of the instantaneous spike frequency, which could be reduced to a small number (1-4) of phenomenological mechanisms, either reducing (adapting) or increasing (facilitating) the neuron's firing rate over time. The different adaptation/facilitation processes cover a wide range of time scales, ranging from initial adaptation (<10 ms, PYR neurons only), to fast adaptation (<300 ms), early facilitation (0.5-1 s, PYR only), and slow (or late) adaptation (order of seconds). These processes are characterized by broad distributions of their magnitudes and time constants across cells, showing that multiple time scales are at play in cortical neurons, even in response to stationary stimuli and in the presence of input fluctuations. These processes might be part of a cascade of processes responsible for the power-law behavior of adaptation observed in several preparations, and may have far-reaching computational consequences that have been recently described.
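The multiple adaptation and facilitation processes described above, each with its own magnitude and time constant, can be pictured as a sum of exponentials shaping the instantaneous firing rate after stimulus onset. The values below are illustrative only; the study reports broad distributions of magnitudes and time constants across cells.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)   # time after stimulus onset [s]

# Instantaneous firing rate shaped by a few adaptation/facilitation processes
# with widely separated time constants (illustrative magnitudes and taus):
r_inf = 20.0                               # steady-state rate [Hz]
rate = (r_inf
        + 30.0 * np.exp(-t / 0.2)          # fast adaptation (< 300 ms)
        - 5.0 * np.exp(-t / 0.8)           # early facilitation (0.5-1 s)
        + 10.0 * np.exp(-t / 4.0))         # slow (late) adaptation (seconds)
```

Superpositions of exponentials with widely spread time constants are also how a power-law decay of adaptation, mentioned at the end of the abstract, can emerge as a cascade of such processes.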
Abstract:
In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y, and propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case where X and Y are continuous variables at equally spaced times and assume a linear model. We define matching estimators b(u) that correspond to pairs of observations with specific lag u. Controlling for a smooth function of time, St, using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u) with weights that involve two components: the assumptions about the smoothness of St and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, the excess variation in the b(u) is evidence of confounding by U. We use the plot of b(u) versus lag u, the lagged-estimator plot (LEP), to diagnose the influence of U on the effect of X on Y. We use appropriate linear combinations of the b(u), or extrapolate to b(0), to obtain novel estimators that are more robust to the influence of a smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives us a direct view of the magnitude of the estimators for each lag u and provides evidence when models did not adequately describe the data.
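A lagged-estimator plot can be sketched numerically. The function below uses one plausible concrete form of the lag-u matching estimator, the regression-through-the-origin slope of paired differences at lag u; the paper's exact definition may differ.

```python
import numpy as np

def matching_estimators(x, y, max_lag):
    """Lag-u matching estimators b(u): slope of the pair differences
    (y_t - y_{t-u}) on (x_t - x_{t-u}), regression through the origin.
    One plausible concrete form; the paper's exact definition may differ."""
    b = {}
    for u in range(1, max_lag + 1):
        dx, dy = x[u:] - x[:-u], y[u:] - y[:-u]
        b[u] = float((dx * dy).sum() / (dx ** 2).sum())
    return b

# Without unmeasured confounding, b(u) should be roughly flat in u;
# excess variation of b(u) across lags would suggest a smooth confounder U.
rng = np.random.default_rng(2)
x = rng.normal(size=5000)
y = 0.8 * x + 0.3 * rng.normal(size=5000)   # true effect 0.8, no confounder
b = matching_estimators(x, y, max_lag=10)
```

Plotting `b[u]` against u gives the LEP; in this unconfounded simulation the points scatter tightly around the true effect at every lag.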
Abstract:
Multi-site time series studies of air pollution and mortality and morbidity have figured prominently in the literature as comprehensive approaches for estimating acute effects of air pollution on health. Hierarchical models are generally used to combine site-specific information and estimate pooled air pollution effects, taking into account both within-site statistical uncertainty and across-site heterogeneity. Within a site, characteristics of time series data of air pollution and health (small pollution effects, missing data, highly correlated predictors, non-linear confounding, etc.) make modelling all sources of uncertainty challenging. One potential consequence is underestimation of the statistical variance of the site-specific effects to be combined. In this paper we investigate the impact of variance underestimation on the pooled relative rate estimate. We focus on two-stage normal-normal hierarchical models and on underestimation of the statistical variance at the first stage. By mathematical considerations and simulation studies, we found that variance underestimation does not affect the pooled estimate substantially. However, some sensitivity of the pooled estimate to variance underestimation is observed when the number of sites is small and the underestimation is severe. These simulation results are applicable to any two-stage normal-normal hierarchical model for combining information from site-specific results, and they can be easily extended to more general hierarchical formulations. We also examined the impact of variance underestimation on the national average relative rate estimate from the National Morbidity Mortality Air Pollution Study, and we found that variance underestimation of as much as 40% has little effect on the national average.
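The second-stage pooling in a two-stage normal-normal model is an inverse-variance weighted mean, which makes the robustness result easy to illustrate. The numbers below are illustrative only, not from the NMMAPS data.

```python
import numpy as np

def pooled(beta, v, tau2):
    """Second-stage normal-normal pooling: inverse-variance weighted mean
    with weights 1/(v_i + tau2), where v_i is the within-site statistical
    variance and tau2 the across-site heterogeneity variance."""
    w = 1.0 / (np.asarray(v) + tau2)
    est = float((w * np.asarray(beta)).sum() / w.sum())
    return est, float(1.0 / w.sum())

# Illustrative numbers: underestimating the first-stage variances by 40%
# barely moves the pooled point estimate when heterogeneity dominates.
beta = np.array([1.0, 1.2, 0.9, 1.1])    # site-specific relative rate estimates
v = np.array([0.02, 0.05, 0.03, 0.08])   # true first-stage variances
est_true, _ = pooled(beta, v, tau2=0.10)
est_under, _ = pooled(beta, 0.6 * v, tau2=0.10)
```

Because the weights are 1/(v_i + tau2), shrinking every v_i changes the weights only mildly when tau2 is the dominant term, so the pooled point estimate is insensitive to first-stage variance underestimation, even though its reported variance is not.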
Abstract:
A time series is a sequence of observations made over time. Examples in public health include daily ozone concentrations, weekly admissions to an emergency department or annual expenditures on health care in the United States. Time series models are used to describe the dependence of the response at each time on predictor variables including covariates and possibly previous values in the series. Time series methods are necessary to account for the correlation among repeated responses over time. This paper gives an overview of time series ideas and methods used in public health research.
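A minimal example of the dependence such models describe is a first-order autoregressive, AR(1), series, where each observation depends on the previous one plus noise. This sketch (assumed coefficient, simulated data) shows why ignoring the serial correlation would be a mistake: the lag-1 dependence is strong and estimable.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n = 0.7, 5000          # true autoregressive coefficient and series length
y = np.zeros(n)
for t in range(1, n):
    # AR(1): today's value depends on yesterday's value plus fresh noise
    y[t] = phi * y[t - 1] + rng.normal()

# Conditional least-squares estimate of phi from the lag-1 regression of y_t on y_{t-1}
phi_hat = float((y[1:] @ y[:-1]) / (y[:-1] @ y[:-1]))
```

In a public-health setting, y might be daily pollutant concentrations or admission counts (after suitable transformation); standard errors computed as if the observations were independent would be badly miscalibrated for such a series.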