927 results for Arrow Of Time
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Masticatory muscle contraction causes both jaw movement and tissue deformation during function. Natural chewing data from 25 adult miniature pigs were studied by means of time series analysis. The data set included simultaneous recordings of electromyography (EMG) from bilateral masseter (MA), zygomaticomandibularis (ZM) and lateral pterygoid muscles, bone surface strains from the left squamosal bone (SQ), condylar neck (CD) and mandibular corpus (MD), and linear deformation of the capsule of the jaw joint measured bilaterally using differential variable reluctance transducers. Pairwise comparisons were examined by calculating the cross-correlation functions. Jaw-adductor muscle activity of MA and ZM was found to be highly cross-correlated with CD and SQ strains and weakly with MD strain. No muscle’s activity was strongly linked to capsular deformation of the jaw joint, nor were bone strains and capsular deformation tightly linked. Homologous muscle pairs showed the greatest synchronization of signals, but the signals themselves were not significantly more correlated than those of non-homologous muscle pairs. These results suggested that bone strains and capsular deformation are driven by different mechanical regimes. Muscle contraction and ensuing reaction forces are probably responsible for bone strains, whereas capsular deformation is more likely a product of movement.
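The pairwise cross-correlation analysis described above can be sketched for a single channel pair. This is a minimal illustration with synthetic signals standing in for an EMG and a strain recording; the signals, the 5-sample lag, and the noise level are assumptions, not the study's data:

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Normalized cross-correlation of two equal-length signals at lags
    -max_lag..max_lag. A peak at a negative lag means x leads y."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    ccf = []
    for k in lags:
        if k >= 0:
            ccf.append(np.dot(x[k:], y[:n - k]) / n)   # E[x(t+k) y(t)]
        else:
            ccf.append(np.dot(x[:n + k], y[-k:]) / n)
    return lags, np.array(ccf)

# Hypothetical example: a "strain" signal that trails the "EMG" by 5 samples
rng = np.random.default_rng(0)
emg = rng.standard_normal(1000)
strain = np.roll(emg, 5) + 0.1 * rng.standard_normal(1000)
lags, ccf = cross_correlation(emg, strain, 20)
peak_lag = lags[np.argmax(ccf)]   # negative: EMG leads strain
```

A strong, sharp peak in the cross-correlation function (as between MA/ZM activity and CD/SQ strains in the study) indicates tight temporal coupling; a flat function (as for capsular deformation) indicates weak coupling.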
Abstract:
Studies of subjective time have adopted different methods to understand different processes of time perception. Four sculptures, with implied movement ranked as 1.5-, 3.0-, 4.5-, and 6.0-point stimuli on the Body Movement Ranking Scale, were randomly presented to 42 university students untrained in visual arts and ballet. Participants were allowed to observe the images for any length of time (exploration time) and, immediately after each image was observed, recorded the duration as they perceived it. The temporal-ratio results (exploration time/time estimation) showed that the exploration time of the images also affected the perception of time, i.e., the subjective time for sculptures representing implied movement was overestimated.
Abstract:
Electrothermomechanical MEMS are essentially microactuators that operate based on the thermoelastic effect induced by Joule heating of the structure. They can be easily fabricated and require relatively low excitation voltages. However, the actuation time of an electrothermomechanical microdevice is longer than that of devices based on electrostatic and piezoelectric actuation principles. Thus, in this research, we propose an optimization framework, based on the topology optimization method applied to transient problems, to design electrothermomechanical microactuators with reduced response times. The objective is to maximize the integral over time of the actuator's output displacement. The finite element equations that govern the time response of the actuators are provided. Furthermore, the Solid Isotropic Material with Penalization model and Sequential Linear Programming are employed. Finally, a smoothing filter is implemented to control the solution. Results for two distinct applications suggest the proposed approach can provide actuators that are more than 50% faster. (C) 2012 Elsevier B.V. All rights reserved.
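As a point of reference for the material model named above, the Solid Isotropic Material with Penalization (SIMP) interpolation can be sketched in a few lines; the penalization exponent p=3 and the stiffness bounds are conventional assumptions, not values from the paper:

```python
import numpy as np

# SIMP material interpolation: effective stiffness scales as rho**p,
# so intermediate ("gray") densities are structurally inefficient and
# the optimizer is pushed toward 0/1 designs. Emin avoids singular
# stiffness matrices for void elements.
def simp_stiffness(rho, E0=1.0, Emin=1e-9, p=3):
    rho = np.clip(rho, 0.0, 1.0)
    return Emin + rho**p * (E0 - Emin)

rho = np.array([0.0, 0.5, 1.0])
E = simp_stiffness(rho)
# A half-dense element contributes only ~12.5% of the full stiffness
```

In the transient setting described in the abstract, this interpolated stiffness (and the analogous thermal and electrical properties) enters the time-dependent finite element equations, and the densities are updated by Sequential Linear Programming.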
Abstract:
Two hundred eighty-eight 32-wk-old Hisex White laying hens were used in this research over a 10-week period, arranged in a 2 x 5 completely randomized factorial design with three replicates of eight birds per treatment. Two sources, fish oil (OP) and marine algae (AM), were combined with five DHA levels (120, 180, 240, 300 and 360 mg/100 g diet); two control groups were also included, birds fed a corn and soybean basal diet (CON) and a diet supplemented with AM (AM420), to study the effect of time (0, 2, 4, 6 and 8 weeks) on the efficiency of egg yolk fatty acid enrichment. Means varied (p<0.01) from 17.63% (OP360) to 22.08% (AM420) for total polyunsaturated fatty acids (PUFAs), and from 45.8 mg/g (OP360) and 40.37 mg/g (OP360, 4 wk) to 65.82 mg/g (AM420) and 68.79 mg/g/yolk (AM120, 8 wk) for n-6 PUFAs. Concerning the influence of sources and levels over time, mean n-3 PUFAs increased from 5.58 mg/g (AM120, 2 wk) to 14.16 mg/g (OP360, 6 wk), compared to an average of 3.34 mg n-3 PUFAs/g/yolk (CON). Mean DHA likewise increased, from 22.34 mg (CON) to 176.53 mg (mean, OP360), 187.91 mg (OP360, 8 wk) and 192.96 mg (OP360, 6 wk), and to 134.18 mg (mean, OP360), 135.79 mg (AM420, 6 wk) and 149.75 mg DHA (AM420, 8 wk) per yolk. The opposite was observed for mean AA: under the effects of source, level and time, it decreased (p<0.01) from 99.83 mg (CON) to 31.99 mg (OP360, 4 wk), and from 40.43 mg (mean, OP360) to 61.21 mg (AM420) and 71.51 mg AA/yolk (mean, AM420). Variations in mean yolk weight, from 15.75 g (OP360) to 17.08 g (AM420), in total yolk lipids, from 32.55% (AM420) to 34.08% (OP360), and in yolk fat, from 5.28 g (AM240) to 5.84 g (AM120), were not affected (p>0.05) by the treatments, sources, levels and times studied.
Starting at week 2, the hens increased the level of n-3 PUFAs in the egg yolks; the increase was pronounced (p<0.01) up to week 4, after which n-3 PUFA levels tended to stabilize toward week 8 of the experiment, when saturation of the tissues and yolk was most complete.
Abstract:
In this work we introduce an analytical approach to the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided, and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency warping transform is computed by combining two additive operators: the first represents its nonuniform Fourier transform approximation and the second suppresses aliasing. The first operator is known to be analytically characterized and fast computable by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped, non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
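The nonuniform Fourier transform approximation mentioned above can be caricatured as interpolating a uniform FFT onto a warped frequency grid. This is only a loose sketch: the warping map and test signal are assumptions, and the aliasing-suppression operator is omitted entirely:

```python
import numpy as np

def warped_spectrum(x, warp):
    """Interpolate the magnitude of a uniform FFT onto warped
    normalized frequencies w(f), f in [0, 0.5]. Crude stand-in for a
    nonuniform Fourier transform approximation (no aliasing control)."""
    X = np.fft.rfft(x)
    f = np.linspace(0.0, 0.5, len(X))   # uniform normalized frequencies
    fw = warp(f)                        # warped frequency grid
    return fw, np.interp(fw, f, np.abs(X))

# Example warping map: compress low frequencies, expand high ones
warp = lambda f: f**2 / 0.5
t = np.arange(256)
x = np.sin(2 * np.pi * 0.25 * t)        # tone at normalized frequency 0.25
fw, S = warped_spectrum(x, warp)
```

In the warped representation, the tone still appears at (warped) frequency 0.25, but the tiling of the frequency axis around it is nonuniform, which is the flexibility the abstract refers to.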
Abstract:
In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. Indeed, it is natural to assume that time influences the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped in time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the E-M algorithm. We test the finite-sample properties of the estimators and the validity of our purpose-written R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
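The two-level structure described above (items as level-1 units grouped in auction dates as level-2 units) can be illustrated with simulated data. This sketch uses a simple balanced-design moment estimator of the variance components rather than the paper's E-M-based maximum likelihood, and all numbers are assumptions:

```python
import numpy as np

# Toy two-level data: log prices of artworks grouped by auction date,
# with a date-level random effect (between-date sd 0.5) and an
# item-level residual (within-date sd 1.0).
rng = np.random.default_rng(1)
J, n = 200, 50                       # auction dates, items per date
sigma_b, sigma_w = 0.5, 1.0
date_effect = rng.normal(0.0, sigma_b, J)
log_price = date_effect[:, None] + rng.normal(0.0, sigma_w, (J, n))

# Balanced-ANOVA moment estimators of the variance components
within_var = log_price.var(axis=1, ddof=1).mean()                 # ~ sigma_w**2
between_var = log_price.mean(axis=1).var(ddof=1) - within_var / n # ~ sigma_b**2
```

A classic multilevel model treats the date effects as i.i.d.; the model proposed in the abstract instead lets these second-level random effects depend on time, capturing dynamics in the auction-date grouping.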
Abstract:
Neural dynamic processes correlated over several time scales are found in vivo, in stimulus-evoked as well as spontaneous activity, and are thought to affect the way sensory stimulation is processed. Despite their potential computational consequences, a systematic description of the presence of multiple time scales in single cortical neurons is lacking. In this study, we injected fast spiking and pyramidal (PYR) neurons in vitro with long-lasting episodes of step-like and noisy, in-vivo-like current. Several processes shaped the time course of the instantaneous spike frequency, which could be reduced to a small number (1-4) of phenomenological mechanisms, either reducing (adapting) or increasing (facilitating) the neuron's firing rate over time. The different adaptation/facilitation processes cover a wide range of time scales, ranging from initial adaptation (<10 ms, PYR neurons only), to fast adaptation (<300 ms), early facilitation (0.5-1 s, PYR only), and slow (or late) adaptation (order of seconds). These processes are characterized by broad distributions of their magnitudes and time constants across cells, showing that multiple time scales are at play in cortical neurons, even in response to stationary stimuli and in the presence of input fluctuations. These processes might be part of a cascade of processes responsible for the power-law behavior of adaptation observed in several preparations, and may have far-reaching computational consequences that have been recently described.
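One of the adaptation processes described above can be characterized, in a simplified way, by fitting an exponentially decaying instantaneous firing rate to step-response data. The rates, noise level, and 100 ms time constant below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

# Single adapting process: rate decays from r0 toward r_inf with time
# constant tau after stimulus onset (a real neuron may superimpose
# 1-4 such adapting/facilitating processes on different time scales).
def adapting_rate(t, r0, r_inf, tau):
    return r_inf + (r0 - r_inf) * np.exp(-t / tau)

t = np.linspace(0.0, 1.0, 200)                    # seconds after step onset
rng = np.random.default_rng(2)
rate = adapting_rate(t, 80.0, 20.0, 0.1) + rng.normal(0.0, 1.0, t.size)

(r0_hat, rinf_hat, tau_hat), _ = curve_fit(
    adapting_rate, t, rate, p0=[60.0, 10.0, 0.2])
```

Fitting a sum of several such exponentials (plus facilitating terms with positive slope) is one way to extract the broad distribution of magnitudes and time constants the abstract reports across cells.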
Abstract:
In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y, and we propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case, where X and Y are continuous variables at equally spaced times, and assume a linear model. We define matching estimators b(u) that correspond to pairs of observations with a specific lag u. Controlling for a smooth function of time, St, using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u) whose weights involve two components: the assumptions about the smoothness of St and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, excess variation in the b(u) is evidence of confounding by U. We use the plot of b(u) versus lag u, the lagged-estimator plot (LEP), to diagnose the influence of U on the effect of X on Y. We use an appropriate linear combination of the b(u), or extrapolate to b(0), to obtain novel estimators that are more robust to the influence of a smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives a direct view of the magnitude of the estimators at each lag u and provides evidence when a model does not adequately describe the data.
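The matching estimators b(u) can be sketched for the simplest linear, confounder-free case: regress lag-u differences of Y on lag-u differences of X. The series, the effect size of 2.0, and the lag range are assumptions; a flat lagged-estimator plot near the true effect is the expected no-confounding pattern:

```python
import numpy as np

def b_u(x, y, u):
    """Lag-u matching estimator: slope of lag-u differences of y on
    lag-u differences of x (smooth confounders mostly cancel in
    short-lag differences)."""
    dx, dy = x[u:] - x[:-u], y[u:] - y[:-u]
    return np.dot(dx, dy) / np.dot(dx, dx)

rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)           # exposure series (no confounding here)
y = 2.0 * x + rng.standard_normal(n) # outcome with true effect 2.0
lep = np.array([b_u(x, y, u) for u in range(1, 21)])  # values for the LEP
```

With a smooth unmeasured confounder added to both x and y, the b(u) would drift away from the true effect as u grows, which is exactly the diagnostic pattern the LEP is designed to reveal.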
Abstract:
Multi-site time series studies of air pollution and mortality and morbidity have figured prominently in the literature as comprehensive approaches for estimating the acute effects of air pollution on health. Hierarchical models are generally used to combine site-specific information and estimate pooled air pollution effects, taking into account both within-site statistical uncertainty and across-site heterogeneity. Within a site, characteristics of time series data on air pollution and health (small pollution effects, missing data, highly correlated predictors, nonlinear confounding, etc.) make modelling all sources of uncertainty challenging. One potential consequence is underestimation of the statistical variance of the site-specific effects to be combined. In this paper we investigate the impact of variance underestimation on the pooled relative rate estimate. We focus on two-stage normal-normal hierarchical models and on underestimation of the statistical variance at the first stage. Through mathematical considerations and simulation studies, we found that variance underestimation does not affect the pooled estimate substantially. However, some sensitivity of the pooled estimate to variance underestimation is observed when the number of sites is small and the underestimation is severe. These simulation results are applicable to any two-stage normal-normal hierarchical model for combining site-specific results, and they can easily be extended to more general hierarchical formulations. We also examined the impact of variance underestimation on the national average relative rate estimate from the National Morbidity Mortality Air Pollution Study, and we found that variance underestimation of as much as 40% has little effect on the national average.
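The second stage of a two-stage normal-normal hierarchical model can be sketched as inverse-variance weighted pooling of the site-specific estimates; the effects, first-stage variances, and heterogeneity value below are illustrative assumptions:

```python
import numpy as np

def pooled_estimate(beta, v, tau2):
    """Pool site-specific effects beta_i with first-stage variances v_i
    and across-site heterogeneity tau2: weights w_i = 1/(v_i + tau2)."""
    w = 1.0 / (np.asarray(v) + tau2)
    return np.sum(w * np.asarray(beta)) / np.sum(w)

beta = np.array([0.8, 1.2, 1.0, 0.9, 1.1])    # site-specific effect estimates
v = np.array([0.04, 0.05, 0.03, 0.06, 0.04])  # first-stage statistical variances
est = pooled_estimate(beta, v, tau2=0.02)

# Severe variance underestimation (here: halving every v_i) changes all
# the weights, but the pooled point estimate moves only marginally
est_under = pooled_estimate(beta, v / 2.0, tau2=0.02)
```

This mirrors the abstract's finding: because underestimating the v_i rescales the weights fairly uniformly, the pooled estimate is robust unless the number of sites is small and the underestimation is severe.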
Abstract:
A time series is a sequence of observations made over time. Examples in public health include daily ozone concentrations, weekly admissions to an emergency department or annual expenditures on health care in the United States. Time series models are used to describe the dependence of the response at each time on predictor variables including covariates and possibly previous values in the series. Time series methods are necessary to account for the correlation among repeated responses over time. This paper gives an overview of time series ideas and methods used in public health research.
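A minimal example of the serial correlation such methods must account for is a first-order autoregressive model; the series below, meant to evoke something like weekly admissions on a continuous scale, is an assumption:

```python
import numpy as np

# AR(1) model: y_t = c + phi * y_{t-1} + e_t. Successive observations
# are correlated (correlation phi at lag 1), so i.i.d. methods would
# understate uncertainty.
rng = np.random.default_rng(4)
phi, c, n = 0.7, 30.0, 2000
y = np.empty(n)
y[0] = c / (1.0 - phi)               # start at the stationary mean
for t in range(1, n):
    y[t] = c + phi * y[t - 1] + rng.normal(0.0, 5.0)

# Least-squares estimate of phi from the lag-1 regression of y_t on y_{t-1}
phi_hat = np.polyfit(y[:-1], y[1:], 1)[0]
```

Recovering phi close to its true value illustrates the dependence structure; regression on covariates with AR errors, or explicit lagged responses, are the standard ways public health time series models handle it.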
Abstract:
It has been established that successful pancreas transplantation in Type 1 (insulin-dependent) diabetic patients results in normal but exaggerated phasic glucose-induced insulin secretion, normal intravenous glucose disappearance rates, improved glucose recovery from insulin-induced hypoglycaemia, improved glucagon secretion during insulin-induced hypoglycaemia, but no alterations in pancreatic polypeptide responses to hypoglycaemia. However, previous reports have not segregated the data in terms of the length of time following successful transplantation and very little prospective data collected over time in individual patients has been published. This article reports that in general there are no significant differences in the level of improvement when comparing responses as early as three months post-operatively up to as long as two years post-operatively when examining the data cross-sectionally in patients who have successfully maintained their allografts. Moreover, this remarkable constancy in pancreatic islet function is also seen in a smaller group of patients who have been examined prospectively at various intervals post-operatively. It is concluded that successful pancreas transplantation results in remarkable improvements in Alpha and Beta cell but not PP cell function that are maintained for at least one to two years.
Abstract:
Quantitative meta-analyses of randomized clinical trials investigating the specific therapeutic efficacy of homeopathic remedies have yielded statistically significant differences compared to placebo. Since the remedies used contained mostly only very low concentrations of pharmacologically active compounds, these effects cannot be accounted for within the framework of current pharmacology. Theories to explain the clinical effects of homeopathic remedies are partially based upon changes in diluent structure. To investigate the latter, we measured for the first time high-field (600/500 MHz) 1H T1 and T2 nuclear magnetic resonance relaxation times of H2O in homeopathic preparations, with concurrent contamination control by inductively coupled plasma mass spectrometry (ICP-MS). Homeopathic preparations of quartz (10c–30c, n = 21, corresponding to iterative dilutions of 100^-10 to 100^-30), sulfur (13x–30x, n = 18, 10^-13 to 10^-30), and copper sulfate (11c–30c, n = 20, 100^-11 to 100^-30) were compared to n = 10 independent controls each (analogously agitated dilution medium) in randomized and blinded experiments. In none of the samples did the concentration of any element analyzed by ICP-MS exceed 10 ppb. In the first measurement series (600 MHz), there was a significant increase in T1 for all samples as a function of time, and there were no significant differences between homeopathic potencies and controls. In the second measurement series (500 MHz), 1 year after preparation, we observed statistically significantly increased T1 relaxation times for homeopathic sulfur preparations compared to controls. Fifteen out of 18 correlations between sample triplicates were higher for controls than for homeopathic preparations. No conclusive explanation for these phenomena can be given at present. Possible hypotheses involve differential leaching from the measurement vessel walls or a change in water molecule dynamics, i.e., in rotational correlation time and/or diffusion.
Homeopathic preparations thus may exhibit specific physicochemical properties that need to be determined in detail in future investigations.
Abstract:
A viscosity model calibrated on process data is proposed and applied to predict the viscosity of a polyamide 12 (PA12) polymer melt as a function of time, temperature and shear rate. In a first step, the viscosity model was derived from experimental data. It is based primarily on the three-parameter Carreau approach, extended with two additional shift factors. The temperature dependence of the viscosity is accounted for by the Arrhenius shift factor aT. A further shift factor aSC (structural change) is introduced, describing the structural change of PA12 caused by the process conditions during laser sintering. This structural change was observed as a significant increase in viscosity. It was concluded that this viscosity increase is attributable to molecular weight build-up and can be understood as post-condensation. Depending on the time and temperature conditions, the viscosity was found to approach an irreversible limit exponentially as a consequence of the molecular weight build-up. The rate of this post-condensation is time- and temperature-dependent. It is assumed that the powder-bed temperature causes molecular weight build-up and hence chain extension. This progressive growth in chain length reduces molecular mobility and suppresses further post-condensation. The shift factor aSC expresses this physico-chemical picture and contains two additional parameters: aSC,UL corresponds to the upper viscosity limit, whereas k0 specifies the rate of structural change. It was further found useful to distinguish between a flow activation energy and a structural-change activation energy when calculating aT and aSC.
The model parameters were optimized using a genetic algorithm. Good agreement was found between calculated and measured viscosities, so the viscosity model is able to predict the viscosity of a PA12 polymer melt under the combined time and temperature influence of laser sintering. In a second step, the model was applied to calculate the viscosity during the laser-sintering process as a function of energy density. For this purpose, process data such as melt temperature and exposure time, measured online with a high-speed thermographic camera, were used. Finally, the influence of the structural change on the viscosity level in the process was demonstrated.
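The model described above can be sketched as a Carreau law scaled by the two shift factors: an Arrhenius factor aT for temperature and a structural-change factor aSC that grows toward an upper limit aSC,UL at rate k0. All parameter values below are placeholders, not the calibrated values from the study:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def a_T(T, T_ref, E_a):
    """Arrhenius temperature shift factor (flow activation energy E_a)."""
    return np.exp(E_a / R * (1.0 / T - 1.0 / T_ref))

def a_SC(t, a_SC_UL, k0):
    """Structural-change shift: saturating exponential growth from 1
    toward the upper viscosity limit a_SC_UL with rate k0 (1/s)."""
    return 1.0 + (a_SC_UL - 1.0) * (1.0 - np.exp(-k0 * t))

def viscosity(gdot, T, t, eta0=3000.0, lam=0.05, n=0.6,
              T_ref=473.15, E_a=60e3, a_SC_UL=3.0, k0=1e-3):
    """Carreau shear-thinning viscosity scaled by a_T and a_SC.
    All numeric defaults are illustrative assumptions."""
    carreau = eta0 * (1.0 + (lam * gdot) ** 2) ** ((n - 1.0) / 2.0)
    return a_T(T, T_ref, E_a) * a_SC(t, a_SC_UL, k0) * carreau

eta_fresh = viscosity(gdot=10.0, T=473.15, t=0.0)
eta_aged = viscosity(gdot=10.0, T=473.15, t=3600.0)  # after 1 h in the powder bed
```

The saturating form of a_SC captures the reported behavior: post-condensation raises the viscosity exponentially toward an irreversible limit, with the rate itself depending on time and temperature.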
Abstract:
Frequency-transformed resting EEG data have been widely used to describe normal and abnormal brain functional states as a function of the spectral power in different frequency bands. This has yielded a series of clinically relevant findings. However, by transforming the EEG into the frequency domain, the initially excellent time resolution of time-domain EEG is lost. Topographic time-frequency decomposition is a novel computerized EEG analysis method that combines previously available techniques from time-domain spatial EEG analysis and time-frequency decomposition of single-channel time series. It yields a new, physiologically and statistically plausible topographic time-frequency representation of human multichannel EEG. The original EEG is accounted for by the coefficients of a large set of user-defined, EEG-like time series, which are optimized for maximal spatial smoothness and minimal norm. These coefficients are then reduced to a small number of model scalp field configurations, which vary in intensity as a function of time and frequency. The result is thus a small number of EEG field configurations, each with a corresponding time-frequency (Wigner) plot. The method has several advantages: it does not assume that the data are composed of orthogonal elements, it does not assume stationarity, it produces topographic maps, and it allows the inclusion of user-defined, specific EEG elements, such as spike-and-wave patterns. After a formal introduction of the method, several examples are given, which include artificial data and multichannel EEG recorded during different physiological and pathological conditions.
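The single-channel time-frequency building block mentioned above can be caricatured with a short-time Fourier transform of one EEG-like series (the full method adds the topographic, multichannel modeling on top of such decompositions); the sampling rate and the simulated 10 Hz alpha-band burst are assumptions:

```python
import numpy as np

# Simulated single EEG channel: a 10 Hz "alpha burst" between 1 s and 3 s
fs = 250.0                                  # sampling rate, Hz
t = np.arange(0.0, 4.0, 1.0 / fs)
x = np.where((t > 1.0) & (t < 3.0), np.sin(2 * np.pi * 10.0 * t), 0.0)

# Short-time Fourier transform: 0.5 s Hann windows, 0.1 s hop
win, hop = 125, 25
w = np.hanning(win)
frames = [x[i:i + win] * w for i in range(0, len(x) - win, hop)]
S = np.abs(np.array([np.fft.rfft(f) for f in frames]))   # time x frequency
freqs = np.fft.rfftfreq(win, 1.0 / fs)
peak_freq = freqs[np.argmax(S.max(axis=0))]              # dominant frequency
```

Unlike a single whole-record spectrum, S keeps the time course: the 10 Hz energy is visible only in the frames covering the burst, which is the kind of time-frequency localization the decomposition described above preserves per scalp field configuration.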