978 results for Derived series
Abstract:
Industrial robotic manipulators can be found in most factories today. Their tasks are accomplished by actively moving, placing and assembling parts. This movement is facilitated by actuators that apply a torque in response to a command signal. The presence of friction, and possibly backlash, has instigated the development of sophisticated compensation and control methods to achieve the desired performance, be that accurate motion tracking, fast movement or contact with the environment. This thesis presents a dual drive actuator design that is capable of physically linearising friction and hence eliminates the need for complex compensation algorithms. A number of mathematical models are derived that allow the actuator dynamics to be simulated. The actuator may be constructed using geared DC motors, in which case the benefit of torque magnification is retained whilst the increased non-linear friction effects are also linearised. An additional benefit of the actuator is the high-quality, low-latency output position signal provided by differencing the two drive positions. Owing to this and the linearised nature of friction, the actuator is well suited to low-velocity, stop-start applications, micro-manipulation and even hard-contact tasks. There are, however, disadvantages to the design. When idle, the device consumes power whilst many other, single-drive actuators do not. The complexity of the models also means that parameterisation is difficult, and management of start-up conditions still poses a challenge.
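The differencing of the two drive positions described above can be sketched numerically. This is a minimal illustration with synthetic signals, not the thesis's actual drive model: the assumed decomposition into a common-mode component and a load component, and the convention that the output equals the plain difference, are hypothetical.

```python
# Minimal sketch (synthetic, hypothetical signals): the output position is
# taken here as the difference of the two drive positions, so that any
# common-mode drive motion cancels and only the load motion remains.
import numpy as np

def output_position(theta_a, theta_b):
    """Output position as the difference of the two drive positions."""
    return np.asarray(theta_a) - np.asarray(theta_b)

t = np.linspace(0.0, 1.0, 5)
common = 10.0 * t              # common-mode drive motion (cancels out)
load = np.sin(2 * np.pi * t)   # desired output motion
theta_a = common + 0.5 * load  # drive A position
theta_b = common - 0.5 * load  # drive B position

pos = output_position(theta_a, theta_b)  # recovers `load` by construction
```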
Abstract:
The Arctic is an important region in the study of climate change, but monitoring surface temperatures there is challenging, particularly in areas covered by sea ice. Here, in situ, satellite and reanalysis data were utilised to investigate whether global warming over recent decades could be better estimated by changing the way the Arctic is treated in calculating global mean temperature. Using reanalysis data as a testbed, the degree of difference arising from five techniques, based on existing temperature anomaly dataset methods, for estimating Arctic surface air temperature (SAT) anomalies over land and sea ice was investigated. Techniques that interpolated anomalies were found to produce smaller errors than non-interpolating techniques, with kriging providing the smallest errors in anomaly estimates. Similar accuracies were found for anomalies estimated from in situ meteorological station SAT records using a kriging technique. Whether additional data sources, not currently utilised in temperature anomaly datasets, would improve estimates of Arctic SAT anomalies was then investigated within the reanalysis testbed and using in situ data. For the reanalysis study, the additional input anomalies were reanalysis data sampled at supplementary data source locations over Arctic land and sea ice. For the in situ study, the additional input anomalies over sea ice were surface temperature anomalies derived from the Advanced Very High Resolution Radiometer satellite instruments. The use of additional data sources, particularly those located over sea ice in the Arctic Ocean or on islands in sparsely observed regions, can lead to substantial improvements in the accuracy of estimated anomalies: decreases in Root Mean Square Error can be up to 0.2 K for Arctic-average anomalies and more than 1 K for spatially resolved anomalies. Further improvements in accuracy may be accomplished through the use of other data sources.
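The reanalysis-testbed idea above can be sketched in a few lines: treat a gridded field as truth, sample it at a handful of "station" locations, estimate the area-average anomaly from the samples and score the estimate with RMSE. All arrays here are synthetic stand-ins for the reanalysis and station data used in the work.

```python
# Sketch of the testbed scoring: reanalysis anomalies act as truth, a sparse
# station sample yields a (non-interpolating) estimate, and RMSE against the
# true area average measures the error. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(0.5, 1.0, size=(120, 50))      # months x grid cells ("reanalysis")
station_idx = rng.choice(50, size=8, replace=False)  # sparse "station" locations

est_sampled = truth[:, station_idx].mean(axis=1)  # simple station-average estimate
arctic_mean = truth.mean(axis=1)                  # target area-average anomaly

rmse = np.sqrt(np.mean((est_sampled - arctic_mean) ** 2))  # score in K
```

A kriging estimator would replace the plain station average with a covariance-weighted interpolation, which is the step the thesis finds most accurate.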
Abstract:
Flickering is a phenomenon related to mass accretion observed among many classes of astrophysical objects. In this paper we present a study of flickering in the emission lines and the continuum of the cataclysmic variable V3885 Sgr. The flickering behavior was first analyzed through statistical analysis and the power spectra of lightcurves. Autocorrelation techniques were then employed to estimate the flickering timescale of flares. A cross-correlation study between the line and its underlying continuum variability is presented, and the cross-correlation between the photometric and spectroscopic data is also discussed. Periodograms calculated from emission-line data show a behavior similar to those obtained from photometric datasets found in the literature, with a plateau at lower frequencies and a power-law at higher frequencies. The power-law index is consistent with stochastic events. The cross-correlation study indicates the presence of a correlation between the variability in Hα and its underlying continuum. Flickering timescales derived from the photometric data were estimated at 25 min for two lightcurves and 10 min for the third. The average timescale of the line flickering is 40 min, while for its underlying continuum it drops to 20 min.
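The two diagnostics named above, periodogram and autocorrelation timescale, can be sketched on a synthetic light curve. The red-noise generator and the 1/e decorrelation criterion are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: compute a periodogram (from which plateau + power-law shapes are
# judged) and an autocorrelation function whose first drop below 1/e gives a
# crude flickering-timescale estimate. The light curve is synthetic.
import numpy as np

rng = np.random.default_rng(1)
dt = 10.0                                 # assumed cadence, seconds per sample
flux = np.cumsum(rng.normal(size=2048))   # random walk ~ stochastic flickering
flux -= flux.mean()

# Periodogram via FFT
power = np.abs(np.fft.rfft(flux)) ** 2
freqs = np.fft.rfftfreq(flux.size, d=dt)

# Autocorrelation function and 1/e decorrelation timescale
acf = np.correlate(flux, flux, mode="full")[flux.size - 1:]
acf /= acf[0]
lag_1e = np.argmax(acf < 1.0 / np.e) * dt  # first lag below 1/e, in seconds
```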
Abstract:
Many hemoglobin-derived peptides are present in mouse brain, and several of these have bioactive properties, including the hemopressins, a related series of peptides that bind to cannabinoid CB1 receptors. Although hemoglobin is a major component of red blood cells, it is also present in neurons and glia. To examine whether the hemoglobin-derived peptides in brain are similar to those present in blood and heart, we used a peptidomics approach involving mass spectrometry. Many hemoglobin-derived peptides are found only in brain and not in blood, whereas all hemoglobin-derived peptides found in heart were also seen in blood. Thus, it is likely that the majority of the hemoglobin-derived peptides detected in brain are produced from brain hemoglobin and not erythrocytes. We also examined whether the hemopressins and other major hemoglobin-derived peptides were regulated in the Cpefat/fat mouse; previously these mice were reported to have elevated levels of several hemoglobin-derived peptides. Many, but not all, of the hemoglobin-derived peptides were elevated in several brain regions of the Cpefat/fat mouse. Taken together, these findings suggest that the post-translational processing of alpha and beta hemoglobin into the hemopressins, as well as other peptides, is up-regulated in some but not all Cpefat/fat mouse brain regions.
Abstract:
This thesis consists of four manuscripts in the area of nonlinear time series econometrics, on topics of testing, modeling and forecasting nonlinear common features. The aim of the thesis is to develop new econometric contributions for hypothesis testing and forecasting in these areas. Both stationary and nonstationary time series are considered, and a definition of common features appropriate to each class is proposed. Based on the definition, a vector nonlinear time series model with common features is set up for testing for common features; once well specified, the proposed models are available for forecasting as well. The first paper addresses a testing procedure for nonstationary time series. A class of nonlinear cointegration, smooth-transition (ST) cointegration, is examined; it nests the previously developed linear and threshold cointegration. An F-type test for ST cointegration is derived when stationary, rather than nonstationary, transition variables are imposed. The latter render the test standard, while the former make it nonstandard. This has important implications for empirical work: it is crucial to distinguish between the cases with stationary and nonstationary transition variables so that the correct test is used. The second and fourth papers develop testing approaches for stationary time series. In particular, the vector smooth-transition autoregressive (VSTAR) model is extended to allow for common nonlinear features (CNFs), and these two papers propose a modeling procedure and derive tests for the presence of CNFs. Drawing on the model specification and testing contributions above, the third paper considers forecasting with vector nonlinear time series models and extends the procedures available for univariate nonlinear models. The VSTAR model with CNFs and the ST cointegration model of the previous papers are exemplified in detail and thereafter illustrated on two corresponding macroeconomic data sets.
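Smooth-transition models of the kind referred to above are typically built around a logistic transition function bounded between 0 and 1. The sketch below shows that standard form; the thesis's exact specification of the transition may differ.

```python
# Standard logistic transition function used in smooth-transition (STAR-type)
# models: G(s; gamma, c) = 1 / (1 + exp(-gamma * (s - c))). It is near 0 far
# below the location c, near 1 far above it, and crosses 1/2 at s = c.
import numpy as np

def logistic_transition(s, gamma, c):
    """Smoothly interpolates between two regimes as s moves through c."""
    return 1.0 / (1.0 + np.exp(-gamma * (np.asarray(s, dtype=float) - c)))

# Large gamma approaches a step at c (a threshold model); small gamma gives
# a gradual transition between the two regimes.
G = logistic_transition([-5.0, 0.0, 5.0], gamma=2.0, c=0.0)
```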
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Introduction: Schwannomas are benign and relatively infrequent tumors of the peripheral nerves, derived from the nerve-supporting Schwann cells. Study Design: Data were collected on the clinical manifestations (sex, age), location, size and symptoms of the lesions, as well as the evolution time and the initial (presumptive) diagnosis. Results: Twelve patients were documented, with a mean age of 29.5 ± 12.1 years (range 16-50) and a balanced gender distribution. The mean duration of the lesions was 42.17 ± 45.3 months. The lesion located in the floor of the mouth was the largest tumor, measuring about 4 cm in maximum diameter, while the average size of the 12 schwannomas was 2.04 ± 1.1 cm. Conclusion: We present 12 oral schwannomas diagnosed and treated over a period of 10 years. © Medicina Oral S. L. C.I.F. B 96689336 - eISSN: 1989-5488.
Abstract:
Azide-alkyne Huisgen click chemistry provides new synthetic routes for making thermoplastic polytriazole polymers without solvent or catalyst. This method was used to polymerize three diester dialkyne monomers with a lipid-derived 18-carbon diazide to produce a series of polymers (labelled C18C18, C18C9 and C18C4 based on monomer chain lengths) free of residual solvent and catalyst. The three diester dialkyne monomers were synthesized from renewable sources with ester chain lengths of 4, 9 and 18 carbons. Significant differences in thermal and mechanical properties were observed between C18C9 and the two other polymers: C18C9 presented a lower melting temperature, higher elongation at break and reduced Young's modulus compared to C18C4 and C18C18. This was due to the odd-even effect induced by the number of carbon atoms in the monomers, which oriented the ester linkages of C18C9 in the same direction, thereby reducing hydrogen bonding. The thermoplastic polytriazoles presented are novel polymers derived from vegetable oil with favourable mechanical and thermal properties, suitable for a large range of applications where no residual solvent or catalyst can be tolerated. Their added potential biocompatibility and biodegradability make them ideal for applications in the medical and pharmaceutical industries.
Abstract:
The synthesis of a series of omega-hydroxyfatty acid (omega-OHFA) monomers and their methyl ester derivatives (Me-omega-OHFA) from mono-unsaturated fatty acids and alcohols via ozonolysis-reduction/cross-metathesis reactions is described. Melt polycondensation of the monomers yielded thermoplastic poly(omega-hydroxyfatty acid)s [-(CH2)(n)-COO-](x) with medium (n = 8 and 12) and long (n = 17) repeating monomer units. The omega-OHFAs and Me-omega-OHFAs were all obtained in good yield (>= 80%) and purity (>= 97%), as established by H-1 NMR, Fourier transform infrared spectroscopy (FT-IR), mass spectrometry (ESI-MS) and high performance liquid chromatography (HPLC) analyses. The average molecular size (M-n) and distribution (PDI) of the poly(omega-hydroxyfatty acid)s (P(omega-OHFA)s) and poly(omega-hydroxyfatty ester)s (P(Me-omega-OHFA)s), as determined by GPC, varied with the amount of the organo-metallic Ti(IV) isopropoxide [Ti(OiPr)(4)] polycondensation catalyst, the reaction time and the temperature. An optimization of the polymerization process provided P(omega-OHFA)s and P(Me-omega-OHFA)s with M-n and PDI values desirable for high-end applications. Co-polymerization of the long chain (n = 12) and medium chain (n = 8) Me-omega-OHFAs by melt polycondensation yielded poly(omega-hydroxytridecanoate/omega-hydroxynonanoate) random co-polyesters (M-n = 11000-18500 g mol(-1)) with varying molar compositions.
Abstract:
The physical properties of three vegetable oil derived medium and long chain poly(omega-hydroxy fatty ester)s (P(Me-omega-OHFA)s), namely poly(omega-hydroxynonanoate) [P(Me-omega-OHC9)], poly(omega-hydroxytridecanoate) [P(Me-omega-OHC13)] and poly(omega-hydroxyoctadecanoate) [P(Me-omega-OHC18)] (n = 8, 12 and 17, respectively), of the [-(CH2)(n)-COO-](x) polyester homologous series are presented. The effects of M-n (M-n = 10-40 kg mol(-1)) and n on the crystal structure and thermal and mechanical properties of the P(Me-omega-OHFA)s were investigated by wide-angle X-ray diffraction (WAXD), TGA, DSC, dynamic mechanical analysis (DMA) and tensile analysis, and are discussed in the context of the [-(CH2)(n)-COO-](x) polyester homologous series, contrasted with linear polyethylene (PE). For all P(Me-omega-OHFA)s the WAXD data indicated an orthorhombic crystal phase reminiscent of linear PE, with crystallinity (X-c = 50%-80%) depending strongly on M-n. The glass transition temperature and Young's modulus of the P(Me-omega-OHFA)s increased with X-c. The DSC, DMA and TGA studies of the P(Me-omega-OHFA)s (n = 8, 12 and 17) indicated strong correlations between the melting, glass transition and thermal degradation behavior and n. The established predictive structure-property relationships can be used for the custom engineering of polyester materials suitable for specialty and commodity applications. (c) 2014 Society of Chemical Industry
Abstract:
The theoretical E-curve for the laminar flow of non-Newtonian fluids in circular tubes may not be accurate for real tubular systems with diffusion, mechanical vibration, wall roughness, pipe fittings, curves, coils or corrugated walls. Deviations from the idealized laminar flow reactor (LFR) cannot be well represented using the axial dispersion or tanks-in-series models of residence time distribution (RTD). In this work, four RTD models derived from non-ideal velocity profiles in segregated tube flow are proposed. They were used to represent the RTD of three tubular systems working with Newtonian and pseudoplastic fluids, and other RTD models were considered for comparison. The proposed models provided good fits, and it was possible to determine the active volumes. It is expected that these models will be useful for the analysis of LFRs and for the evaluation of continuous thermal processing of viscous foods.
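For reference, the idealized E-curve mentioned above has a well-known closed form in the Newtonian case: for segregated laminar flow in a circular tube, E(θ) = 1/(2θ³) for dimensionless time θ ≥ 0.5 and zero before the first fluid emerges. The sketch below encodes that textbook baseline, not the paper's four non-ideal-profile models.

```python
# Textbook baseline RTD: E(theta) = 1 / (2 * theta^3) for theta >= 1/2,
# zero otherwise (segregated laminar Newtonian flow in a circular tube).
# The integral of E over theta >= 1/2 equals 1, as an RTD must.
import numpy as np

def e_curve_laminar(theta):
    """Theoretical E-curve of idealized laminar Newtonian tube flow."""
    theta = np.asarray(theta, dtype=float)
    E = np.zeros_like(theta)
    mask = theta >= 0.5                    # first fluid exits at theta = 1/2
    E[mask] = 1.0 / (2.0 * theta[mask] ** 3)
    return E

E = e_curve_laminar(np.array([0.4, 0.5, 1.0, 2.0]))  # [0, 4, 0.5, 0.0625]
```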
Abstract:
Background: A popular model for gene regulatory networks is the Boolean network model. In this paper, we propose an algorithm to analyze gene regulatory interactions using the Boolean network model and time-series data. The Boolean network considered is restricted in the sense that only a subset of all possible Boolean functions is allowed. We explore some mathematical properties of these restricted Boolean networks in order to avoid a full-search approach: the problem is modeled as a Constraint Satisfaction Problem (CSP), and CSP techniques are used to solve it. Results: We applied the proposed algorithm to two data sets. First, we used an artificial dataset obtained from a model of the budding yeast cell cycle; the second data set is derived from experiments performed using HeLa cells. The results show that some interactions can be fully, or at least partially, determined under the Boolean model considered. Conclusions: The proposed algorithm can be used as a first step in the detection of gene/protein interactions. It is able to infer gene relationships from time-series data of gene expression, and this inference process can be aided by a priori knowledge where available.
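The core inference step can be illustrated in miniature: given a Boolean state time series, enumerate candidate regulators and restricted Boolean functions for one target gene, keeping those consistent with every observed transition. The restriction used here (AND/OR of up to two possibly negated inputs) and the toy data are illustrative assumptions; the paper's restricted function class and CSP formulation are more general.

```python
# Toy consistency check: keep each (regulator pair, restricted function)
# whose predictions match the target gene's next state at every time step.
from itertools import product

series = [  # toy time series: rows are states (g0, g1, g2) at t = 0..4
    (0, 0, 1), (0, 1, 1), (1, 1, 0), (1, 0, 0), (0, 0, 1),
]
target = 2  # infer the update rule of gene g2

consistent = []
for i, j in product(range(3), repeat=2):               # candidate regulators
    for ni, nj, op in product((False, True), (False, True), ("and", "or")):
        def f(state, i=i, j=j, ni=ni, nj=nj, op=op):
            a = state[i] ^ ni                          # optionally negate input
            b = state[j] ^ nj
            return (a & b) if op == "and" else (a | b)
        if all(f(series[t]) == series[t + 1][target]
               for t in range(len(series) - 1)):
            consistent.append((i, j, ni, nj, op))
# Here g2(t+1) = NOT g1(t) fits the data, so rules equivalent to it survive.
```

A real CSP solver would prune this enumeration with constraint propagation rather than testing every candidate.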
Abstract:
Experimental solubility data are presented for a set of binary systems composed of ionic liquids (ILs) derived from pyridinium, with the tetrafluoroborate anion, and normal alcohols ranging from ethanol to decanol, in the temperature interval 275-420 K at atmospheric pressure. For each case, the miscibility curve and the upper critical solubility temperature (UCST) values are presented. The effects of the ILs on the behavior of solutions with alkanols are analyzed, paying special attention to the pyridine derivatives and considering a series of structural characteristics of the compounds involved.
Abstract:
This work provides a step forward in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we take the reader through the fundamental notions of probability and stochastic processes, as well as stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focus on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We study LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. We then introduce the theory of fractional integrals and derivatives, which turns out to be very appropriate for studying and modeling systems with long-memory properties. After introducing the basic concepts, we provide many examples and applications. For instance, we investigate the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. We then focus on generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations are obtained using fractional integrals and derivatives of distributed orders.
In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduce and study the generalized grey Brownian motion (ggBm), a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we remark many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t); all these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focus on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space; however, we are able to provide a characterization independent of the underlying probability space. We also point out that the generalized grey Brownian motion is a direct generalization of a Gaussian process, and in particular that it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduce and analyze a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We start from the forward drift equation, which is made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation can be interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then consider the subordinated process Y(t) = X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t).
We develop several applications and derive exact solutions. Moreover, we consider different stochastic models for the given equations, providing path simulations.
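The long-range dependence discussed above is captured concretely by the autocovariance of fractional Gaussian noise, γ(k) = (σ²/2)(|k−1|^{2H} − 2|k|^{2H} + |k+1|^{2H}), which for H > 1/2 decays so slowly that the autocovariances are not summable, while H = 1/2 recovers white noise. A quick numerical sketch of this standard formula:

```python
# Autocovariance of fractional Gaussian noise (the increment process of fBm
# with Hurst index H): a second difference of |k|^{2H}. For H > 1/2 it decays
# hyperbolically (long memory); for H = 1/2 it vanishes at all lags k >= 1.
import numpy as np

def fgn_autocov(k, H, sigma2=1.0):
    """gamma(k) = (sigma2/2) * (|k-1|^{2H} - 2|k|^{2H} + |k+1|^{2H})."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * sigma2 * (np.abs(k - 1) ** (2 * H)
                           - 2 * k ** (2 * H) + (k + 1) ** (2 * H))

k = np.arange(0, 1000)
gamma_lrd = fgn_autocov(k, H=0.8)  # slow hyperbolic decay: long memory
gamma_srd = fgn_autocov(k, H=0.5)  # white-noise case: zero for all k >= 1
```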
Abstract:
The surface electrocardiogram (ECG) is an established diagnostic tool for the detection of abnormalities in the electrical activity of the heart. The interest of the ECG, however, extends beyond diagnosis: in recent years, studies in cognitive psychophysiology have related heart rate variability (HRV) to memory performance and mental workload. The aim of this thesis was to analyze the variability of surface-ECG-derived rhythms at two different time scales: the discrete-event time scale typical of beat-related features (Objective I), and the “continuous” time scale of separated sources in the ECG (Objective II), in scenarios relevant to psychophysiological and clinical research, respectively. Objective I: Joint time-frequency and non-linear analysis of HRV was carried out with the goal of assessing psychophysiological workload (PPW) in response to working-memory-engaging tasks. Results from fourteen healthy young subjects suggest the potential use of the proposed indices in discriminating PPW levels in response to varying memory-search task difficulty. Objective II: A novel source-cancellation method based on morphology clustering was proposed for the estimation of the atrial wavefront in atrial fibrillation (AF) from body surface potential maps. A strong direct correlation between the spectral concentration (SC) of the atrial wavefront and the temporal variability of the spectral distribution was shown in persistent AF patients, suggesting that with higher SC a shorter observation time is required to collect the spectral distribution from which the fibrillatory rate is estimated. This could be time- and cost-effective in clinical decision-making. The results held for reduced lead sets, suggesting that a simplified setup could also be considered, further reducing the costs. In designing the methods of this thesis, an online signal-processing approach was adopted, with the goal of contributing to real-world applicability. An algorithm for automatic assessment of ambulatory ECG quality and an automatic ECG delineation algorithm were also designed and validated.
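For context, two standard time-domain HRV indices of the kind such beat-related analyses build on are SDNN and RMSSD, computed from an RR-interval series. This is a generic textbook sketch, not the thesis's time-frequency or non-linear indices; the RR values are synthetic.

```python
# Standard time-domain HRV indices from an RR (beat-to-beat) interval series:
# SDNN = standard deviation of all intervals; RMSSD = root mean square of
# successive interval differences (both commonly reported in milliseconds).
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of all RR intervals (sample std, ddof=1)."""
    return float(np.std(np.asarray(rr_ms, dtype=float), ddof=1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = [812, 790, 805, 830, 818, 795, 801]  # synthetic RR intervals (ms)
indices = {"SDNN": sdnn(rr), "RMSSD": rmssd(rr)}
```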