949 results for k-Error linear complexity
Abstract:
In this work, an Underactuated Cable-Driven Parallel Robot (UACDPR) operating in three-dimensional Euclidean space is considered. The end-effector has 6 degrees of freedom and is actuated by 4 cables, so from a mechanical point of view the robot is underconstrained. However, since only three pose variables are controlled, the degree of redundancy from a control standpoint can be considered equal to one. The aim of this thesis is to design a feedback controller for point-to-point motion that satisfies the transient requirements and is capable of reducing the oscillations that derive from the reduced number of constraints. Force control is chosen for positioning the end-effector, and the error with respect to the reference is computed from the measurements of several sensors (load cells, encoders and inclinometers), namely cable lengths, cable tensions and platform orientation. In order to express the relation between pose and cable tensions, the inverse model is derived from the kinematic and dynamic models of the parallel robot. The intrinsically non-linear nature of UACDPR systems introduces an additional level of complexity in the development of the controller; as a result, the control law is composed of a partial feedback linearization and damping injection to reduce orientation instability. The fourth cable makes it possible to satisfy a further tension-distribution constraint, ensuring positive tensions at every instant of motion. Simulations with different initial conditions are then presented in order to optimize the control parameters; lastly, an experimental validation of the model is carried out, the results are analysed, and the limits of the presented approach are defined.
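The damping-injection part of the control law above can be illustrated with a toy model. The sketch below simulates a single oscillatory error coordinate under feedback-linearized dynamics with an added damping term; the frequency, gain and initial conditions are invented for illustration and do not come from the thesis.

```python
import math

def simulate(k_d, steps=2000, dt=0.005):
    """Simulate a 1-DoF oscillation e'' = -w^2 e - k_d e' (damping injection).

    w is an illustrative natural frequency of the residual swing; k_d is
    the injected damping gain. Returns the final amplitude envelope
    |e| + |e'|/w after steps*dt seconds of motion.
    """
    w = 2.0 * math.pi          # illustrative 1 Hz swing frequency
    e, de = 0.1, 0.0           # initial orientation error and rate
    for _ in range(steps):
        dde = -w * w * e - k_d * de   # feedback-linearized dynamics + damping
        de += dde * dt
        e += de * dt
    return abs(e) + abs(de) / w

# Without damping injection (k_d = 0) the oscillation persists; with it,
# the residual swing decays essentially to zero over the same horizon.
print(simulate(k_d=0.0), simulate(k_d=4.0))
```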
Abstract:
Intermediate-complexity general circulation models are a fundamental tool to investigate the role of internal and external variability within the general circulation of the atmosphere and ocean. The model used in this thesis is an intermediate-complexity atmospheric general circulation model (SPEEDY) coupled to a state-of-the-art modelling framework for the ocean (NEMO). We assess to what extent the model allows a realistic simulation of the most prominent natural mode of variability at interannual time scales: the El Niño-Southern Oscillation (ENSO). To a good approximation, the model represents the ENSO-induced Sea Surface Temperature (SST) pattern in the equatorial Pacific, despite a cold-tongue-like bias. The model underestimates (overestimates) the typical ENSO spatial variability during the winter (summer) seasons. The mid-latitude response to ENSO reveals that the typical poleward stationary Rossby wave train is reasonably well represented. The spectral decomposition of ENSO features a spectrum that lacks periodicity at high frequencies and is overly periodic at interannual timescales. We then implemented an idealised transient mean-state change in the SPEEDY model. A warmer climate is simulated by altering the parametrized radiative fluxes so as to correspond to a doubled carbon dioxide absorptivity. Results indicate that the globally averaged surface air temperature increases by 0.76 K. Regionally, the induced signal on the SST field features a significant warming over the central-western Pacific and an El Niño-like warming in the subtropics. In general, the model features a weakening of the tropical Walker circulation and a poleward expansion of the local Hadley cell. This response is also detected in a poleward rearrangement of the tropical convective rainfall pattern. The model setup implemented here provides valid theoretical support for future studies on climate sensitivity and forced modes of variability under mean-state changes.
Abstract:
Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations requiring more complicated tasks and higher accuracy. However, with Dennard scaling approaching its ultimate power limit, software efficiency also plays an important role in increasing the overall performance of a computation. Tools that measure application performance in these increasingly complex environments provide insights into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as the Intel Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool such as the Performance Application Programming Interface (PAPI). Since problems in many heterogeneous fields can be represented as large linear systems, an optimized and scalable linear-system solver can significantly decrease the time spent computing their solution. One of the most widely used algorithms for the resolution of large simulations is Gaussian Elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. Another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian Elimination from ScaLAPACK, profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it also collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, which are integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
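For reference, the Gaussian Elimination algorithm that ScaLAPACK parallelizes can be sketched in its serial textbook form. The minimal Python version below, with partial pivoting, only illustrates the algorithm being profiled; it is not the block-cyclic ScaLAPACK implementation nor the Inhibition Method, and it assumes a nonsingular matrix.

```python
def gaussian_elimination(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    A is a list of row lists, b a list of right-hand-side values; both
    are copied, not modified. Serial textbook algorithm, assuming A is
    nonsingular.
    """
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        # Partial pivoting: bring the row with the largest |pivot| up.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                 # eliminate below the pivot
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Solves 2x + y = 3, x + 3y = 5.
print(gaussian_elimination([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))
```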
Abstract:
Pancreatic β-cells are highly sensitive to suboptimal or excess nutrients, as occurs in protein malnutrition and obesity. Taurine (Tau) improves insulin secretion in response to nutrients and depolarizing agents. Here, we assessed the expression and function of Cav and KATP channels in islets from malnourished mice fed a high-fat diet (HFD) and supplemented with Tau. Weaned mice received a normal (C) or a low-protein diet (R) for 6 weeks. Half of each group were then fed an HFD for 8 weeks, without (CH, RH) or with (CHT, RHT) 5% Tau supplementation since weaning. Isolated islets from R mice showed lower insulin release with glucose and depolarizing stimuli. In CH islets, insulin secretion was increased, and this was associated with enhanced KATP inhibition and Cav activity. RH islets secreted less insulin at high K⁺ concentration and showed enhanced KATP activity. Tau supplementation normalized K⁺-induced secretion and enhanced glucose-induced Ca²⁺ influx in RHT islets. R islets presented lower Ca²⁺ influx in response to tolbutamide, and higher protein content and activity of the Kir6.2 subunit of the KATP channel. Tau increased the protein content of the α1.2 subunit of the Cav channels and of the SNARE proteins SNAP-25 and Synt-1 in CHT islets, whereas in RHT islets, the Kir6.2 and Synt-1 proteins were increased. In conclusion, the impaired function of R islets is related to a higher content and activity of the KATP channels. Tau treatment enhanced the secretory capacity of RHT islets by improving the protein expression and inhibition of the KATP channels and enhancing the islet content of Synt-1.
Abstract:
One of the great challenges for the scientific community in theories of genetic information, genetic communication and genetic coding is to determine a mathematical structure related to DNA sequences. In this paper we propose a model of an intra-cellular transmission system of genetic information, similar to a model of a power- and bandwidth-efficient digital communication system, in order to identify a mathematical structure in biologically relevant DNA sequences. The model of a transmission system of genetic information is concerned with the identification, reproduction and mathematical classification of the nucleotide sequence of single-stranded DNA by the genetic encoder. Hence, a genetic encoder is devised in which labelings and cyclic codes are established. Establishing the algebraic structure of the corresponding code alphabets, mappings, labelings, primitive polynomials (p(x)) and code generator polynomials (g(x)) is quite important in characterizing subclasses of error-correcting G-linear codes. These latter codes are useful for the identification, reproduction and mathematical classification of DNA sequences. The characterization of this model may contribute to the development of a methodology that can be applied in the analysis of mutations and polymorphisms, the production of new drugs and genetic improvement, among other things, resulting in reduced laboratory time and costs.
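A minimal sketch of the cyclic-code machinery mentioned above, assuming one illustrative labeling of the nucleotides onto Z4 (the framework admits several labelings; this particular assignment is invented for illustration): a DNA word belongs to a cyclic code with generator polynomial g(x) exactly when g(x) divides the word's polynomial.

```python
# Hypothetical labeling of nucleotides onto Z4; other labelings are possible.
LABEL = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def poly_rem_z4(c, g):
    """Remainder of polynomial c(x) divided by g(x), coefficients in Z4.

    Polynomials are coefficient lists, lowest degree first. The leading
    coefficient of g must be a unit in Z4 (1 or 3) so division is defined.
    """
    inv = {1: 1, 3: 3}                    # units of Z4 and their inverses
    c = c[:]
    lead = inv[g[-1] % 4]
    for i in range(len(c) - 1, len(g) - 2, -1):
        f = (c[i] * lead) % 4             # factor that cancels c's top term
        for j, gj in enumerate(g):        # subtract f * g(x) * x^(i-deg g)
            c[i - len(g) + 1 + j] = (c[i - len(g) + 1 + j] - f * gj) % 4
    return c[:len(g) - 1]

def is_codeword(seq, g):
    """A DNA word is in the cyclic code iff g(x) divides its polynomial."""
    return all(r == 0 for r in poly_rem_z4([LABEL[s] for s in seq], g))

# "GCT" maps to 2 + x + 3x^2 = (1 + x)(2 + 3x) over Z4, so it lies in the
# code generated by g(x) = 1 + x, while "ACT" does not.
print(is_codeword("GCT", [1, 1]), is_codeword("ACT", [1, 1]))
```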
Abstract:
This study investigated the effect of simulated microwave disinfection (SMD) on the linear dimensional changes, hardness and impact strength of acrylic resins under different polymerization cycles. Metal dies with referential points were embedded in flasks with dental stone. Samples of Classico and Vipi acrylic resins were made following the manufacturers' recommendations. The assessed polymerization cycles were: A) water bath at 74 ºC for 9 h; B) water bath at 74 ºC for 8 h with the temperature then increased to 100 ºC for 1 h; C) water bath at 74 ºC for 2 h with the temperature then increased to 100 ºC for 1 h; and D) water bath at 120 ºC and a pressure of 60 pounds. Linear dimensional distances in length and width were measured after SMD and after water storage at 37 ºC for 7 and 30 days using an optical microscope. SMD was carried out with the samples immersed in 150 mL of water in an oven (650 W for 3 min). A load of 25 gf for 10 s was used in the hardness test. The Charpy impact test was performed with 40 kpcm. Data were submitted to ANOVA and Tukey's test (5%). The Classico resin was dimensionally stable in length in the A and D cycles for all periods, while the Vipi resin was stable in the A, B and C cycles for all periods. The Classico resin was dimensionally stable in width in the C and D cycles for all periods, and the Vipi resin was stable in all cycles and periods. The hardness values for the Classico resin were stable in all cycles and periods, while the Vipi resin was stable only in the C cycle for all periods. Impact strength values for the Classico resin were stable in the A, C and D cycles for all periods, while the Vipi resin was stable in all cycles and periods. SMD had different effects on the linear dimensional changes, hardness and impact strength of acrylic resins submitted to different polymerization cycles when both the post-SMD and water-storage periods were considered.
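The ANOVA step of the analysis above can be sketched as follows. The hardness readings are invented for illustration (not the study's data), and only the F test that precedes Tukey's pairwise comparisons is shown.

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across several groups of readings."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group sum of squares: group means vs the grand mean.
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: readings vs their own group mean.
    ssw = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Illustrative hardness readings for three hypothetical polymerization cycles.
f = one_way_anova_f([18.1, 18.4, 17.9, 18.3],
                    [18.2, 18.0, 18.5, 18.1],
                    [16.9, 17.1, 16.8, 17.2])
# The F(2, 9) critical value at the study's 5% level is about 4.26; an F
# above it sends the analysis on to Tukey's pairwise comparisons.
print(f > 4.26)
```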
Abstract:
Dipyrone (metamizole) is an analgesic pro-drug used to control moderate pain. It is metabolized into two major bioactive metabolites: 4-methylaminoantipyrine (4-MAA) and 4-aminoantipyrine (4-AA). The aim of this study was to investigate the participation of peripheral CB1 and CB2 cannabinoid receptor activation in the anti-hyperalgesic effect of dipyrone, 4-MAA or 4-AA. PGE2 (100 ng/50 µL/paw) was locally administered into the hindpaw of male Wistar rats, and the mechanical nociceptive threshold was quantified by the electronic von Frey test before and 3 h after its injection. Dipyrone, 4-MAA or 4-AA was administered 30 min before the von Frey test. The selective CB1 receptor antagonist AM251, the CB2 receptor antagonist AM630, the cGMP inhibitor ODQ or the KATP channel blocker glibenclamide was administered 30 min before dipyrone, 4-MAA or 4-AA. An antisense ODN against CB1 receptor expression was administered intrathecally once a day for four consecutive days. PGE2-induced mechanical hyperalgesia was inhibited by dipyrone, 4-MAA and 4-AA in a dose-dependent manner. AM251 or the antisense ODN against the neuronal CB1 receptor, but not AM630, reversed the anti-hyperalgesic effect mediated by 4-AA, but not that of dipyrone or 4-MAA. On the other hand, the anti-hyperalgesic effect of dipyrone or 4-MAA was reversed by glibenclamide or ODQ. These results suggest that the activation of the neuronal CB1, but not the CB2 receptor, in peripheral tissue is involved in the anti-hyperalgesic effect of 4-aminoantipyrine. In addition, 4-methylaminoantipyrine mediates its anti-hyperalgesic effect through cGMP activation and KATP channel opening.
Abstract:
In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits, depending on the quantification assay. A complication arises when these continuous repeated measures have heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To accommodate the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization-type algorithm is developed for computing the maximum likelihood estimates, obtaining as a by-product the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus-AIDS (HIV-AIDS) study and several simulation studies.
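The damped exponential correlation (DEC) structure mentioned above has a compact closed form, corr(y_j, y_k) = φ1^(|t_j − t_k|^φ2). A minimal sketch, with illustrative (invented) measurement times:

```python
def dec_corr(times, phi1, phi2):
    """Damped exponential correlation matrix for irregular measurement times.

    corr(y_j, y_k) = phi1 ** (|t_j - t_k| ** phi2), with 0 < phi1 < 1 and
    phi2 >= 0. phi2 = 1 recovers the continuous-time AR(1) structure;
    phi2 = 0 gives compound symmetry; diagonal entries are fixed at 1.
    """
    return [[1.0 if tj == tk else phi1 ** (abs(tj - tk) ** phi2)
             for tk in times] for tj in times]

# Irregularly spaced viral-load measurement times in days (illustrative).
R = dec_corr([0.0, 7.0, 30.0, 90.0], phi1=0.9, phi2=0.5)
for row in R:
    print([round(v, 3) for v in row])
```

The damping exponent phi2 lets the correlation decay slower or faster than the AR(1) rate, which is what compensates for the irregular observation times.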
Abstract:
X-ray fluorescence (XRF) is a fast, low-cost, nondestructive, and truly multielement analytical technique. The objectives of this study are to quantify the amount of Na⁺ and K⁺ in samples of table salt (refined, marine, and light) and to compare three different quantification methodologies using XRF. A fundamental-parameter method revealed difficulties in accurately quantifying lighter elements (Z < 22). A univariate methodology based on peak-area calibration is an attractive alternative, even though additional steps of data manipulation may consume some time. Quantifications were performed with good correlations for both Na (r = 0.974) and K (r = 0.992). A partial least-squares (PLS) regression method with five latent variables was very fast. Na⁺ quantifications provided calibration errors lower than 16% and a correlation of 0.995. Of great concern was the observation of high Na⁺ levels in low-sodium salts. The presented application can be performed in a fast and multielement fashion, in accordance with Green Chemistry specifications.
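The univariate peak-area calibration route can be sketched with ordinary least squares. The peak areas and standard concentrations below are invented for illustration, not the paper's data.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical Na peak areas (counts) for Na+ calibration standards (mg/g);
# invented numbers for illustration only.
areas = [120.0, 238.0, 362.0, 481.0, 597.0]
concs = [10.0, 20.0, 30.0, 40.0, 50.0]

slope, intercept = linear_fit(areas, concs)
unknown_area = 300.0
print(slope * unknown_area + intercept)   # predicted Na+ concentration
```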
Abstract:
A method using the ring-oven technique for pre-concentration in filter paper discs and near-infrared hyperspectral imaging is proposed to identify four detergent and dispersant additives and to determine their concentration in gasoline. Different approaches were used to select the best image data processing for gathering the relevant spectral information. This was attained by selecting the pixels of the region of interest (ROI), using a pre-calculated threshold on the PCA scores arranged as histograms, to select the spectra set; summing up the selected spectra to achieve representativeness; and compensating for the superimposed filter-paper spectral information, also supported by score histograms for each individual sample. The best classification model was achieved using linear discriminant analysis and a genetic algorithm (LDA/GA), whose correct classification rate in the external validation set was 92%. Prior classification of the type of additive present in the gasoline is necessary to define the PLS model required for its quantitative determination. Considering that two of the additives studied present high spectral similarity, a single PLS regression model was constructed to predict their content in gasoline, while two additional models were used for the remaining additives. The external validation of these regression models showed mean percentage prediction errors ranging from 5 to 15%.
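The ROI-selection step described above (thresholding pre-calculated PCA scores, then summing the selected pixel spectra) can be sketched as follows; the function names, toy spectra and threshold are illustrative, not from the paper.

```python
def roi_sum_spectrum(spectra, scores, threshold):
    """Sum the spectra of pixels whose first PC score exceeds a
    pre-calculated threshold (the ROI-selection step).

    spectra: one spectrum (equal-length list of absorbances) per pixel;
    scores: one pre-computed PC1 score per pixel, in the same order.
    Returns the channel-wise sum over the retained ROI pixels.
    """
    roi = [s for s, sc in zip(spectra, scores) if sc > threshold]
    return [sum(vals) for vals in zip(*roi)]

# Three pixels with two spectral channels each; only the last two pixels
# score above the (illustrative) threshold and enter the ROI sum.
print(roi_sum_spectrum([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
                       [0.1, 0.9, 0.8], 0.5))   # -> [8.0, 10.0]
```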
Abstract:
Ten common doubts of chemistry students and professionals about statistical applications are discussed. The use of the N-1 denominator instead of N in the standard deviation is described. The statistical meaning of the denominators of the root mean square error of calibration (RMSEC) and the root mean square error of validation (RMSEV) is given for researchers using multivariate calibration methods. The reason why scientists and engineers use the average instead of the median is explained. Several problematic aspects of regression and correlation are treated. The popular use of triplicate experiments in teaching and research laboratories is shown to have its origin in statistical confidence intervals. Nonparametric statistics and bootstrapping methods round out the discussion.
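Two of the points above, the N−1 denominator and the RMSEC/RMSEV denominators, can be made concrete in a few lines. The RMSEC convention shown (subtracting the number of estimated parameters) is one common choice; the data are invented for illustration.

```python
import math

def sample_std(xs):
    """Standard deviation with the N-1 denominator (Bessel's correction).

    Dividing by N would bias the variance estimate low, because the same
    data were already used to estimate the mean.
    """
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def rmse(reference, predicted, n_params=0):
    """Root mean square error over paired values.

    For RMSEC, n_params subtracts the degrees of freedom consumed by the
    calibration model; for RMSEV, computed on an independent validation
    set, n_params stays 0 and the plain N denominator is used.
    """
    resid = sum((r - p) ** 2 for r, p in zip(reference, predicted))
    return math.sqrt(resid / (len(reference) - n_params))

# Five replicate readings of a nominal 10.0 standard (invented values).
print(sample_std([9.8, 10.1, 10.0, 10.3, 9.9]))
```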
Abstract:
This study examined the influence of three polymerization cycles (1: heat cure, long cycle; 2: heat cure, short cycle; and 3: microwave activation) on the linear dimensions of three denture base resins, immediately after deflasking and 30 days after storage in distilled water at 37 ± 2 ºC. The acrylic resins used were Clássico, Lucitone 550 and Acron MC. The first two resins were submitted to all three polymerization cycles, and the Acron MC resin was cured by microwave activation only. The samples had three marks and dimensions of 65 mm in length, 10 mm in width and 3 mm in thickness. Twenty-one test specimens were fabricated for each combination of resin and cure cycle, and they were submitted to three linear dimensional evaluations at two positions (A and B). The changes were evaluated using a microscope. The results indicated that all acrylic resins, regardless of the cure cycle, showed increased linear dimensions after 30 days of storage in water. The composition of the acrylic resin affected the results more than the cure cycles did, and the conventional acrylic resins (Lucitone 550 and Clássico) cured by microwave activation presented results similar to those of the resin specific for microwave activation.
Abstract:
BACKGROUND: Changes in heart rate during the rest-exercise transition can be characterized by the application of mathematical calculations, such as deltas over 0-10 and 0-30 seconds to infer parasympathetic activity, and linear regression and deltas applied to data in the 60 to 240 second range to infer sympathetic activity. The objective of this study was to test the hypothesis that young and middle-aged subjects have different heart rate responses to exercise of moderate and high intensity, as assessed by these different mathematical calculations. METHODS: Seven middle-aged men and ten young men, all apparently healthy, were subjected to constant-load tests (intense and moderate) on a cycle ergometer. The heart rate data were submitted to delta analyses (0-10, 0-30 and 60-240 seconds) and simple linear regression (60-240 seconds). The parameters obtained from the simple linear regression analysis were the intercept and the slope. We used the Shapiro-Wilk test to check the distribution of the data and the unpaired t test for comparisons between groups. The level of statistical significance was 5%. RESULTS: The intercept and the 0-10 second delta were lower in the middle-aged group at both loads tested, and the slope was lower in the middle-aged group in moderate exercise. CONCLUSION: The young subjects presented a greater magnitude of vagal withdrawal in the initial stage of the HR response during constant-load exercise and a higher speed of sympathetic adjustment in moderate exercise.
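The deltas and the 60-240 s linear regression described in the methods can be sketched as follows; the heart-rate samples and time grid are invented for illustration, not the study's recordings.

```python
def hr_indices(times, hr):
    """Deltas and a 60-240 s linear fit over a heart-rate series.

    times: seconds from exercise onset (must include 0, 10 and 30);
    hr: heart rate in bpm at each time. The early deltas index vagal
    withdrawal; the 60-240 s slope and intercept index the sympathetic
    adjustment, following the calculations described in the abstract.
    """
    def at(t):
        return hr[times.index(t)]
    delta_0_10 = at(10) - at(0)
    delta_0_30 = at(30) - at(0)
    window = [(t, h) for t, h in zip(times, hr) if 60 <= t <= 240]
    n = len(window)
    mt = sum(t for t, _ in window) / n
    mh = sum(h for _, h in window) / n
    slope = sum((t - mt) * (h - mh) for t, h in window) / \
            sum((t - mt) ** 2 for t, _ in window)
    intercept = mh - slope * mt
    return delta_0_10, delta_0_30, slope, intercept

# Invented heart-rate samples (bpm) at selected times, for illustration.
times = [0, 10, 30, 60, 120, 180, 240]
hr = [70, 82, 90, 95, 100, 104, 107]
print(hr_indices(times, hr))
```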