976 results for parallel admission algorithm


Relevance: 30.00%

Abstract:

Underactuated cable-driven parallel robots (UACDPRs) move a 6-degree-of-freedom end-effector (EE) with fewer than 6 cables. This thesis proposes a new automatic calibration technique applicable to UACDPRs. The purpose of this work is to develop a method that uses free motion as an exciting trajectory for the acquisition of calibration data. The key point of this approach is to find a relationship between the unknown parameters to be calibrated (the cable lengths) and the parameters that can be measured by sensors (the swivel-pulley angles measured by encoders, and the roll and pitch angles measured by inclinometers on the platform). The equations involved are the geometrical-closure equations and the finite-difference velocity equations, solved with a least-squares algorithm. Simulations on a parallel robot driven by 4 cables are performed for validation. The ultimate purpose of the calibration method is the determination of the platform's initial pose. As a consequence of underactuation, the EE is underconstrained and, for assigned cable lengths, the EE pose cannot be obtained by forward kinematics alone. Hence, a direct-kinematics algorithm for a 4-cable UACDPR using redundant sensor measurements is proposed. The method measures two orientation parameters of the EE besides the cable lengths in order to determine the other four pose variables, namely 3 position coordinates and one additional orientation parameter. We then study the performance of the direct-kinematics algorithm by computing the sensitivity of its solution to measurement errors. Furthermore, upper limits on position and orientation errors are computed for bounded cable-length errors resulting from the calibration procedure and for roll- and pitch-angle errors due to inclinometer inaccuracies.
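The core numerical step, stacking linearized closure equations over many free-motion samples and solving for the cable-length corrections by least squares, can be sketched as follows. This is an illustrative reconstruction rather than the thesis code: the Jacobian rows and residuals are stand-ins for the robot's actual geometric model.

```python
# Hypothetical sketch: least-squares estimation of cable-length
# corrections from redundant sensor data (swivel-pulley angles,
# roll/pitch inclinometer readings). The rows of each Jacobian would
# come from the robot's geometrical-closure equations; here they are
# random stand-ins so the example is self-contained.
import numpy as np

def calibrate_cable_lengths(jacobians, residuals):
    """Stack one linearized closure equation per pose sample and solve
    for the cable-length corrections in a least-squares sense."""
    A = np.vstack(jacobians)         # (n_samples * n_eq, 4) for 4 cables
    b = np.concatenate(residuals)    # measured minus predicted closure
    delta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return delta                     # corrections to nominal lengths

# Toy usage: 50 free-motion poses, 2 closure equations per pose.
rng = np.random.default_rng(0)
true_delta = np.array([0.01, -0.02, 0.005, 0.0])
J = [rng.normal(size=(2, 4)) for _ in range(50)]
r = [Ji @ true_delta + rng.normal(scale=1e-4, size=2) for Ji in J]
print(calibrate_cable_lengths(J, r))   # close to true_delta
```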

Relevance: 30.00%

Abstract:

Modern high-performance computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations requiring more complicated tasks and higher accuracy. However, as a side effect of Dennard scaling approaching its ultimate power limit, software efficiency also plays an important role in increasing the overall performance of a computation. Tools that measure application performance in these increasingly complex environments provide insight into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as Intel's Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool such as the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as a large linear system, an optimized and scalable linear-system solver can significantly decrease the time spent computing its solution. One of the most widely used algorithms for solving large systems is Gaussian elimination, whose most popular HPC implementation is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. Another relevant algorithm, increasingly popular in academia, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of ScaLAPACK's Gaussian elimination by profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
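As a rough illustration of the kind of measurement RAPL enables, the sketch below reads the package energy counter that RAPL exposes through the Linux powercap sysfs interface, a lower-level route than going through PAPI. Path, units, and permissions are machine-dependent assumptions.

```python
# Minimal sketch, assuming a Linux machine that exposes the RAPL
# package energy counter via the powercap sysfs interface.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package-0 counter

def read_energy_uj(path=RAPL):
    with open(path) as f:
        return int(f.read())

def measure(fn, *args):
    """Return (result, joules, seconds) for one call. Counter
    wrap-around is ignored here; production code must handle it."""
    e0, t0 = read_energy_uj(), time.perf_counter()
    out = fn(*args)
    e1, t1 = read_energy_uj(), time.perf_counter()
    return out, (e1 - e0) / 1e6, t1 - t0

# e.g. x, joules, secs = measure(np.linalg.solve, A, b) around each run
```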

Relevance: 20.00%

Abstract:

Lipidic mixtures present a particular phase-change profile highly affected by their unique crystalline structure. However, classical solid-liquid equilibrium (SLE) thermodynamic modeling approaches, which assume the solid phase to be a pure component, sometimes fail to describe the phase behavior correctly, and their inadequacy increases with the complexity of the system. To overcome some of these problems, this study describes a new procedure to depict the SLE of fatty binary mixtures presenting solid solutions, namely the Crystal-T algorithm. Considering the non-ideality of both the liquid and solid phases, this algorithm determines the temperatures at which the first and last crystals of the mixture melt. The evaluation focuses on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described using excess-Gibbs-energy-based equations, with the group-contribution UNIFAC model for the calculation of the activity coefficients of both liquid and solid phases. Very low deviations between theoretical and experimental data evidence the strength of the algorithm, enlarging the scope of SLE modeling.
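The kind of equilibrium condition such an algorithm solves can be illustrated with the classical simplified SLE relation ln(x_i * gamma_i) = -(dHfus_i / R)(1/T - 1/Tm_i) for a component melting from a pure solid. The sketch below is not the Crystal-T procedure itself (which also treats solid-phase non-ideality); gamma would come from a model like UNIFAC, and all numbers are illustrative.

```python
# Hedged sketch: solve the simplified liquidus condition for the
# temperature at which a liquid of composition x coexists with the
# pure solid of component i (the "last crystal melts" condition,
# assuming an ideal pure solid phase).
import math
from scipy.optimize import brentq

R = 8.314  # J/(mol K)

def liquidus_T(x, Tm, dHfus, gamma=lambda x, T: 1.0):
    f = lambda T: math.log(x * gamma(x, T)) + dHfus / R * (1 / T - 1 / Tm)
    return brentq(f, 150.0, Tm)   # the liquidus lies below Tm for x < 1

# Toy usage: x = 0.8, Tm = 338 K, dHfus = 90 kJ/mol (tripalmitin-like)
print(liquidus_T(0.8, 338.0, 90e3))   # ~336 K
```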

Relevance: 20.00%

Abstract:

The acute-phase response modifies high-density lipoprotein (HDL) into a dysfunctional particle that may favor oxidative/inflammatory stress and eNOS dysfunction. The present study investigated the impact of this phenomenon on patients presenting ST-elevation myocardial infarction (STEMI). Plasma was obtained from 180 consecutive patients within the first 24 h of onset of STEMI symptoms (D1) and after 5 days (D5). Nitrate/nitrite (NOx) was measured, and lipoproteins were isolated by gradient ultracentrifugation. The oxidizability of low-density lipoprotein incubated with HDL (HDLaoxLDL) and the HDL self-oxidizability (HDLautox) were measured after CuSO4 co-incubation. The anti-inflammatory activity of HDL was estimated by VCAM-1 secretion by human umbilical vein endothelial cells after incubation with TNF-α. Flow-mediated dilation (FMD) was assessed on the 30th day (D30) after STEMI. Among patients in the first tertile of admission HDL-cholesterol (<33 mg/dL), the increment of NOx from D1 to D5 [6.7(2; 13) vs. 3.2(-3; 10) vs. 3.5(-3; 12); p = 0.001] and the FMD adjusted for multiple covariates [8.4(5; 11) vs. 6.1(3; 10) vs. 5.2(3; 10); p = 0.001] were higher than in those in the second (33-42 mg/dL) or third (>42 mg/dL) tertiles, respectively. From D1 to D5, there was a decrease in HDL size (-6.3 ± 0.3%; p < 0.001) and particle number (-22.0 ± 0.6%; p < 0.001), as well as an increase in both HDLaoxLDL (33%(23); p < 0.001) and HDLautox (65%(25); p < 0.001). VCAM-1 secretion after TNF-α stimulation was reduced after co-incubation with HDL from healthy volunteers (-24%(33); p = 0.009) and from MI patients at D1 (-23%(37); p = 0.015) and at D30 (-22%(24); p = 0.042), but not at D5 (p = 0.28). During STEMI, high HDL-cholesterol is thus associated with a greater decline in endothelial function. In parallel, structural and functional changes in HDL occur, reducing its anti-inflammatory and anti-oxidant properties.

Relevance: 20.00%

Abstract:

PURPOSE: To compare the Full Threshold (FT) and SITA Standard (SS) strategies in glaucomatous patients undergoing automated perimetry for the first time. METHODS: Thirty-one glaucomatous patients who had never undergone perimetry underwent automated perimetry (Humphrey, program 30-2) with both FT and SS on the same day, with an interval of at least 15 minutes between tests. The order of the examinations was randomized, and only one eye per patient was analyzed. Three analyses were performed: a) all examinations, regardless of the order of application; b) only the first examinations; c) only the second examinations. To calculate the sensitivity of both strategies, the following criteria were used to define abnormality: glaucoma hemifield test (GHT) outside normal limits, pattern standard deviation (PSD) <5%, or a cluster of 3 adjacent points with p<5% in the pattern deviation probability plot. RESULTS: When the results of all examinations were analyzed regardless of the order in which they were performed, the number of depressed points with p<0.5% in the pattern deviation probability map was significantly greater with SS (p=0.037), and the sensitivities were 87.1% for SS and 77.4% for FT (p=0.506). When only the first examinations were compared, there were no statistically significant differences in the number of depressed points, but the sensitivity of SS (100%) was significantly greater than that of FT (70.6%) (p=0.048). When only the second examinations were compared, there were no statistically significant differences in the number of depressed points, and the sensitivities of SS (76.5%) and FT (85.7%) did not differ significantly (p=0.664). CONCLUSION: SS may have a higher sensitivity than FT in glaucomatous patients undergoing automated perimetry for the first time. However, this difference tends to disappear in subsequent examinations.

Relevance: 20.00%

Abstract:

The network of HIV counseling and testing centers in São Paulo, Brazil, is a major source of data used to build epidemiological profiles of the client population. We examined HIV-1 incidence from November 2000 to April 2001, comparing epidemiological and socio-behavioral data of recently infected individuals with those of individuals with long-standing infection. A less-sensitive ELISA was employed to identify recent infection. The overall incidence of HIV-1 infection was 0.53/100/year (95% CI: 0.31-0.85/100/year): 0.77/100/year for males (95% CI: 0.42-1.27/100/year) and 0.22/100/year for females (95% CI: 0.05-0.59/100/year). Overall HIV-1 prevalence was 3.2% (95% CI: 2.8-3.7%): 4.0% among males (95% CI: 3.3-4.7%) and 2.1% among females (95% CI: 1.6-2.8%). Recent infections accounted for 15% of the total (95% CI: 10.2-20.8%). Recent infection correlated with being younger and male (p = 0.019); accordingly, recent infection was more common among younger males and older females.

Relevance: 20.00%

Abstract:

Purpose: Potentially Inappropriate Medication (PIM) use in elderly people may be responsible for the development of Adverse Drug Reactions (ADRs) which, when severe, lead to hospital admissions. Objectives: To estimate the prevalence of elderly patients who had used PIM before being admitted to hospital, and to identify the risk factors and the hospitalizations related to ADRs arising from PIM. Methods: A descriptive, cross-sectional study was performed in the internal medicine ward of a teaching hospital (Brazil) in 2008. With the aid of a validated form, patients aged >= 60 years, with a length of hospital stay >= 24 hours, were interviewed about drugs taken prior to the hospital admission and the complaints/reasons for hospitalization. Results: 19.1% (59/308) of older patients had taken PIM before hospital admission, and in 4.9% there was a causal relation between the PIM taken and the complaint reported. The PIM responsible for admissions were: amiodarone, amitriptyline, cimetidine, clonidine, diazepam, digoxin, estrogen, fluoxetine, lorazepam, short-acting nifedipine, and propranolol. 47.0% of the clinical manifestations of PIM-related ADRs were dizziness, fatigue, digoxin toxicity, and erythema. Only polypharmacy was detected as a risk factor for the occurrence of PIM-related ADRs (p = 0.02). Conclusion: PIM use in elderly people is not a risk factor for ADR-related hospital admission. Probably, the severe ADRs that lead to hospitalizations of older people can be explained by idiosyncratic response or by the predisposition of these patients to develop adverse drug events, whether or not the drugs are classed as PIM.

Relevance: 20.00%

Abstract:

This work develops a method for solving ordinary differential equations, that is, initial-value problems, with solutions approximated by Legendre polynomials. An iterative procedure for adjusting the polynomial coefficients, based on a genetic algorithm, is developed. The procedure is applied to several examples, comparing its results with the best polynomial fit when numerical solutions by the traditional Runge-Kutta or Adams methods are available. The resulting algorithm provides reliable solutions even when numerical solutions are unavailable, that is, when the mass matrix is singular or the integration is unstable.
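A minimal sketch of the idea, assuming a toy problem and a bare-bones evolutionary loop (truncation selection plus Gaussian mutation) in place of the authors' genetic operators:

```python
# Illustrative sketch, not the authors' code: approximate the IVP
# y' = -y, y(0) = 1 on t in [0, 2] by a Legendre series whose
# coefficients are evolved to minimize the ODE residual.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 41)
s = t - 1.0                      # map [0, 2] onto the Legendre domain [-1, 1]

def fitness(c):
    """ODE residual at collocation points plus initial-condition penalty."""
    y, dy = L.legval(s, c), L.legval(s, L.legder(c))   # dt/ds = 1 here
    return np.mean((dy + y) ** 2) + 10.0 * (L.legval(-1.0, c) - 1.0) ** 2

def evolve(n_coef=8, pop=120, gens=400, sigma=0.1):
    P = rng.normal(scale=0.5, size=(pop, n_coef))
    for _ in range(gens):
        f = np.array([fitness(c) for c in P])
        elite = P[np.argsort(f)[: pop // 4]]            # truncation selection
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        kids = kids + rng.normal(scale=sigma, size=kids.shape)  # mutation
        P = np.vstack([elite, kids])
    return min(P, key=fitness)

c = evolve()
print(np.max(np.abs(L.legval(s, c) - np.exp(-t))))  # error vs exact e^(-t)
```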

Relevance: 20.00%

Abstract:

This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships, derived for different raining-type systems, between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground-truth datasets and a mean error of 0.244 mm h^-1 (PR) and -0.157 mm h^-1 (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain-area comparisons, the results showed that the proposed formulation is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. Among the other algorithms, GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed a rainfall distribution similar to that of the PR and S-Pol but with a bimodal distribution. Last, the five algorithms were evaluated during the TRMM Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind-flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated during the westerly period for rainfall rates above 5 mm h^-1. NESDIS(1) overestimated for both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closest to the observations during the easterly flow.
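The overall retrieval structure (screening, system-type classification, per-class empirical curve) might look like the following sketch; the thresholds and power-law coefficients are placeholders, not the paper's fitted values.

```python
# Hedged illustration of a screening + classification + empirical
# retrieval chain for 85-GHz polarization-corrected temperature (PCT).
# All numeric constants below are invented for the example.
import numpy as np

def retrieve_rain(pct85):
    """Map 85-GHz PCT (K) to surface rain rate (mm/h)."""
    pct85 = np.asarray(pct85, dtype=float)
    rain = np.zeros_like(pct85)
    raining = pct85 < 255.0                   # screening step
    convective = pct85 < 225.0                # crude system-type split
    strat = raining & ~convective
    rain[strat] = 0.05 * (255.0 - pct85[strat]) ** 1.2
    rain[convective] = 0.10 * (255.0 - pct85[convective]) ** 1.4
    return rain

print(retrieve_rain([270.0, 245.0, 210.0]))  # no rain / stratiform / convective
```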

Relevance: 20.00%

Abstract:

Context. B[e] supergiants are luminous, massive post-main-sequence stars exhibiting non-spherical winds, forbidden lines, and hot dust in a disc-like structure. The physical properties of their rich and complex circumstellar environments (CSEs) are not well understood, partly because these CSEs cannot be easily resolved at the large distances found for B[e] supergiants (typically ≳ 1 kpc). Aims. From mid-IR spectro-interferometric observations obtained with VLTI/MIDI, we seek to resolve and study the CSE of the Galactic B[e] supergiant CPD-57° 2874. Methods. For a physical interpretation of the observables (visibilities and spectrum) we use our ray-tracing radiative transfer code (FRACS), which is optimised for thermal spectro-interferometric observations. Results. Thanks to the short computing time required by FRACS (<10 s per monochromatic model), best-fit parameters and uncertainties for several physical quantities of CPD-57° 2874 were obtained, such as the inner dust radius, the relative flux contributions of the central source and of the dusty CSE, the dust temperature profile, and the disc inclination. Conclusions. The analysis of VLTI/MIDI data with FRACS allowed one of the first direct determinations of the physical parameters of the dusty CSE of a B[e] supergiant based on interferometric data and a full model-fitting approach. In a larger context, the study of B[e] supergiants is important for a deeper understanding of the complex structure and evolution of hot, massive stars.
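The model-fitting step can be illustrated generically: adjust a parametric brightness model until it reproduces the measured visibilities. In the sketch below a simple Gaussian disc stands in for the FRACS radiative-transfer model, and the data are synthetic.

```python
# Generic sketch of fitting a parametric model to mid-IR visibilities;
# not FRACS. A Gaussian disc of FWHM theta has analytic visibility
# V(B) = exp(-(pi * theta * B / lambda)^2 / (4 ln 2)).
import numpy as np
from scipy.optimize import least_squares

MAS = np.pi / 180 / 3600 / 1000               # one milliarcsecond in radians

def vis_gauss(theta_mas, baseline_m, lam_m):
    x = np.pi * theta_mas * MAS * baseline_m / lam_m
    return np.exp(-x**2 / (4 * np.log(2)))

B = np.array([30.0, 45.0, 60.0, 75.0])        # projected baselines (m)
lam = 10e-6                                   # mid-IR wavelength (m)
V_obs = vis_gauss(25.0, B, lam) + 0.02        # synthetic, slightly biased data
fit = least_squares(lambda p: vis_gauss(p[0], B, lam) - V_obs, x0=[10.0])
print(fit.x)                                  # recovered FWHM, roughly 25 mas
```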

Relevance: 20.00%

Abstract:

Prince Maximilian zu Wied's great exploration of coastal Brazil in 1815-1817 resulted in important collections of reptiles, amphibians, birds, and mammals, many of which were new species later described by Wied himself. The bulk of his collection was purchased for the American Museum of Natural History in 1869, although many "type specimens" had disappeared earlier. Wied carefully identified his localities but did not designate type specimens or type localities, which are taxonomic concepts that were not yet established. Information and manuscript names on a fraction (17 species) of his Brazilian reptiles and amphibians were transmitted by Wied to Prof. Heinrich Rudolf Schinz at the University of Zurich. Schinz included these species (credited to their discoverer "Princ. Max.") in the second volume of Das Thierreich ... (1822). Most are junior objective synonyms of names published by Wied. However, six of the 17 names used by Schinz predate Wied's own publications. Three were manuscript names never published by Wied because he determined the species to be previously known. (1) Lacerta vittata Schinz, 1822 (a nomen oblitum) = Lacerta striata sensu Wied (a misidentification, non Linnaeus nec sensu Merrem) = Kentropyx calcarata Spix, 1825, herein qualified as a nomen protectum. (2) Polychrus virescens Schinz, 1822 = Lacerta marmorata Linnaeus, 1758 (now Polychrus marmoratus). (3) Scincus cyanurus Schinz, 1822 (a nomen oblitum) = Gymnophthalmus quadrilineatus sensu Wied (a misidentification, non Linnaeus nec sensu Merrem) = Micrablepharus maximiliani (Reinhardt and Lutken, "1861" [1862]), herein qualified as a nomen protectum. Qualifying Scincus cyanurus Schinz, 1822, as a nomen oblitum also removes the problem of homonymy with the later-named Pacific skink Scincus cyanurus Lesson (= Emoia cyanura). The remaining three names used by Schinz are senior objective synonyms that take priority over Wied's names. (4) Bufo cinctus Schinz, 1822, is senior to Bufo cinctus Wied, 1823; both, however, are junior synonyms of Bufo crucifer Wied, 1821 = Chaunus crucifer (Wied). (5) Agama picta Schinz, 1822, is senior to Agama picta Wied, 1823, requiring a change of authorship for this poorly known species, to be known as Enyalius pictus (Schinz). (6) Lacerta cyanomelas Schinz, 1822, predates Teius cyanomelas Wied, 1824 (1822-1831), both nomina oblita. Wied's illustration and description show cyanomelas as apparently conspecific with the recently described but already well-known Cnemidophorus nativo Rocha et al., 1997, which is the valid name because of its qualification herein as a nomen protectum. The preceding specific name cyanomelas (as corrected in an errata section) is misspelled several ways in different copies of Schinz's original description ("cyanom las," "cyanomlas," and "cyanom"). Loosening, separation, and final loss of the last three letters of movable type in the printing chase probably account for the variant misspellings.

Relevance: 20.00%

Abstract:

The reduction of power loss in distribution systems (DSs) is a nonlinear, multiobjective problem. Service restoration in DSs is even harder computationally, since it additionally requires a solution in real time. Both DS problems are computationally complex: for large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. In addition, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) results in the proposed approach for solving DS problems on large-scale networks. Simulation results show that MEAN is able to find adequate restoration plans for a real DS with 3,860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN exhibits sublinear running time as a function of system size: tests with networks ranging from 632 to 5,166 switches indicate that MEAN can find network configurations corresponding to a power-loss reduction of 27.64% for very large networks while requiring relatively low running time.
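A minimal sketch of the subpopulation-table idea, with bit vectors standing in for the node-depth encoding and toy objective functions in place of power loss and switching cost; this is a loose reconstruction under stated assumptions, not the MEAN implementation.

```python
# Hedged sketch: one ranked table per objective; offspring survive in a
# table only while they are among its `size` best members.
import random

random.seed(3)
N = 20
base = [random.randint(0, 1) for _ in range(N)]         # current switch states

f_loss = lambda x: sum(x)                               # stand-in for power loss
f_ops = lambda x: sum(a != b for a, b in zip(x, base))  # switching operations

def evolve(gens=2000, size=15):
    tables = {f_loss: [], f_ops: []}                    # one table per objective
    for _ in range(gens):
        pool = [ind for t in tables.values() for ind in t]
        parent = random.choice(pool) if pool else base
        child = [b ^ (random.random() < 0.05) for b in parent]  # bit-flip mutation
        for f, table in tables.items():                 # table-wise insertion
            table.append(child)
            table.sort(key=f)
            del table[size:]                            # keep the `size` best
    return tables

tables = evolve()
best = tables[f_loss][0]
print(f_loss(best), f_ops(best))
```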

Relevance: 20.00%

Abstract:

We proposed a connection admission control (CAC) to monitor the traffic in a multi-rate WDM optical network. The CAC searches for the shortest path connecting source and destination nodes, assigns wavelengths with enough bandwidth to serve the requests, supervises the traffic in the most required nodes, and if needed activates a reserved wavelength to release bandwidth according to traffic demand. We used a scale-free network topology, which includes highly connected nodes ( hubs), to enhance the monitoring procedure. Numerical results obtained from computational simulations show improved network performance evaluated in terms of blocking probability.
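The admission flow described above might be sketched as follows; the topology, per-wavelength capacity, number of reserved wavelengths, and bandwidth values are all illustrative assumptions.

```python
# Hedged sketch of the CAC flow: shortest path by hop count, first-fit
# wavelength with enough residual bandwidth, and activation of a
# reserved wavelength when the active ones are exhausted.
from collections import deque

GRAPH = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
CAP = 40.0                                    # Gb/s per wavelength
links = {}                                    # link -> residual cap per wavelength

def residuals(u, v):
    # two active wavelengths per link; a third is held in reserve
    return links.setdefault((min(u, v), max(u, v)), [CAP, CAP])

def shortest_path(s, d):
    prev, seen, q = {}, {s}, deque([s])
    while q:
        u = q.popleft()
        if u == d:
            path = [d]
            while path[-1] != s:
                path.append(prev[path[-1]])
            return path[::-1]
        for w in GRAPH[u]:
            if w not in seen:
                seen.add(w)
                prev[w] = u
                q.append(w)
    return None

def admit(s, d, bw):
    path = shortest_path(s, d)
    edges = [residuals(u, v) for u, v in zip(path, path[1:])]
    for _attempt in range(2):                 # retry once after activation
        for w in range(min(len(e) for e in edges)):
            if all(e[w] >= bw for e in edges):
                for e in edges:
                    e[w] -= bw                # allocate on every hop
                return w
        for e in edges:                       # activate the reserved wavelength
            if len(e) < 3:
                e.append(CAP)
    return None                               # request blocked

print(admit(0, 3, 10.0), admit(0, 3, 35.0), admit(0, 3, 35.0))
```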

Relevance: 20.00%

Abstract:

The main objective of this paper is to relieve power system engineers of the burden of the complex and time-consuming process of power system stabilizer (PSS) tuning. To achieve this goal, the paper proposes an automatic procedure for computerized tuning of PSSs, based on an iterative process that uses a linear matrix inequality (LMI) solver to find the PSS parameters. The paper shows that PSS tuning can be written as a search problem over a non-convex feasible set. The proposed algorithm solves this feasibility problem using an iterative LMI approach and a suitable initial condition corresponding to a PSS designed for nominal operating conditions only (a quite simple task, since the required phase compensation is uniquely defined). Some knowledge about PSS tuning is also incorporated into the algorithm through the specification of bounds defining the allowable PSS parameters. The application of the proposed algorithm to a benchmark test system and the nonlinear simulation of the resulting closed-loop models demonstrate the efficiency of this algorithm.
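One building block of such a procedure is an LMI feasibility check. The sketch below poses a Lyapunov-type feasibility problem for a fixed closed-loop matrix using cvxpy; the paper's actual iteration couples the PSS parameters into these matrices, which this toy example omits.

```python
# Hedged sketch: one LMI feasibility subproblem of the kind an iterative
# LMI tuning loop solves repeatedly. Find P > 0 with A'P + PA < 0,
# certifying stability of a fixed (toy) closed-loop matrix A.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -0.5]])     # illustrative closed-loop matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print(prob.status)                            # 'optimal' means feasible
print(P.value)                                # a Lyapunov certificate
```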

Relevance: 20.00%

Abstract:

This paper discusses the integrated design of parallel manipulators, which exhibit varying dynamics; this characteristic affects machine stability and performance. The design methodology consists of four main steps: (i) modeling the system with flexible multibody techniques, (ii) synthesizing reduced-order models suitable for control design, (iii) systematically designing flexible-model-based input signals, and (iv) evaluating candidate machine designs. The novelty of this methodology is that it takes structural flexibilities into consideration during input-signal design, thereby enhancing the standard design process, which mainly considers rigid-body dynamics. The potential of the proposed strategy is demonstrated in the design evaluation of a two-degree-of-freedom high-speed parallel manipulator, and the results are experimentally validated.
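For step (iii), one standard flexibility-aware input-design technique (not necessarily the authors' exact method) is the zero-vibration (ZV) input shaper, sketched below for a single dominant flexible mode with illustrative frequency and damping values.

```python
# Textbook ZV input shaper: two impulses whose amplitudes and timing
# cancel residual vibration of one mode (wn, zeta). Parameters are
# illustrative, not taken from the paper.
import numpy as np

def zv_shaper(wn, zeta):
    """Impulse amplitudes and times for a zero-vibration shaper."""
    wd = wn * np.sqrt(1 - zeta**2)            # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    return np.array([1.0, K]) / (1 + K), np.array([0.0, np.pi / wd])

def shape(u, dt, wn, zeta):
    """Convolve a raw command u (sampled at dt) with the ZV impulses."""
    amps, times = zv_shaper(wn, zeta)
    y = np.zeros(len(u) + int(round(times[-1] / dt)))
    for a, t in zip(amps, times):
        k = int(round(t / dt))
        y[k:k + len(u)] += a * u              # delayed, scaled copies
    return y

u = np.ones(100)                              # unit step command
print(shape(u, 1e-3, wn=2 * np.pi * 8, zeta=0.02)[:5])
```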