960 results for Module Maximum
Abstract:
Rural electrification is characterized by geographical dispersion of the population, low consumption, high investment per consumer, and high cost. Solar radiation, on the other hand, is an inexhaustible source of energy, and photovoltaic panels are used to convert it into electricity. In this study, the manufacturer's equations for the current and power of small photovoltaic systems were adjusted to field conditions. The mathematical analysis was performed on the I-100 rural photovoltaic system from ISOFOTON, rated at 300 Wp, located at the Lageado Experimental Farm of FCA/UNESP. To develop these equations, the circuitry of the photovoltaic cells was studied and iterative numerical methods were applied to determine the electrical parameters and to quantify the errors incurred when fitting the equations from the literature to real conditions. A simulation of a photovoltaic panel was then proposed through mathematical equations adjusted to local radiation data. The resulting equations give the user realistic answers and may assist in the design of such systems, since the calculated maximum power limit ensures the supply of the energy generated. This realistic sizing helps establish the possible applications of solar energy for rural producers and informs them of the actual possibilities of generating electricity from the sun.
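The iterative determination of electrical parameters described above typically revolves around the implicit single-diode equation for a PV module. As a minimal sketch, assuming the standard single-diode model with purely illustrative parameter values (not those of the ISOFOTON I-100 system), the module current at a given voltage can be solved by Newton iteration:

```python
import math

def panel_current(v, i_ph=3.5, i_0=1e-9, n=1.3, n_s=36, r_s=0.5, r_sh=200.0, t=298.15):
    """Solve the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    for the module current I at terminal voltage V, by Newton iteration.
    All parameter values here are illustrative placeholders."""
    vt = 1.380649e-23 * t / 1.602176634e-19  # thermal voltage kT/q (~25.7 mV)
    a = n * n_s * vt                         # modified diode ideality factor (V)
    i = i_ph                                 # initial guess: the photo-current
    for _ in range(100):
        e = math.exp((v + i * r_s) / a)
        f = i_ph - i_0 * (e - 1) - (v + i * r_s) / r_sh - i   # residual
        df = -i_0 * (r_s / a) * e - r_s / r_sh - 1            # d(residual)/dI
        step = f / df
        i -= step
        if abs(step) < 1e-12:
            break
    return i
```

Scanning `v * panel_current(v)` over the I-V curve then locates the maximum power point that the sizing discussion relies on.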
Abstract:
Warm-season grasses are economically important for cattle production in tropical regions, and tools to aid in the management of and research on these forages would be highly beneficial to both research and industry. This research was conducted to adapt the CROPGRO-Perennial Forage model to simulate growth of the tropical species guineagrass (Panicum maximum Jacq. cv. 'Tanzania') and to describe the model adaptation for this species. To develop the CROPGRO parameters for this species, we began with values and relationships reported in the literature. Some parameters and relationships were calibrated by comparison with observed growth, development, dry matter accumulation, and partitioning during a 17-mo experiment with Tanzania guineagrass in Piracicaba, SP, Brazil. Compared with the starting parameters for palisadegrass [Brachiaria brizantha (A. Rich.) Stapf. cv. 'Xaraes'], dormancy effects of the perennial forage model had to be minimized, partitioning to storage tissue and root decreased, and partitioning to leaf and stem increased to provide for more leaf and stem growth and less root growth. Parameters affecting specific leaf area and senescence of plant tissues were also improved. After these changes were made to the model, biomass accumulation was better simulated: mean predicted herbage yield was 6576 kg ha(-1), averaged across 11 regrowth cycles of 35 (summer) or 63 d (winter), with an RMSE of 494 kg ha(-1) (Willmott's index of agreement d = 0.985, simulated/observed ratio = 1.014). The model also gave good predictions against an independent data set, with similar RMSE, ratio, and d. These results suggest that the CROPGRO model is an effective tool for integrating the physiological aspects of guineagrass and can be used to simulate its growth.
Abstract:
This study performed an exploratory analysis of the anthropometric and morphological muscle variables related to one-repetition maximum (1RM) performance, and assessed the capacity of these variables to predict force production. Fifty active males underwent the experimental procedures: vastus lateralis muscle biopsy, quadriceps magnetic resonance imaging, body mass assessment, and a 1RM test in the leg-press exercise. K-means cluster analysis was performed on body mass, the summed cross-sectional area of the left and right quadriceps (Sigma CSA), the percentage of type II fibers, and 1RM performance. The number of clusters was defined a priori, and the clusters were labeled as the high strength performance (HSP1RM) group and the low strength performance (LSP1RM) group. Stepwise multiple regressions were performed with body mass, Sigma CSA, percentage of type II fibers, and cluster membership as predictor variables and 1RM performance as the response variable. The cluster means +/- SD were 292.8 +/- 52.1 kg, 84.7 +/- 17.9 kg, 19249.7 +/- 1645.5 mm(2), and 50.8 +/- 7.2% for the HSP1RM, and 254.0 +/- 51.1 kg, 69.2 +/- 8.1 kg, 15483.1 +/- 1104.8 mm(2), and 51.7 +/- 6.2% for the LSP1RM, in 1RM, body mass, Sigma CSA, and type II fiber percentage, respectively. The most important variable in the cluster division was the Sigma CSA. In addition, the Sigma CSA and type II fiber percentage explained the variance in 1RM performance for all participants (Adj R-2 = 0.35, p = 0.0001) and for the LSP1RM (Adj R-2 = 0.25, p = 0.002). For the HSP1RM, only the Sigma CSA entered the model, and it showed the highest capacity to explain the variance in 1RM performance (Adj R-2 = 0.38, p = 0.01). In conclusion, muscle CSA was the most relevant variable for predicting force production in individuals with no strength training background.
Abstract:
Background: Equations to predict maximum heart rate (HRmax) in heart failure (HF) patients receiving beta-adrenergic blocking (BB) agents do not consider the cause of HF. We determined equations to predict HRmax in patients with ischemic and nonischemic HF receiving BB therapy. Methods and Results: Using treadmill cardiopulmonary exercise testing, we studied HF patients receiving BB therapy who were being considered for transplantation from 1999 to 2010. Exclusion criteria were pacemaker and/or implantable defibrillator, left ventricular ejection fraction (LVEF) >50%, peak respiratory exchange ratio (RER) <1.00, and Chagas disease. We used linear regression to predict HRmax from age in ischemic and nonischemic patients. We analyzed 278 patients, aged 47 +/- 10 years, with ischemic (n = 75) and nonischemic (n = 203) HF. LVEF was 30.8 +/- 9.4% and 28.6 +/- 8.2% (P = .04), peak VO2 16.9 +/- 4.7 and 16.9 +/- 5.2 mL kg(-1) min(-1) (P = NS), and HRmax 130.8 +/- 23.3 and 125.3 +/- 25.3 beats/min (P = .051) in ischemic and nonischemic patients, respectively. We devised the equation HRmax = 168 - 0.76 x age (R-2 = 0.095; P = .007) for ischemic HF patients, but there was no significant relationship between age and HRmax in nonischemic HF patients (R-2 = 0.006; P = NS). Conclusions: Our study suggests that equations to estimate HRmax should consider the cause of HF. (J Cardiac Fail 2012;18:831-836)
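The regression reported for ischemic patients is straightforward to apply; a minimal sketch (the function name is ours):

```python
def predicted_hr_max_ischemic(age_years: float) -> float:
    """HRmax (beats/min) predicted for ischemic HF patients on beta-blocker
    therapy, using the regression reported in the abstract:
        HRmax = 168 - 0.76 x age
    Note the low R^2 (0.095): individual scatter around this line is large."""
    return 168.0 - 0.76 * age_years

# At the cohort's mean age of 47 years this predicts about 132 beats/min,
# consistent with the observed mean HRmax of 130.8 +/- 23.3 in the ischemic group.
```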
Abstract:
The thermal limits of individual animals were originally proposed as a link between animal physiology and thermal ecology. Although this link is valid in theory, evaluating physiological tolerances involves some problems, which are the focus of this study. One rationale was that heating rates should influence upper critical limits, so ecological thermal limits need to account for experimental heating rates. In addition, if thermal limits are not surpassed in an experiment, subsequent tests of the same individual should yield similar results or produce evidence of hardening. Finally, several uncontrolled variables, such as time under experimental conditions and handling procedures, may affect the results. To analyze these issues we conducted an integrative study of upper critical temperatures in a single species, the ant Atta sexdens rubropilosa, an animal model providing large numbers of individuals of diverse sizes but similar genetic makeup. Our specific aims were to test 1) the influence of heating rates on the experimental evaluation of upper critical temperature, 2) the assumptions of absence of physical damage and of reproducibility, and 3) sources of variance often overlooked in the thermal-limits literature; and 4) to introduce some experimental approaches that may help researchers separate physiological from methodological issues. The upper thermal limits were influenced by both heating rate and body mass. In the latter case, the effect was physiological rather than methodological. The critical temperature decreased during subsequent tests performed on the same individual ants, even one week after the initial test. Accordingly, upper thermal limits may have been overestimated by our (and typical) protocols. Heating rates, body mass, temperature-independent procedures, and other variables may affect the estimation of upper critical temperatures.
Therefore, based on our data, we offer suggestions to enhance the quality of measurements and recommendations for authors aiming to compile and analyze databases from the literature.
Abstract:
In this study we analyzed the phylogeographic pattern and historical demography of an endemic Atlantic Forest (AF) bird, Basileuterus leucoblepharus, and tested the influence of the last glacial maximum (LGM) on its effective population size using coalescent simulations. We addressed two main questions: (i) Does B. leucoblepharus present population genetic structure congruent with the patterns observed for other AF organisms? (ii) How did the LGM affect the effective population size of B. leucoblepharus? We sequenced 914 bp of the mitochondrial gene cytochrome b and 512 bp of nuclear intron 5 of beta-fibrinogen from 62 individuals from 15 localities along the AF. Both molecular markers revealed no genetic structure in B. leucoblepharus. Neutrality tests based on both loci showed significant demographic expansion. The extended Bayesian skyline plot showed that the species seems to have experienced demographic expansion starting around 300,000 years ago, during the late Pleistocene. This date does not coincide with the LGM, and the population-size dynamics showed stability during the LGM. To further test the effect of the LGM on this species, we simulated seven demographic scenarios to explore whether populations suffered specific bottlenecks. The scenarios most congruent with our data were population stability during the LGM with bottlenecks older than this period. This is the first example of an AF organism that does not show phylogeographic breaks caused by vicariant events associated with climate change and geotectonic activity in the Quaternary. Differences in ecological and environmental tolerances and in habitat requirements possibly influence the different evolutionary histories of these organisms. Our results show that the history of organism diversification in this megadiverse Neotropical forest is complex. Crown Copyright (c) 2012 Published by Elsevier Inc. All rights reserved.
Abstract:
The clustering problem consists of finding patterns in a data set in order to divide it into clusters with high within-cluster similarity. This paper studies a problem, here called the MMD problem, which aims at finding a clustering with a predefined number of clusters that minimizes the largest within-cluster distance (diameter) among all clusters. The paper has two main objectives: to propose heuristics for the MMD problem, and to evaluate how well the results of the best proposed heuristic match the real classification of some data sets. Regarding the first objective, the experimental results indicate good performance of the best proposed heuristic, which outperformed the Complete Linkage algorithm (the method from the literature most widely used for this problem). Regarding the match between the results and the real classification of the data sets, however, the proposed heuristic achieved better-quality results than the C-Means algorithm, but worse than Complete Linkage.
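The MMD objective itself is easy to state in code. A minimal sketch for evaluating a candidate partition follows; the heuristics proposed in the paper are not specified in the abstract, so only the objective function they minimize is shown:

```python
from itertools import combinations
import math

def diameter(points):
    """Largest pairwise Euclidean distance within one cluster
    (0.0 for a singleton cluster)."""
    return max((math.dist(a, b) for a, b in combinations(points, 2)), default=0.0)

def mmd_objective(clusters):
    """MMD objective: the maximum within-cluster diameter over all clusters.
    A clustering heuristic for this problem seeks the partition, with a fixed
    number of clusters, that makes this value as small as possible."""
    return max(diameter(c) for c in clusters)
```

For example, for the partition [[(0,0), (1,0)], [(5,5), (5,7), (5,6)]] the cluster diameters are 1.0 and 2.0, so the objective value is 2.0.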
Abstract:
Brazilian design code ABNT NBR6118:2003 - Design of Concrete Structures - Procedures [1] proposes the use of simplified models for the consideration of non-linear material behavior in the evaluation of horizontal displacements in buildings. These models penalize the stiffness of columns and beams, representing the effects of concrete cracking and avoiding costly physically non-linear analyses. The objectives of the present paper are to investigate the accuracy and uncertainty of these simplified models, and to evaluate the reliability of structures designed following ABNT NBR6118:2003 [1] against the serviceability limit state for horizontal displacements. Model error statistics are obtained from 42 representative plane frames. The reliabilities of three typical (4-, 8- and 12-floor) buildings are evaluated using the simplified models and a rigorous, physically and geometrically non-linear analysis. Results show that the 70/70 (column/beam stiffness reduction) model is more accurate and less conservative than the 80/40 model. Results also show that the ABNT NBR6118:2003 [1] design criteria for horizontal displacement limit states (masonry damage according to ACI 435.3R-68 (1984) [10]) are conservative, and result in reliability indexes larger than those recommended in EUROCODE [2] for irreversible serviceability limit states.
Abstract:
OBJECTIVE: To define and compare the number and types of occlusal contacts in maximum intercuspation. METHODS: The study consisted of clinical and photographic analysis of occlusal contacts in maximum intercuspation. Twenty-six Caucasian Brazilian subjects (20 males and 6 females, aged between 12 and 18 years) were selected before orthodontic treatment. The subjects were diagnosed and grouped as follows: 13 with Angle Class I malocclusion and 13 with Angle Class II Division 1 malocclusion. After analysis, the occlusal contacts were classified according to established criteria as: tripodism, bipodism, or monopodism (respectively three, two, or one contact point with the slope of the fossa); cusp to one marginal ridge; cusp to two marginal ridges; cusp tip to opposing inclined plane; surface to surface; and edge to edge. RESULTS: The mean number of occlusal contacts per subject was 43.38 for Class I malocclusion and 44.38 for Class II Division 1 malocclusion; this difference was not statistically significant (p>0.05). CONCLUSIONS: A variety of factors influence the number of occlusal contacts in Class I and Class II Division 1 malocclusions. There is no standard occlusal contact type associated with the studied malocclusions. A proper selection of occlusal contact types, such as cusp to fossa or cusp to marginal ridge, and of their location on the teeth should be defined individually according to the demands of each case. Adequate occlusal contact leads to a correct distribution of forces, promoting periodontal health.
Abstract:
To interpret the mean depth of cosmic-ray air-shower maximum and its dispersion, we parametrize these two observables as functions of the first two moments of the lnA distribution. We examine the goodness of this simple method through simulations of test mass distributions. Applying the parameterization to Pierre Auger Observatory data allows one to study the energy dependence of the mean lnA and of its variance under the assumption of selected hadronic interaction models. We discuss possible implications of these dependences in terms of interaction models and astrophysical cosmic-ray sources.
Abstract:
The most typical maximum tests for measuring leg muscle performance are the one-repetition maximum leg press test (1RMleg) and the isokinetic knee extension/flexion (IKEF) test. Nevertheless, their inter-correlations have not been well documented, particularly with regard to the values predicted from these evaluations. This correlational and regression analysis study involved 30 healthy young males aged 18-24 y who performed both tests. Pearson's product-moment correlations between 1RMleg and IKEF varied from 0.20 to 0.69, and the most accurate prediction was that of 1RMleg (R2 = 0.71). The study showed correlations between 1RMleg and IKEF even though these tests are different (isotonic vs. isokinetic), and provided further support for the cross-prediction of 1RMleg and IKEF by linear and multiple linear regression analysis.
Abstract:
This work deals with some classes of linear second-order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in RN, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {Xj}j=1,...,m is the control distance obtained by minimizing the time needed to go between two points along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in RN. This study is divided into three parts (each corresponding to a chapter), and the subject of each is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second-order differential operators for which we only assume a Sobolev-type inequality together with summability of the lower-order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case RN is the support of a Lie group, and moreover we require that the vector fields be left-invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity for mean-value operators of L-subharmonic functions, where L is our differential operator.
In the third chapter we prove a necessary and sufficient condition for the regularity of boundary points for the Dirichlet problem on an open subset of RN, related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator and introduce the notion of "quasi-boundedness". We then show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of boundary points.
Abstract:
Several MCAO systems are under study to improve the angular resolution of current and future-generation large ground-based telescopes (diameters in the 8-40 m range). The subject of this PhD thesis is embedded in this context. Two MCAO systems, in different realization phases, are addressed in this thesis: NIRVANA, the 'double' MCAO system designed for one of the interferometric instruments of LBT, is in the integration and testing phase; MAORY, the future E-ELT MCAO module, is under preliminary study. These two systems tackle the sky coverage problem in two different ways. The layer-oriented approach of NIRVANA, coupled with multi-pyramid wavefront sensors, takes advantage of the optical co-addition of the signal coming from up to 12 NGS in an annular 2' to 6' technical FoV and up to 8 in the central 2' FoV. Summing the light coming from many natural sources makes it possible to increase the limiting magnitude of the single NGS and to improve the sky coverage considerably. One of the two wavefront sensors for the mid-high-altitude atmosphere analysis was integrated and tested as a stand-alone unit in the laboratory at INAF-Osservatorio Astronomico di Bologna and afterwards delivered to the MPIA laboratories in Heidelberg, where it was integrated and aligned to the post-focal optical relay of one LINC-NIRVANA arm. A number of tests were performed in order to characterize and optimize the system functionalities and performance. A report on this work is presented in Chapter 2. In the MAORY case, to ensure correction uniformity and sky coverage, the LGS-based approach is the current baseline. However, since the Sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture.
On a 30-40 m class telescope, for instance, the maximum elongation varies between a few arcsec and 10 arcsec, depending on the actual telescope diameter, on the Sodium layer properties, and on the laser launcher position. The centroiding error in a Shack-Hartmann WFS increases proportionally to the elongation (in a photon-noise-dominated regime), strongly limiting the performance. To compensate for this effect, a straightforward solution is to increase the laser power, i.e. to increase the number of detected photons per subaperture. The scope of Chapter 3 is twofold: an analysis of the performance of three different algorithms (Weighted Center of Gravity, Correlation, and Quad-cell) for the instantaneous measurement of the LGS image position in the presence of elongated spots, and the determination of the number of photons required to achieve a given average wavefront error over the telescope aperture. An alternative optical solution to the spot elongation problem is proposed in Section 3.4. Starting from the considerations presented in Chapter 3, a first-order analysis of the LGS WFS for MAORY (number of subapertures, number of detected photons per subaperture, RON, focal-plane sampling, subaperture FoV) is the subject of Chapter 4. An LGS WFS laboratory prototype was designed to reproduce the relevant aspects of an LGS SH WFS for the E-ELT and to evaluate the performance of different centroid algorithms in the presence of elongated spots, as investigated numerically and analytically in Chapter 3. This prototype makes it possible to simulate realistic Sodium profiles. A full testing plan for the prototype is set out in Chapter 4.
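Among the centroid algorithms compared in Chapter 3, the (weighted) center of gravity is the simplest to state. A minimal sketch for a single subaperture image follows; the choice of weighting map is an assumption here (in practice it would typically match the expected spot shape), and with no weights the function reduces to the plain center of gravity:

```python
import numpy as np

def weighted_cog(img, weights=None):
    """Centroid (x, y) of a 2-D spot image, computed as the intensity-weighted
    mean of the pixel coordinates. An optional weighting map of the same shape
    emphasizes pixels where the spot is expected; weights=None gives the plain
    center of gravity."""
    img = np.asarray(img, dtype=float)
    w = img if weights is None else img * np.asarray(weights, dtype=float)
    ys, xs = np.indices(img.shape)   # row (y) and column (x) index grids
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total
```

For an elongated LGS spot, the error of this estimator grows with the spot's extent along the elongation axis, which is the effect quantified in Chapter 3.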
Abstract:
Improvement of the performance of the mono-compartmental maximum slope model through the introduction of systems for eliminating outliers.
Abstract:
This thesis work was carried out at DTU, the Technical University of Denmark, in the Department of Energy Conversion and Storage, Risø Campus. The purpose of the stay abroad was to characterize suitable thermoelectric modules, supplied by companies in the sector, using an appropriate characterization apparatus. The latter is known as a "module test system" and, in this specific case, was supplied by PANCO GmbH, a company also active in the field of thermoelectric technologies. Starting from a theoretical study of the physical phenomena involved (the Seebeck effect for thermoelectric power generation), the main characteristics and elements of the "module test system" were then analyzed. Following this first phase of analysis, experiments were conducted which, with the help of computational models implemented in the Comsol Multiphysics software, made it possible to study the reliability of the characterization system. Finally, once the foundations needed for a correct understanding of the physical phenomena and of the characteristics of the instrumentation had been acquired, commercial thermoelectric modules were analyzed. In particular, data such as currents, voltages, and temperature gradients were extracted, from which the heat fluxes, efficiencies, and powers that characterize the module under operating conditions could be derived. The results obtained were then compared with the data provided by the manufacturer in its catalog.