852 results for Weighted average power tests
Abstract:
Electromagnetic coupling phenomena between overhead power transmission lines and other nearby structures are inevitable, especially in densely populated areas. The undesired effects resulting from this proximity are manifold, ranging from the establishment of hazardous potentials to the onset of alternating-current corrosion. The study of this class of problems is necessary to ensure safety in the vicinity of the interaction zone and to preserve the integrity of the equipment and devices present there. However, the complete modeling of this type of application requires a three-dimensional representation of the region of interest and specific numerical methods for field computation. In this work, the modeling of problems arising from the flow of electrical currents in the ground (the so-called conductive coupling) is addressed with the finite element method. Problems resulting from the time variation of the electromagnetic fields (the so-called inductive coupling) are considered as well and are treated with the generalized PEEC (Partial Element Equivalent Circuit) method. More specifically, a special boundary condition on the electric potential is proposed for truncating the computational domain in the finite element analysis of conductive coupling problems, and a complete PEEC formulation for modeling inductive coupling problems is presented. Test configurations of increasing complexity are considered to validate these approaches. This work aims to contribute to the modeling of this class of problems, which tends to become more common as power grids expand.
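As an illustration of the kind of domain truncation involved (a standard textbook condition, not necessarily the specific one proposed in this work), a first-order asymptotic boundary condition for an electric potential that decays roughly as $1/r$ away from the current injection region is

$$\frac{\partial V}{\partial n} + \frac{V}{r} = 0 \quad \text{on } \Gamma_{\infty},$$

where $\Gamma_{\infty}$ is the artificial outer boundary of the finite element mesh, $r$ the distance from the source region and $n$ the outward normal. A Robin-type condition of this form lets the mesh be truncated much closer to the region of interest than a simple homogeneous Dirichlet condition $V = 0$ would allow.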
Abstract:
PCDD/F emissions from three light-duty diesel vehicles (two vans and a passenger car) have been measured in on-road conditions. We propose a new methodology for small vehicles: a sample of exhaust gas is collected by means of equipment based on United States Environmental Protection Agency (U.S. EPA) method 23A for stationary stack emissions. The concentrations of O2, CO, CO2, NO, NO2 and SO2 have also been measured. Six tests were carried out at 90-100 km/h on a route 100 km long. Two additional tests were done during the first 10 minutes and the following 60 minutes of the run to assess the effect of the engine temperature on PCDD/F emissions. The emission factors obtained for the vans ranged from 1800 to 8400 pg I-TEQ/Nm3 for the 2004 model-year van and from 490 to 580 pg I-TEQ/Nm3 for the 2006 model-year van. Regarding the passenger car, one run was done with a catalyst and another without, yielding emission factors (330-880 pg I-TEQ/Nm3) comparable to those of the newer van. Two other tests were carried out on a power generator, yielding emission factors ranging from 31 to 78 pg I-TEQ/Nm3. All the results are discussed and compared with the literature.
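For context, the I-TEQ values quoted here follow the usual toxic equivalency convention, in which each measured PCDD/F congener concentration is weighted by its international toxic equivalency factor:

$$\text{I-TEQ} = \sum_{i} C_i \cdot \text{I-TEF}_i,$$

where $C_i$ is the concentration of congener $i$ (here in pg/Nm3) and $\text{I-TEF}_i$ its toxicity relative to 2,3,7,8-TCDD.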
Abstract:
Most models in classical statistics rest on an assumption about the distribution of the data, or about a distribution underlying the data. The validity of this assumption is what allows one to draw inferences, construct confidence intervals, or test the reliability of the model. Goodness-of-fit testing is concerned with checking the conformity, or consistency, of this assumption with the available data. In this thesis, we propose goodness-of-fit tests for normality in the setting of univariate and vector time series. We restrict ourselves to a class of linear time series, namely autoregressive moving average models (ARMA, or VARMA in the vector case). First, in the univariate case, we propose a generalization of the work of Ducharme and Lafaye de Micheaux (2004) to the case where the mean is unknown and must be estimated. We estimate the parameters by a method that is rarely used in the literature and yet asymptotically efficient: we rigorously show that the estimator proposed by Brockwell and Davis (1991, Section 10.8) converges almost surely to the true unknown parameter value. In addition, we provide a rigorous proof of the invertibility of the variance-covariance matrix of the test statistic, based on certain linear algebra properties. The result also applies to the case where the mean is assumed known and equal to zero. Finally, we propose an AIC-type method for selecting the dimension of the family of alternatives and study its asymptotic properties. The tool proposed here is based on a specific family of orthogonal polynomials, namely the Legendre polynomials. Second, in the vector case, we propose a goodness-of-fit test for autoregressive moving average models with a structured parameterization. The structured parameterization makes it possible to reduce the large number of parameters in these models, or to take particular constraints into account; this project includes the standard case with no parameterization. The proposed test applies to an arbitrary family of orthogonal functions, which we illustrate in the particular cases of the Legendre and Hermite polynomials. In the Hermite case, we show that the resulting test is invariant under affine transformations and is in fact a generalization of many tests existing in the literature. This second project can be seen as a generalization of the first in three directions: the passage from the univariate to the multivariate setting; the choice of an arbitrary family of orthogonal functions; and the possibility of specifying relations or constraints in the VARMA formulation. For each project we conducted a simulation study to assess the level and power of the proposed tests and to compare them with existing tests, and applications to real data are provided. We applied the tests to forecasts of the mean annual global temperature (univariate case) and to Canadian labour market data (bivariate case). This work has been presented at several conferences (see, for example, Tagne, Duchesne and Lafaye de Micheaux (2013a, 2013b, 2014) for more details).
An article based on the first project has also been submitted to a peer-reviewed journal (see Duchesne, Lafaye de Micheaux and Tagne (2016)).
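For readers unfamiliar with smooth goodness-of-fit tests of this type, a generic Neyman-type statistic built from the first $K$ normalized Legendre polynomials $\pi_1, \dots, \pi_K$ on $[0, 1]$ takes the form (an illustrative sketch of the general family, not the exact statistic of the thesis, which must further account for the estimation of the ARMA parameters):

$$\hat{S}_K = \sum_{k=1}^{K} \left( \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \pi_k(\hat{u}_t) \right)^2, \qquad \hat{u}_t = \Phi\!\left(\hat{e}_t / \hat{\sigma}\right),$$

where $\hat{e}_t$ are the model residuals, $\Phi$ is the standard normal distribution function, and the dimension $K$ of the family of alternatives is the quantity selected by the AIC-type rule mentioned above. With known parameters, $\hat{S}_K$ is asymptotically $\chi^2_K$ under normality; with estimated parameters the covariance matrix of its components must be adjusted, which is why its invertibility matters.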
Abstract:
A new Stata command called -mgof- is introduced. The command is used to compute distributional tests for discrete (categorical, multinomial) variables. Apart from classical large-sample $\chi^2$-approximation tests based on Pearson's $X^2$, the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read 1984), large-sample tests for complex survey designs and exact tests for small samples are supported. The complex survey correction is based on the approach by Rao and Scott (1981) and parallels the survey design correction used for independence tests in -svy:tabulate-. The exact tests are computed using Monte Carlo methods or exhaustive enumeration. An exact Kolmogorov-Smirnov test for discrete data is also provided.
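For reference, the power-divergence family of Cressie and Read (1984) underlying these tests is

$$2nI^{\lambda} = \frac{2}{\lambda(\lambda+1)} \sum_{i=1}^{k} O_i \left[ \left( \frac{O_i}{E_i} \right)^{\lambda} - 1 \right],$$

where $O_i$ and $E_i$ are the observed and expected counts in category $i$: $\lambda = 1$ recovers Pearson's $X^2$, and the limit $\lambda \to 0$ recovers the likelihood-ratio statistic $G^2$.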
Abstract:
An inflatable drill-string packer was used at Site 839 to measure the bulk in-situ permeability within basalts cored in Hole 839B. The packer was inflated at two depths, 398.2 and 326.9 mbsf; all on-board information indicated that the packer mechanically closed off the borehole, although apparently the packer hydraulically sealed the borehole only at 398.2 mbsf. Two pulse tests were run at each depth, two constant-rate injection tests were run at the first setting, and four were run at the second. Of these, only the constant-rate injection tests at the first setting yielded a permeability, calculated as ranging from 1 × 10^-12 to 5 × 10^-12 m^2. Pulse tests and constant-rate injection tests for the second setting did not yield valid data. The measured permeability is an upper limit; if the packer leaked during the experiments, the basalt would be less permeable. In comparison, permeabilities measured at other Deep Sea Drilling Project and Ocean Drilling Program sites in pillow basalts and flows similar to those measured in Hole 839B are mainly about 10^-13 to 10^-14 m^2. Thus, if our results are valid, the basalts at Site 839 are more permeable than ocean-floor basalts investigated elsewhere. Based on other supporting evidence, we consider these results to be a valid measure of the permeability of the basalts. Temperature data and the geochemical and geotechnical properties of the drilled sediments all indicate that the site is strongly affected by fluid flow. The heat flow is very much less than expected in young oceanic basalts, probably a result of rapid fluid circulation through the crust. The geochemistry of pore fluids is similar to that of seawater, indicating seawater flow through the sediments, and the sediments are uniformly underconsolidated for their burial depth, again indicating probable fluid flow. The basalts are highly vesicular. However, the vesicularity can only account for part of the average porosity measured on the neutron porosity well log; the remainder of the measured porosity is likely present as voids and fractures within and between thin-bedded basalts. Core samples, together with porosity, density, and resistivity well-log data, show locations where the basalt section is thin bedded and probably has from 15% to 35% void and fracture porosity. Thus, the measured permeability seems reasonable with respect to the high measured porosity. Much of the fluid flow at Site 839 could be directed through highly porous and permeable zones within and between the basalt flows and in the sediment layer just above the basalt. Thus, the permeability measurements give an indication of where and how fluid flow may occur within the oceanic crust of the Lau Basin.
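As a reminder of what a permeability quoted in m^2 means physically (this is the defining relation, not the radial-flow solution actually used to interpret the packer tests), Darcy's law relates the volumetric flux to the pressure gradient:

$$\mathbf{q} = -\frac{k}{\mu} \nabla P,$$

where $\mathbf{q}$ is the Darcy flux (m/s), $k$ the intrinsic permeability (m^2), $\mu$ the dynamic viscosity of the pore fluid (Pa·s) and $\nabla P$ the pressure gradient (Pa/m). The gap between 10^-12 m^2 and 10^-13 to 10^-14 m^2 therefore corresponds to a tenfold to hundredfold difference in flow rate for the same pressure drive.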
Abstract:
Formerly published under the title of Engine room chemistry.
Abstract:
Mode of access: Internet.
Abstract:
"A paper read before the National Association of Cotton Manufacturers at its Ninety-first Meeting, Manchester, Vermont, September 29, 1911."
Abstract:
The power output achieved at peak oxygen consumption (VO2 peak) and the time this power can be maintained (i.e., Tmax) have been used in prescribing high-intensity interval training. In this context, the present study examined temporal aspects of the VO2 response to exercise at the cycling power output at which well trained cyclists achieve their VO2 peak (i.e., Pmax). Following a progressive exercise test to determine VO2 peak, 43 well trained male cyclists (M age = 25 years, SD = 6; M mass = 75 kg, SD = 7; M VO2 peak = 64.8 ml·kg^-1·min^-1, SD = 5.2) performed two Tmax tests 1 week apart. Values expressed for each participant are means and standard deviations of these two tests. Participants achieved VO2 peak during the Tmax test after a mean of 176 s (SD = 40; M = 74% of Tmax, SD = 12) and maintained it for 66 s (SD = 39; M = 26% of Tmax, SD = 12). Additionally, they reached 95% of VO2 peak after a mean of 147 s (SD = 31; M = 62% of Tmax, SD = 8) and maintained it for 95 s (SD = 38; M = 38% of Tmax, SD = 8). These results suggest that 60-70% of Tmax is an appropriate exercise duration for a population of well trained cyclists to attain VO2 peak during exercise at Pmax. However, due to intraparticipant variability in the temporal aspects of the VO2 response to exercise at Pmax, future research is needed to examine whether individual high-intensity interval training programs for well trained endurance athletes might best be prescribed according to an athlete's individual VO2 response to exercise at Pmax.
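As a hypothetical worked example of this prescription (the 240 s figure is illustrative only and not taken from the study): for a cyclist whose Tmax at Pmax is 240 s, the recommended interval duration would be

$$0.60 \times 240\ \mathrm{s} = 144\ \mathrm{s} \quad \text{to} \quad 0.70 \times 240\ \mathrm{s} = 168\ \mathrm{s},$$

i.e., roughly two and a half minutes per work interval.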
Abstract:
Research in conditioning (all the processes of preparation for competition) has used group research designs, where multiple athletes are observed at one or more points in time. However, empirical reports of large inter-individual differences in response to conditioning regimens suggest that applied conditioning research would greatly benefit from single-subject research designs. Single-subject research designs allow us to find out the extent to which a specific conditioning regimen works for a specific athlete, as opposed to the average athlete, who is the focal point of group research designs. The aim of the following review is to outline the strategies and procedures of single-subject research as they pertain to the assessment of conditioning for individual athletes. The four main experimental designs in single-subject research are: the AB design, reversal (withdrawal) designs and their extensions, multiple baseline designs and alternating treatment designs. Visual and statistical analyses commonly used to analyse single-subject data are discussed, along with their advantages and limitations. Modelling of multivariate single-subject data using techniques such as dynamic factor analysis and structural equation modelling may identify individualised models of conditioning, leading to better prediction of performance. Despite problems associated with data analyses in single-subject research (e.g. serial dependency), sports scientists should use single-subject research designs in applied conditioning research to understand how well an intervention (e.g. a training method) works and to predict performance for a particular athlete.
Abstract:
Statistical tests of Load-Unload Response Ratio (LURR) signals are carried out in order to verify the statistical robustness of previous studies using the Lattice Solid Model (Mora et al., 2002b). In each case, 24 groups of samples with the same macroscopic parameters (tidal perturbation amplitude A, period T and tectonic loading rate k) but different particle arrangements are employed. Results of uni-axial compression experiments show that before the normalized time of catastrophic failure, the ensemble-average LURR value rises significantly, in agreement with observations of high LURR prior to large earthquakes. In shearing tests, two parameters are found to control the correlation between earthquake occurrence and tidal stress. One is A/(kT), which controls the phase shift between the peak seismicity rate and the peak amplitude of the perturbation stress; as this parameter increases, the phase shift decreases. The other, AT/k, controls the height of the probability density function (pdf) of the modeled seismicity; as this parameter increases, the pdf becomes sharper and narrower, indicating strong triggering. Statistical studies of LURR signals in shearing tests also suggest that, except in strong-triggering cases, where LURR cannot be calculated due to poor data in unloading cycles, larger events are more likely to occur in high-LURR periods than smaller ones, supporting the LURR hypothesis.
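For orientation, the Load-Unload Response Ratio is conventionally defined as the ratio of the response rate during loading phases to that during unloading phases of the cyclic (here tidal) stress; the exact response measure (event count, Benioff strain, energy release) varies between studies:

$$Y = \frac{X_+}{X_-},$$

where $X_+$ and $X_-$ are the response rates computed over loading and unloading intervals, respectively. Values of $Y$ well above 1 indicate that the medium responds disproportionately to loading, which is the precursory behaviour referred to above as high LURR.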
Abstract:
The distributions of eyes-closed resting electroencephalography (EEG) power spectra and their residuals were described and compared using classically averaged and adaptively aligned averaged spectra. Four minutes of eyes-closed resting EEG was available from 69 participants. Spectra were calculated with 0.5-Hz resolution and were analyzed at this level. It was shown that power in the individual 0.5-Hz frequency bins can be considered normally distributed when as few as three or four 2-second epochs of EEG are used in the average. A similar result holds for the residuals. Power at the peak alpha frequency has quite different statistical behaviour from power at other frequencies, and it is considered that power at the peak alpha frequency represents a relatively individuated process that is best measured through aligned averaging. Previous analyses of contrasts in the upper and lower alpha bands may be explained in terms of the variability or distribution of the peak alpha frequency itself.
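The 0.5-Hz resolution follows directly from the epoch length: for an epoch of duration $T$, the spacing of the discrete Fourier frequencies is

$$\Delta f = \frac{1}{T} = \frac{1}{2\ \mathrm{s}} = 0.5\ \mathrm{Hz},$$

so each 2-second epoch contributes one spectral estimate per 0.5-Hz bin, and averaging over epochs (classically, or after alignment on the individual alpha peak) reduces the variance of those estimates.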
Abstract:
The country-product-dummy (CPD) method, originally proposed in Summers (1973), has recently been revisited in its weighted formulation to handle a variety of data-related situations (Rao and Timmer, 2000, 2003; Heravi et al., 2001; Rao, 2001; Aten and Menezes, 2002; Heston and Aten, 2002; Deaton et al., 2004). The CPD method is also increasingly being used in the context of hedonic modelling, rather than for its original purpose in Summers (1973) of filling gaps in incomplete price data. However, the CPD method is seen among practitioners as a black box because of its regression formulation. The main objective of the paper is to establish the equivalence of purchasing power parities and international prices derived from the weighted-CPD method with those arising from the Rao-system for multilateral comparisons. A major implication of this result is that the weighted-CPD method is then a natural method of aggregation at all levels of aggregation within the context of international comparisons.
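In its basic form, the CPD regression in question models the logarithm of the price of commodity $n$ in country $j$ as the sum of a country effect and a commodity effect:

$$\ln p_{nj} = \alpha_j + \beta_n + \varepsilon_{nj},$$

where $\exp(\alpha_j)$ is interpreted as the purchasing power parity of country $j$ and $\exp(\beta_n)$ as the international average price of commodity $n$. The weighted-CPD variant estimates the same model by weighted least squares with expenditure-share weights, and it is these estimates that the paper shows to coincide with the Rao-system solution.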
Abstract:
We have redefined group membership of six southern galaxy groups in the local universe (mean cz < 2000 km s^-1) based on new redshift measurements from our recently acquired Anglo-Australian Telescope 2dF spectra. For each group, we investigate member galaxy kinematics, substructure, luminosity functions and luminosity-weighted dynamics. Our calculations confirm that the group sizes, virial masses and luminosities cover the range expected for galaxy groups, except that the luminosity of NGC 4038 is boosted by the central starburst merger pair. We find that a combination of kinematical, substructural and dynamical techniques can reliably distinguish loose, unvirialized groups from compact, dynamically relaxed groups. Applying these techniques, we find that Dorado, NGC 4038 and NGC 4697 are unvirialized, whereas NGC 681, NGC 1400 and NGC 5084 are dynamically relaxed.
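As a purely dimensional reminder (not the exact estimator used by the authors), virial masses of this kind scale as

$$M_{\mathrm{vir}} \sim \frac{\sigma_v^2 R}{G},$$

where $\sigma_v$ is the line-of-sight velocity dispersion of the member galaxies, $R$ a characteristic group radius and $G$ the gravitational constant; the geometric prefactor depends on the particular estimator adopted.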