Abstract:
When machines are modeled in their natural working environment, collisions become a very important factor in simulation accuracy. As simulations expand to include the operating environment, a general collision model able to handle a wide variety of cases has become central to the development of simulation environments. With the addition of the operating environment, the challenges for the collision modeling method also change: more simultaneous contacts with more objects occur in more complicated situations, which makes the real-time requirement more difficult to meet. Common problems in current collision modeling methods include, for example, dependency on geometry shape or mesh density, computational cost that increases exponentially with the number of contacts, the lack of a proper friction model, and failures in certain configurations such as closed kinematic loops. All these problems mean that current modeling methods will fail in certain situations. A method that never fails in any situation is not realistic, but improvements can be made over the current methods.
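As a point of reference for the contact-scaling problem raised above, here is a minimal sketch, assuming simple frictionless point contacts between rigid bodies, of the Gauss-Seidel-style sequential-impulse loop common in real-time engines; its per-iteration cost grows only linearly with the number of contacts. This illustrates the general technique, not the model developed in the thesis.

```python
import numpy as np

# A minimal sequential-impulse pass over frictionless point contacts.
# Each body is a dict with mass "m" and velocity vector "v"; each contact
# names two bodies and a unit normal "n" pointing from "a" toward "b".
def solve_contacts(contacts, bodies, iterations=10):
    for _ in range(iterations):          # fixed iteration budget (real-time)
        for c in contacts:               # one pass is O(number of contacts)
            a, b = bodies[c["a"]], bodies[c["b"]]
            v_rel = np.dot(b["v"] - a["v"], c["n"])
            if v_rel >= 0.0:
                continue                 # bodies already separating
            j = -v_rel / (1.0 / a["m"] + 1.0 / b["m"])   # normal impulse
            a["v"] = a["v"] - (j / a["m"]) * c["n"]
            b["v"] = b["v"] + (j / b["m"]) * c["n"]

# Usage: a falling box hitting near-immovable ground stops at the contact.
bodies = {"box": {"m": 1.0, "v": np.array([0.0, -2.0])},
          "ground": {"m": 1e9, "v": np.zeros(2)}}
solve_contacts([{"a": "ground", "b": "box", "n": np.array([0.0, 1.0])}], bodies)
```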
Abstract:
TRIZ is a well-known tool, based on analytical methods, for creative problem solving. This thesis proposes an adapted version of the contradiction matrix, a powerful TRIZ tool, together with a few principles based on the concepts of the original TRIZ. The proposed version is expected to aid in problem solving, especially for problems encountered in the unit operations of the chemical process industries. In addition, this thesis should help new process engineers recognize the importance of the various methods available for creative problem solving and learn the TRIZ method. The work mainly shows how a TRIZ-based method can be modified to fit a particular niche area and to solve problems there efficiently and creatively. In this case, the contradiction matrix developed is based on a review of common problems encountered in the chemical process industry, particularly in unit operations, and the resolutions are based on approaches used in the past to handle those issues.
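To make the mechanism concrete, here is a minimal sketch of how a contradiction-matrix lookup can be organized in code; the parameter names and principle numbers are illustrative placeholders, not entries from the adapted matrix developed in the thesis.

```python
# Hypothetical contradiction-matrix fragment: the key is the pair
# (parameter to improve, parameter that worsens), the value is a list of
# inventive-principle numbers. All entries below are placeholders.
CONTRADICTION_MATRIX = {
    ("temperature", "pressure"): [35, 10, 3],
    ("throughput", "energy use"): [2, 19, 28],
}

def suggest_principles(improving, worsening):
    """Return candidate inventive-principle numbers for a contradiction."""
    return CONTRADICTION_MATRIX.get((improving, worsening), [])

print(suggest_principles("temperature", "pressure"))  # -> [35, 10, 3]
```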
Abstract:
Six common bean cultivars were crossed in diallel and the segregant populations were assessed in the F2 and F3 generations to compare methodologies for parental selection in a breeding program based on hybridization. The cultivars involved in the diallel were A 114, A 77, ESAL 686, Milionário, Carioca, and Flor de Mayo. The segregant F2 and F3 generations were assessed on the experimental campus of the Universidade Federal de Lavras, in July 1994. It was found that the cultivars differed in their general combining ability (GCA). Flor de Mayo, which belongs to the Durango race, had the largest positive GCA estimate for grain yield, and the cultivars from the Mesoamerican race, Milionário and A 114, the smallest GCA estimates. For flowering, the cultivar that most contributed to a reduced plant cycle was ESAL 686. There was agreement between the results obtained from the diallel and the estimates of the parameter m + a of the populations. However, it was evident that the estimate of the genetic variance of the populations should be considered as a condition to identify the hybrid population that will produce a line with high performance.
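For reference, the textbook working definition of a parent's GCA effect in a diallel, which may differ in detail from the exact estimator used in this study, is the deviation of its mean cross performance from the grand mean:

```latex
\hat{g}_i \;=\; \bar{y}_{i\cdot} \;-\; \bar{y}_{\cdot\cdot}
```

where \(\bar{y}_{i\cdot}\) is the mean over all crosses involving parent \(i\) and \(\bar{y}_{\cdot\cdot}\) is the grand mean of the diallel; a large positive \(\hat{g}_i\) for grain yield, as found for Flor de Mayo, marks a favorable parent.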
Abstract:
The aortic-pulmonary regions (APR) of seven adult marmosets (Callithrix jacchus) and the region of the right subclavian artery of a further three marmosets were diffusion-fixed with 10% buffered formol-saline solution. In both regions serial 5-µm sections were cut and stained by the Martius yellow, brilliant crystal scarlet and soluble blue method. Presumptive thoracic paraganglionic (PTP) tissue was only observed in the APR. PTP tissue was composed of small groups of cells that varied in size and number. The distribution of the groups of cells was extremely variable, so much so that it would be misleading to attempt to classify their position; they were not circumscribed by a connective tissue capsule, but were always related to the thoracic branches of the left vagus nerve. The cells lay in loose areolar tissue characteristic of this part of the mediastinum and received their blood supply from small adjacent connective tissue arterioles. Unlike the paraganglionic tissue found in the carotid body the cells in the thorax did not appear to have a profuse capillary blood supply. There was, however, a close cellular-neural relationship. The cells, 10-15 µm in diameter, were oval or rounded in appearance and possessed a central nucleus and clear cytoplasm. No evidence was found that these cells possessed a 'companion' cell reminiscent of the arrangement of type 1 and type 2 cells in the carotid body. In conclusion, we found evidence of presumed paraganglionic tissue in the APR of the marmoset which, however, did not show the characteristic histological features of the aortic body chemoreceptors that have been described in some non-primate mammals. A survey of the mediastina of other non-human primates is required to establish whether this finding is atypical for these animals.
Abstract:
Permanent bilateral occlusion of the common carotid arteries (2VO) in the rat has been established as a valid experimental model to investigate the effects of chronic cerebral hypoperfusion on cognitive function and neurodegenerative processes. Our aim was to compare the cognitive and morphological outcomes following the standard 2VO procedure, in which both arteries are ligated concomitantly, with those of a modified protocol with a 1-week interval between artery occlusions to avoid an abrupt reduction of cerebral blood flow, as assessed by animal performance in the water maze and by the extent of damage to the hippocampus and striatum. Male Wistar rats (N = 47) aged 3 months were subjected to chronic hypoperfusion by permanent bilateral ligation of the common carotid arteries using either the standard or the modified protocol, with the right carotid being the first to be occluded. Three months after the surgical procedure, rat performance in the water maze was assessed to investigate long-term effects on spatial learning and memory, and the brains were processed in order to estimate hippocampal volume and striatal area. Both groups of hypoperfused rats showed deficits in reference (F(8,172) = 7.0951, P < 0.00001) and working spatial memory [2nd (F(2,44) = 7.6884, P < 0.001), 3rd (F(2,44) = 21.481, P < 0.00001) and 4th trials (F(2,44) = 28.620, P < 0.0001)]; however, no evidence of tissue atrophy was found in the brain structures studied. Despite similar behavioral and morphological outcomes, the rats submitted to the modified protocol showed a significantly higher survival rate during the 3 months of the experiment (P < 0.02).
Abstract:
The soluble and insoluble cotyledon (SPF-Co and IPF-Co) and tegument (SPF-Te and IPF-Te) cell wall polymer fractions of common beans (Phaseolus vulgaris) were isolated using a chemical-enzymatic method. The sugar composition showed that SPF-Co consisted of 38.6% arabinose, 23.4% uronic acids, 12.7% galactose, 11.2% xylose, 6.4% mannose and 6.1% glucose, probably derived from slightly branched and weakly bound polymers. The IPF-Co was fractionated with a chelating agent (CDTA) and with increasing concentrations of NaOH. The bulk of the cell wall polymers (29.4%) was extracted with 4.0M NaOH, and this fraction contained mainly arabinose (55.0%), uronic acid (18.9%), glucose (10.7%), xylose (10.3%) and galactose (3.4%). About 8.7% and 10.6% of the polymers were solubilised with CDTA and 0.01M NaOH, respectively, and consisted of arabinose (52.0 and 45.9%), uronic acids (25.8 and 29.8%), xylose (9.6 and 10.2%), galactose (6.1 and 3.9%) and glucose (6.5 and 3.8%). The cell wall also contained small amounts (5.6 and 7.2%) of cellulose (CEL) and of non-extractable cell wall polymers (NECW). About 16.8% and 17.2% of the polymers were solubilised with 0.5 and 1.0M NaOH and contained, respectively, 92.1 and 90.7% glucose derived from starch (IST). The neutral sugar and polymer solubilization profiles showed that weakly bound pectins are present mainly in the SPF-Co (water-soluble), CDTA and 0.01-0.1M NaOH soluble fractions. Less soluble, highly cross-linked pectins were solubilised with 4.0M NaOH. This pectin is arabinose-rich, probably highly branched, and has a higher molecular weight than the pectin present in the SPF-Co, CDTA and 0.01-0.1M NaOH fractions.
Abstract:
This study aimed to evaluate the effect of distillation time and sample mass on the total SO2 content in integral passion fruit juice (Passiflora sp). For the SO2 analysis, a modified version of the Monier-Williams method was used, in which the distillation time and the sample mass were reduced to half of the values proposed in the original method. The analyses were performed in triplicate for each distillation time × sample mass combination, making a total of 12 tests, all performed on the same day. The significance of the effects of the different distillation times and sample masses was evaluated by applying analysis of variance (ANOVA). At a 95% confidence level, it was found that the proposed changes to distillation time and sample mass, and the interaction between them, were not significant (p > 0.05) in determining the SO2 content in passion fruit juice. In view of these results, it was concluded that for integral passion fruit juice it is possible to reduce the distillation time and the sample mass when determining the SO2 content by the Monier-Williams method without affecting the result.
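As an illustration of the statistical step, here is a minimal sketch, not the authors' script, of the factorial ANOVA described above, assuming a data frame with columns 'time' (distillation time), 'mass' (sample mass) and 'so2' (measured SO2 content); the numeric values are made up.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# 2 x 2 design, triplicate per combination -> 12 tests, as in the abstract.
df = pd.DataFrame({
    "time": ["full", "full", "half", "half"] * 3,
    "mass": ["full", "half", "full", "half"] * 3,
    "so2":  [10.2, 10.1, 10.3, 10.0, 10.1, 10.3,
             10.0, 10.2, 10.3, 10.1, 10.2, 10.1],  # illustrative values
})
model = ols("so2 ~ C(time) * C(mass)", data=df).fit()
# p > 0.05 for every term would support keeping the halved protocol.
print(anova_lm(model))
```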
Abstract:
Thermal cutting methods are commonly used in the manufacture of metal parts. Thermal cutting processes separate material using heat, with or without a stream of cutting oxygen; common processes are oxygen, plasma, and laser cutting. Which cutting method is used depends on the application and the material. Numerically controlled thermal cutting is a cost-effective way of prefabricating components. One design aim is to minimize the number of work steps in order to increase competitiveness, and as a result the holes and openings in plate parts manufactured today are made using thermal cutting methods. This is a problem from the fatigue life perspective, because the resulting local detail in the as-welded state raises the stress in a local area of the plate. In cases where the static capacity of the net section is fully utilized, the calculated linear local stresses and stress ranges are often more than twice the material yield strength, so the shakedown criteria are exceeded. Fatigue life assessment of flame-cut details is commonly based on the nominal stress method. For welded details, design standards and instructions provide more accurate and flexible methods, e.g. the hot-spot method, but these methods are not universally applied to flame-cut edges. Laboratory fatigue tests of flame-cut edges indicated that fatigue life estimates based on the standard nominal stress method can be quite conservative in cases where a high notch factor is present. This is an undesirable phenomenon, and it limits the potential for minimizing structure size and total costs. A new calculation method is introduced to improve the accuracy of theoretical fatigue life prediction for a flame-cut edge with a high stress concentration factor. Simple equations were derived from the laboratory fatigue test results published in this work. The proposed method is called the modified FAT method (FATmod). The method takes into account the residual stress state, surface quality, material strength class and true stress ratio at the critical location.
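For context, the baseline nominal stress check that the thesis builds on can be sketched as follows. This assumes an IIW-style S-N curve with slope m = 3 and the FAT class defined as the stress range at 2×10^6 cycles, a common convention rather than the thesis's own FATmod equations.

```python
def cycles_to_failure(delta_sigma, fat_class, m=3.0, n_ref=2.0e6):
    """Nominal stress S-N estimate: N = N_ref * (FAT / delta_sigma)^m.

    delta_sigma: applied nominal stress range (MPa)
    fat_class:   FAT class, i.e. stress range at N_ref cycles (MPa)
    """
    return n_ref * (fat_class / delta_sigma) ** m

# e.g. a FAT 140 cut edge under a 200 MPa nominal stress range:
print(f"{cycles_to_failure(200.0, 140.0):.3g} cycles")
```

The FATmod correction factors for residual stress, surface quality, strength class and stress ratio would modify the FAT class entering this formula; their exact form is given in the thesis and is not reproduced here.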
Abstract:
Preclinical and clinical tooth preparation is an important and significant aspect of a dental student's education. The associated procedures rely heavily on the development of particular psychomotor skills. The most common format of instruction and evaluation in tooth preparation at many Dental Faculties emphasizes the product (the tooth preparation) and associates performance with characteristics of this product. This integrated study examines which skills should be developed and how a course of instruction can best be structured to develop the necessary skills. The skills identified as necessary for tooth preparation are selected from a psychomotor taxonomy, and the purpose of evaluating these skills is identified. Behavioral objectives are set for student performance, and the advisability of establishing standards of performance is examined. After reviewing studies on the most suitable learning strategies for dental psychomotor tasks, as well as articles on instructor effectiveness, a model is proposed. A pilot project at the University of Toronto, based on this proposed model, is described. The paper concludes with a discussion of the implications of the proposed model.
Abstract:
We developed the concept of split-t to deal with large molecules (in terms of the number of electrons and the nuclear charge Z). This naturally leads to partitioning the local energy into components due to each electron shell. Minimization of the variance of the valence-shell local energy is used to optimize a simple two-parameter CuH wave function. Molecular properties (spectroscopic constants and the dipole moment) are calculated for the optimized and nearly optimized wave functions using the variational quantum Monte Carlo method. Our best results are comparable to those from the singles and doubles configuration interaction (SDCI) method.
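For reference, the standard quantities behind this kind of optimization, given here in their textbook form rather than the thesis's shell-partitioned version, are the local energy and its variance over the trial distribution:

```latex
E_L(\mathbf{R}) = \frac{\hat{H}\,\Psi(\mathbf{R})}{\Psi(\mathbf{R})},
\qquad
\sigma^2 = \frac{\int |\Psi(\mathbf{R})|^2 \,\bigl(E_L(\mathbf{R}) - \bar{E}\bigr)^2 \, d\mathbf{R}}{\int |\Psi(\mathbf{R})|^2 \, d\mathbf{R}}
```

Restricting \(E_L\) to the valence-shell contribution and minimizing \(\sigma^2\) with respect to the two wave-function parameters is the shell-wise analogue of ordinary variance minimization.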
Abstract:
It is common practice to initiate supplemental feeding in newborns if body weight decreases by 7-10% in the first few days after birth (the 7-10% rule). Standard hospital procedure is to initiate intravenous therapy once a woman is admitted to give birth. However, little is known about the relationship between intrapartum intravenous therapy and the amount of weight loss in the newborn. The present research was undertaken to determine what factors contribute to weight loss in a newborn, and to examine the relationship between the practice of intrapartum intravenous therapy and the extent of weight loss after birth. Using a cross-sectional design with a systematic random sample of 100 mother-baby dyads, we examined properties of delivery that have the potential to affect weight loss in the newborn, including method of delivery, parity, duration of labour, volume of intravenous therapy, feeding method, and birth attendant. This study indicated that the volume of intravenous therapy and the method of delivery are significant predictors of weight loss in the newborn (R2 = 15.5%, p < 0.01). ROC curve analysis identified an intravenous volume cut-point of 1225 ml that yielded a high sensitivity (91.3%) and demonstrated significant kappa agreement (p < 0.01) with excess newborn weight loss. It was concluded that the infusion of intravenous therapy and natural delivery are discriminant factors that influence excess weight loss in newborn infants. These factors should be considered in clinical practice.
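As an illustration of the cut-point selection step, here is a minimal sketch, using simulated stand-in data rather than the study's records, of deriving an ROC threshold via the Youden index with scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
volume = rng.normal(1100, 300, 100)                       # illustrative volumes (ml)
excess_loss = (volume + rng.normal(0, 200, 100)) > 1225   # illustrative labels

fpr, tpr, thresholds = roc_curve(excess_loss, volume)
best = np.argmax(tpr - fpr)   # Youden's J picks the sensitivity/specificity trade-off
print(f"cut-point: {thresholds[best]:.0f} ml, sensitivity: {tpr[best]:.1%}")
```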
Abstract:
A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet these remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves the intractable null distribution problems, in particular those raised by the sup-type and combined test statistics, as well as (when relevant) unidentified-nuisance-parameter problems under the null hypothesis. The proposed method works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
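The core of the Monte Carlo test technique is simple enough to sketch: simulate the test statistic under the null hypothesis, where all of its ingredients can be generated, and rank the observed value among the simulated ones. A minimal sketch with an arbitrary placeholder statistic, not any of the paper's specific criteria:

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep=99, seed=None):
    """Monte Carlo (Dwass/Barnard) test: with n_rep statistics drawn under
    the null, this rank-based p-value is exact for a continuous statistic."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

# e.g. rank an observed value of 2.1 against a placeholder null statistic:
p = mc_pvalue(2.1, lambda rng: rng.chisquare(5) / 5, n_rep=99, seed=1)
print(p)
```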
Abstract:
The software packages used are Splus and R.
Abstract:
Although fungi are regularly used as models for studying eukaryotic systems, their phylogenetic relationships still raise controversial questions. Among these, the classification of the zygomycetes remains unsettled. They are potentially paraphyletic, i.e. they group together fungal lineages that are not directly related. The phylogenetic position of the genus Schizosaccharomyces is also controversial: does it belong to the Taphrinomycotina (previously known as the archiascomycetes), as predicted by the analysis of nuclear genes, or is it instead related to the Saccharomycotina (budding yeasts), as suggested by mitochondrial phylogeny? A further question concerns the phylogenetic position of the nucleariids, a group of amoeboid eukaryotes thought to be closely related to fungi. Earlier multi-gene analyses were inconclusive, owing to the small number of taxa chosen and the use of only six nuclear genes. We addressed these questions through phylogenetic inference and statistical tests applied to assembled nuclear and mitochondrial phylogenomic datasets. According to our results, the zygomycetes are paraphyletic (Chapter 2), although the phylogenetic signal in the available mitochondrial dataset is insufficient to resolve the branching order with significant statistical confidence. In Chapter 3, using a large nuclear dataset (more than one hundred proteins) and with conclusive statistical support, we show that the genus Schizosaccharomyces belongs to the Taphrinomycotina. Moreover, we demonstrate that the conflicting grouping of Schizosaccharomyces with the Saccharomycotina obtained from mitochondrial data results from a well-known type of phylogenetic error: long-branch attraction (LBA), an artifact that groups together species whose fast evolutionary rates are not representative of their true positions in the phylogenetic tree. In Chapter 4, again using a large nuclear dataset, we demonstrate with significant statistical support that the nucleariids are the group most closely related to fungi. We also confirm, with significant statistical support, the paraphyly of the traditional zygomycetes, as previously suggested, although not all members of the group could be placed with confidence. Our results call into question aspects of a recent taxonomic reclassification of the zygomycetes and of their neighbors, the chytridiomycetes. Countering or minimizing phylogenetic artifacts such as long-branch attraction (LBA) is a major recurring issue. To this end, we developed a new method (Chapter 5) that identifies and removes from a sequence the sites showing strong variation in evolutionary rate (highly heterotachous sites, or HH sites); such sites are known to contribute significantly to the LBA phenomenon. Our method is based on a likelihood ratio test (LRT). Two previously published datasets are used to demonstrate that the gradual removal of HH sites from fast-evolving (LBA-prone) species significantly increases the support for the expected "true" topology, and does so more efficiently than other published site-removal methods.
Nevertheless, manipulating the data prior to the analysis is, in general, far from ideal. Future developments should aim to integrate the identification and weighting of HH sites into the phylogenetic inference process itself.
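The LRT machinery underlying the site-removal method can be sketched generically; the concrete per-site rate models compared in Chapter 5 are not reproduced here, so the degrees of freedom and log-likelihoods below are placeholders. The null model would fit one evolutionary rate for a site across subtrees, the alternative a separate rate in the fast-evolving group.

```python
from scipy.stats import chi2

def lrt_pvalue(loglik_null, loglik_alt, df=1):
    """Generic LRT: 2*(lnL1 - lnL0) is asymptotically chi2(df) under H0.
    A small p-value would flag the site as highly heterotachous (HH)."""
    lr = 2.0 * (loglik_alt - loglik_null)
    return lr, chi2.sf(lr, df)

lr, p = lrt_pvalue(-1234.5, -1230.1)   # illustrative log-likelihoods
print(lr, p)
```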
Abstract:
Time series models with conditionally heteroskedastic variances have become almost indispensable for modeling time series in the context of financial data. In many applications, checking the existence of a relationship between two time series is an important issue. In this thesis, we generalize in several directions, and in a multivariate framework, the procedure developed by Cheung and Ng (1996) for examining causality in variance in the case of two univariate series. Building on the work of El Himdi and Roy (1997) and Duchesne (2004), we propose a test based on the cross-correlation matrices of the squared standardized residuals and of the cross-products of these residuals. Under the null hypothesis of no causality in variance, we establish that the test statistics converge in distribution to chi-square random variables. In a second approach, we define, as in Ling and Li (1997), a transformation of the residuals for each vector residual series; the test statistics are then built from the cross-correlations of these transformed residuals. In both approaches, test statistics for individual lags are proposed, as well as portmanteau-type tests. This methodology is also used to determine the direction of causality in variance. Simulation results show that the proposed tests have satisfactory empirical properties. An application with real data is also presented to illustrate the methods.
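A minimal sketch of the general idea behind such tests, assuming two univariate standardized residual series as in Cheung and Ng (1996) rather than the thesis's full multivariate setting: cross-correlate the squared residuals and sum the squared cross-correlations over lags 1..M, which is approximately chi2(M) under the null of no causality in variance.

```python
import numpy as np
from scipy.stats import chi2

def variance_causality_portmanteau(res_x, res_y, max_lag):
    """Portmanteau statistic on cross-correlations of squared residuals.
    res_x, res_y: standardized residuals, e.g. from fitted GARCH models."""
    u = res_x**2 - np.mean(res_x**2)
    v = res_y**2 - np.mean(res_y**2)
    T = len(u)
    denom = np.sqrt(np.sum(u**2) * np.sum(v**2))
    # r[k-1]: correlation between u at time t and v at time t-k (v leading)
    r = [np.sum(u[k:] * v[:T - k]) / denom for k in range(1, max_lag + 1)]
    q = T * np.sum(np.square(r))
    return q, chi2.sf(q, df=max_lag)

rng = np.random.default_rng(0)
q, p = variance_causality_portmanteau(rng.normal(size=500),
                                      rng.normal(size=500), max_lag=5)
print(q, p)
```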