960 results for Separating of variables


Relevância:

80.00%

Publicador:

Resumo:

Data variability analysis has been the focus of a number of studies seeking to capture differences in the patterns generated by biological systems. Although several gait studies employ variability analysis, we noticed a lack of such information for subjects with unilateral coxarthrosis undergoing total hip arthroplasty (THA). To address this gap, we conducted a treadmill gait study with 10 healthy subjects (30.7 ± 6.75 years old) in group G1 and 24 subjects (65 ± 8.5 years old) with unilateral THA in group G2. Using two inertial measurement units (IMUs) positioned on the pelvis, we developed a method for detecting steps and strides, calculating their intervals, and extracting signal characteristics. The variability analysis (coefficient of variation) was performed on the extracted features and on the step and stride times. The averages and the 95% confidence interval estimates of the step and stride times for each group were in agreement with the literature. The mean coefficient of variation for the step and stride times was calculated and compared between groups by the Kruskal-Wallis test at the 95% confidence level. Each component X, Y, and Z of the two IMUs (accelerometer, magnetometer, and gyroscope) corresponded to a variable. The resultants of each sensor, the linear velocity (accelerometers), and the instantaneous angular displacement (gyroscopes) completed the set of variables. Features were extracted from the signals of these variables to assess variability in the G1 and G2 groups. There were significant differences (p < 0.05) between G1 and G2 in the average step and stride times. The variability of the step and stride times, as well as of all other evaluated characteristics, was higher for group G2 (p < 0.05). The proposed method proved suitable for measuring the variability of biomechanical parameters related to the extracted features. All the extracted features discriminated between the groups. Since group G2 showed greater variability, it is possible that both age and the pathological condition of the hip contributed to this result.
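As a hedged illustration of the analysis described above, the sketch below computes the coefficient of variation of stride times per subject and compares the groups with a Kruskal-Wallis test; all values and group sizes are synthetic placeholders, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def cv_percent(x):
    # Coefficient of variation: sample SD over mean, in percent.
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# One array of stride-time intervals (seconds) per subject; values invented.
g1 = [rng.normal(1.10, 0.03, 60) for _ in range(10)]   # healthy controls
g2 = [rng.normal(1.25, 0.09, 60) for _ in range(24)]   # unilateral THA

cv_g1 = [cv_percent(s) for s in g1]
cv_g2 = [cv_percent(s) for s in g2]

h_stat, p_value = stats.kruskal(cv_g1, cv_g2)
print(f"mean CV: G1 {np.mean(cv_g1):.2f}%  G2 {np.mean(cv_g2):.2f}%  p = {p_value:.4f}")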

Relevância:

80.00%

Publicador:

Resumo:

Faced with a scenario of agribusiness expansion and increasing fertilizer consumption driven by exponential population growth, it is necessary to make better use of existing reserves by obtaining products of better quality, in quantities adequate to meet national demand. At the Tapira Mining Complex of Vale Fertilizantes, a phosphate concentrate with a grade of 35.0% P2O5 is produced from ore grading about 8.0% P2O5, destined for the Uberaba Industrial Complex and the Araxá Chemical-Mining Complex for fertilizer production. The industrial flotation step, responsible for the recovery of P2O5 and hence for the viability of the business, is divided into the friable, granular, and ultrafine circuits; the friable and granular concentrates together make up the conventional concentrate. Today only 14.7% of the mass fed to the plant becomes product, the remainder being considered process losses, and the largest mass losses occur in the flotation tailings, representing 42.3%. From 2012 to 2014, the daily overall mass recovery of the processing plant varied from 12.4 to 15.9%, while the daily metallurgical recovery of P2O5 varied from 48.7 to 82.4%. This degree of variability indicates that the plant operated under different conditions. Given this, the present study aimed to analyze the influence of operational and process variables on the mass and P2O5 metallurgical recoveries of the granular, friable, and ultrafine industrial flotation circuits. In addition, we analyzed the effect of ore variables, such as grades, hardness, and the percentage of ore from mining front 02, on the overall recoveries of the processing plant, and the effect of reagent dosages on the recoveries obtained in bench flotation, using the design-of-experiments methodology. All the work was performed using the historical database of Vale Fertilizantes at Tapira-MG, with all independent variables made dimensionless over the experimental range used. The statistical analysis employed the response surface technique, and the values of the independent variables that maximize recoveries were found by canonical analysis. In the study of the friable industrial flotation circuit, a mass recovery of 41.3% and a P2O5 metallurgical recovery of 91.3% were obtained, good values for this circuit; the highest recoveries occur for a solids concentration in the fresh flotation feed between 45 and 50%, values attributed to the residence time of the pulp in the industrial flotation cells and columns. The greater the number of reclaimed ore stockpiles, the higher the mass recovery, but in this scenario flotation becomes unstable because of the large mass variation in the feed. Higher mass recoveries are obtained for depressant dosages above 120 g/t and a synthetic collector proportion of 11.6%. In the study of the granular industrial flotation circuit, a mass recovery of 28.3% and a P2O5 metallurgical recovery of 79.4% were obtained, also considered good values for this circuit. Higher recoveries are obtained when the percentage of ore from front 02 is above 90%, because the ore from this front contains clearer apatite. Likewise, recoveries increase when the pulp level of the rougher stage is at its highest, owing to the high circulating load this stage receives. In the analysis of the ultrafine industrial flotation circuit, a mass recovery of 23.95% was obtained, maximized at depressant and collector dosages of 420 and 300 g/t, respectively.
The analysis of the influence of the ore variables showed that higher recoveries are obtained for ores with P2O5 grades above 8.0%, Fe2O3 grades around 28%, and a front 02 ore percentage of 83%. The hard ore percentage has a strong influence on recoveries owing to the mass split in the circuit, which is linked to this variable. However, the hard ore percentage that maximizes recoveries was very close to the design capacity of the processing plant, which is 20%. Finally, the bench flotation study showed that in the friable and granular circuits the highest recoveries are achieved for collector dosages above 250 g/t, and that simultaneously increasing the collector dosage and the synthetic collector percentage contributes to increased flotation recovery; however, this scenario tends to produce a concentrate poorer in P2O5 content, showing that the highest recovery is not always the ideal scenario. Thus, the results indicate the values of the variables that provide higher flotation recoveries and hence lower losses to the tailings.
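A minimal sketch of the response-surface workflow the study relies on, assuming two coded reagent-dosage variables and synthetic recoveries: fit the full quadratic model by least squares and locate the stationary point by canonical analysis (eigen-decomposition of the quadratic form). None of the numbers are the plant's.

import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 50)          # coded depressant dosage
x2 = rng.uniform(-1, 1, 50)          # coded collector dosage
y = 40 + 5*x1 + 3*x2 - 4*x1**2 - 2*x2**2 + 1.5*x1*x2 + rng.normal(0, 0.5, 50)

# Design matrix for the full second-order model.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b11, b22, b12 = beta

# Stationary point: gradient b + 2*B*x = 0, with B the symmetric quadratic matrix.
B = np.array([[b11, b12/2], [b12/2, b22]])
b = np.array([b1, b2])
x_s = np.linalg.solve(-2*B, b)

eigvals = np.linalg.eigvalsh(B)      # all negative => stationary point is a maximum
print("stationary point (coded units):", x_s, "eigenvalues:", eigvals)

The signs of the eigenvalues classify the stationary point: all negative indicates a maximum of the fitted recovery surface, while mixed signs indicate a saddle, in which case ridge analysis would be the usual follow-up.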

Relevância:

80.00%

Publicador:

Resumo:

The soybean crop is substantially important for both the Brazilian and international markets. A relevant disease that affects soybean is powdery mildew, caused by the fungus Erysiphe diffusa. The objective of this master's thesis was to analyze physiological changes produced by fungicides in two greenhouse-grown soybean genotypes (Anta 8500 RR and BRS Santa Cruz RR) naturally infected with powdery mildew. A randomized complete block design was used, with six replications in a 2x5 factorial arrangement. Treatments consisted of applications of the fungicides Azoxystrobin, Biofac (fermented solution of Penicillium sp.), Carbendazim, or Picoxystrobin, and a Control (no fungicide application). Three applications were performed in the experimental period, each marking a period of data collection. Gas exchange, chlorophyll content, chlorophyll a fluorescence, and disease severity were measured twice a week. Dry grain mass production was measured at the end of the experiment. Areas under the progress curves of the variables were subjected to ANOVA and Tukey's test at 5% significance. The Azoxystrobin, Biofac, and Picoxystrobin treatments had higher photosynthetic rates than the Control in the second period, with genotype Anta having a higher rate than Santa Cruz. Biofac had a higher transpiration rate than the Control in the second period, while Biofac and Picoxystrobin had higher rates in Santa Cruz in the third period. Carbendazim produced greater stomatal conductance in Anta, whilst Azoxystrobin, Biofac, and Picoxystrobin produced greater values than Carbendazim in Santa Cruz. Biofac and Picoxystrobin produced greater intercellular CO2 concentrations in Santa Cruz. Azoxystrobin and Picoxystrobin had greater instantaneous water use efficiency than the Control, with Anta being more efficient than Santa Cruz. Biofac and Picoxystrobin had greater intrinsic water use efficiency in Anta, while Carbendazim increased efficiency in Santa Cruz. Azoxystrobin, Biofac, and Picoxystrobin had greater carboxylation efficiency than the Control in the second period, with Anta being more efficient than Santa Cruz. Azoxystrobin and Biofac had greater contents of chlorophylls a, b, and a+b than the Control in the second period. Azoxystrobin had greater effective quantum yield than the Control and Picoxystrobin. All treatments faced increasing disease severity over time, with Anta being less resistant than Santa Cruz. As for production, the data showed that: (1) Santa Cruz was more productive than Anta, having the greatest dry grain mass with Carbendazim, and (2) Anta's lower disease severity did not translate into higher production. In conclusion, the strobilurins (Azoxystrobin and Picoxystrobin) and Biofac performed similarly in their physiological effects on soybean; however, these effects did not lead to increased dry grain mass by the end of the experiment.
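The statistical pipeline (area under the progress curve, then ANOVA and Tukey's test at 5%) can be sketched as follows; the treatment names match the abstract, but every number is invented for illustration.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

days = np.array([0, 7, 14, 21, 28])
treatments = ["Control", "Azoxystrobin", "Biofac", "Carbendazim", "Picoxystrobin"]
rng = np.random.default_rng(1)

labels, audpc = [], []
for i, t in enumerate(treatments):
    for _ in range(6):                          # six replications per treatment
        severity = np.cumsum(rng.uniform(0, 5 - 0.5 * i, len(days)))
        audpc.append(np.trapz(severity, days))  # area under the progress curve
        labels.append(t)

groups = [np.array(audpc)[np.array(labels) == t] for t in treatments]
f, p = stats.f_oneway(*groups)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
print(pairwise_tukeyhsd(np.array(audpc), np.array(labels), alpha=0.05))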

Relevância:

80.00%

Publicador:

Resumo:

In this article we investigate voter volatility and analyze the causes and motives of switching vote intentions. We test two main sets of variables linked to volatility in the literature: political sophistication and political (dis)satisfaction. Results show that voters with low levels of political efficacy tend to switch more often, both within a campaign and between elections. In the analysis we differentiate between campaign volatility and inter-election volatility, and by doing so we show that the dynamics of a campaign have a profound impact on volatility. The campaign period is when the less politically sophisticated switch their vote intention; those with higher levels of interest in politics have already switched before the campaign starts. The data for this analysis come from the three-wave PartiRep Belgian Election Study (2009).

Relevância:

80.00%

Publicador:

Resumo:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced-rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridges existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
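The link between latent structure and tensor rank can be made concrete: a latent class model with k classes expresses the joint pmf as a sum of k rank-1 nonnegative components, a PARAFAC factorization. A small sketch with arbitrary dimensions:

import numpy as np

rng = np.random.default_rng(2)
k, p, d = 3, 4, 5        # latent classes, variables, categories per variable

nu = rng.dirichlet(np.ones(k))                                # class weights
lam = [rng.dirichlet(np.ones(d), size=k) for _ in range(p)]   # class-specific marginals

pmf = np.zeros((d,) * p)
for h in range(k):
    outer = lam[0][h]
    for j in range(1, p):                 # rank-1 component: outer product of marginals
        outer = np.multiply.outer(outer, lam[j][h])
    pmf += nu[h] * outer

assert np.isclose(pmf.sum(), 1.0)         # a valid joint pmf with nonnegative rank <= k
print(pmf.shape, pmf.sum())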

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
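The chapter's optimal Gaussian approximation is specific to Diaconis--Ylvisaker priors; as a generic, hedged stand-in, the sketch below computes the familiar Laplace (mode plus inverse-Hessian) Gaussian approximation for a toy Poisson log-linear model with a Gaussian prior, which conveys the flavor of approximating a log-linear posterior by a Gaussian.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(100), rng.normal(size=100)])   # design matrix
beta_true = np.array([0.5, -0.3])
y = rng.poisson(np.exp(X @ beta_true))

tau2 = 10.0   # prior variance of the N(0, tau2 I) prior

def neg_log_post(beta):
    eta = X @ beta
    return -(y @ eta - np.exp(eta).sum()) + beta @ beta / (2 * tau2)

mode = minimize(neg_log_post, np.zeros(2), method="BFGS").x
# Hessian of the negative log posterior at the mode gives the Gaussian precision.
w = np.exp(X @ mode)
H = X.T @ (w[:, None] * X) + np.eye(2) / tau2
cov = np.linalg.inv(H)
print("Gaussian approximation: mean", mode, "cov diagonal", np.diag(cov))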

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
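The central observable in this framework is the waiting time between exceedances of a high threshold. A minimal illustration on a simulated series (an AR(1) stand-in, not the max-stable velocity processes of the chapter):

import numpy as np

rng = np.random.default_rng(4)
n, phi = 10000, 0.7
x = np.zeros(n)
for t in range(1, n):                      # AR(1) with heavy-tailed innovations
    x[t] = phi * x[t - 1] + rng.standard_t(df=4)

u = np.quantile(x, 0.98)                   # high threshold
exceed_times = np.flatnonzero(x > u)
waits = np.diff(exceed_times)              # inter-exceedance waiting times
print("threshold:", round(u, 2), "exceedances:", exceed_times.size,
      "mean wait:", waits.mean())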

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
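One of the approximating kernels mentioned, the low-rank Gaussian process approximation, can be sketched with a Nystrom factorization; the RBF kernel, the inducing-point count, and the data here are assumptions for illustration only.

import numpy as np

def rbf(a, b, ell=0.5):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, 500))
idx = rng.choice(len(x), size=30, replace=False)      # inducing subset

K = rbf(x, x)
Knm = rbf(x, x[idx])
Kmm = rbf(x[idx], x[idx]) + 1e-8 * np.eye(len(idx))   # jitter for stability
K_nystrom = Knm @ np.linalg.solve(Kmm, Knm.T)         # rank-30 approximation of K

rel_err = np.linalg.norm(K - K_nystrom) / np.linalg.norm(K)
print(f"relative Frobenius error of the rank-{len(idx)} approximation: {rel_err:.3f}")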

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
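A hedged sketch of the truncated normal data augmentation (Albert-Chib) sampler for an intercept-only probit model with rare successes; the slow mixing discussed above shows up as a lag-1 autocorrelation near one. The sample size, prior variance, and diagnostic are illustrative choices, not the chapter's setup.

import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(6)
n, tau2 = 2000, 100.0
beta_true = -2.5                                        # P(y=1) = Phi(-2.5), about 0.6%
y = (rng.normal(size=n) + beta_true > 0).astype(int)

beta, draws = 0.0, []
for it in range(2000):
    # z_i | beta: N(beta, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0].
    lo = np.where(y == 1, -beta, -np.inf)
    hi = np.where(y == 1, np.inf, -beta)
    z = beta + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # beta | z: conjugate normal update under a N(0, tau2) prior.
    prec = n + 1.0 / tau2
    beta = rng.normal(z.sum() / prec, 1.0 / np.sqrt(prec))
    draws.append(beta)

draws = np.array(draws[500:])
# Lag-1 autocorrelation as a crude mixing diagnostic (near 1 => slow mixing).
print("lag-1 autocorrelation:", np.corrcoef(draws[:-1], draws[1:])[0, 1])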

Relevância:

80.00%

Publicador:

Resumo:

Multiple Correspondence Analysis (MCA), a methodological resource used by Bourdieu and his team at an advanced level of theoretical-empirical synthesis, is a fundamental tool for the analytical construction of relational spaces. It makes it possible to position units of analysis relationally as a function of a given set of variables and to capture the resulting multiplicity both graphically and analytically. Starting from a general reflection on the potential of the tool, this article analyzes the configuration of the private university space in Argentina (1955-1983) using MCA, which made it possible to determine the relations of homology and the principles of differentiation among the institutions that compose it.
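The core computation of MCA can be sketched directly: correspondence analysis of the indicator (one-hot) matrix of the categorical variables, via an SVD of standardized residuals. The toy institutions and variables below are invented, not the article's data.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "origin": ["catholic", "secular", "catholic", "business", "secular"],
    "size":   ["small", "large", "large", "small", "small"],
    "region": ["capital", "capital", "interior", "interior", "capital"],
})
Z = pd.get_dummies(df).to_numpy(dtype=float)     # indicator matrix

P = Z / Z.sum()                                  # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)              # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sing, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = (U * sing) / np.sqrt(r)[:, None]    # principal coordinates of the units
print("first two dimensions:\n", row_coords[:, :2].round(3))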

Relevância:

80.00%

Publicador:

Resumo:

This paper analyzes the long-run evolution of ecclesiastical lordship, adding to our knowledge of seigneurial forms in Leonese Extremadura. We consider specifically the case of the cathedral chapter of Salamanca between the twelfth and fifteenth centuries. We seek to show that it did not have an identical structure throughout the period and that its transformations are explained by a complex conjunction of variables. These transformations affected the social structures of the countryside, especially the development of processes of peasant social differentiation. We show that the concrete form in which rent was collected could alter the social structures of the communities, and that the development of wage-based social relations was closely linked to economic conjunctures and to the possibilities and limits of seigneurial management. Finally, we highlight that social transformation was not always irreversible and that its consolidation depended on the lords' inability to exercise their political powers.

Relevância:

80.00%

Publicador:

Resumo:

A field experiment was conducted on a real continuous steel Gerber-truss bridge with artificial damage applied. This article summarizes the results of the experiment for bridge damage detection utilizing traffic-induced vibrations. It investigates the sensitivity to bridge damage of a number of quantities, including the identified modal parameters and their statistical patterns, Nair's damage indicator and its statistical pattern, and different sets of measurement points. The modal parameters are identified by autoregressive time-series models. The decision on the bridge health condition is made, and the sensitivity of the variables is evaluated, with the aid of the Mahalanobis–Taguchi system, a multivariate pattern recognition tool. Several observations are made. For the modal parameters, although bridge damage detection can be achieved by applying the Mahalanobis–Taguchi system to certain modal parameters from certain sets of measurement points, difficulties arose in the subjective selection of meaningful bridge modes and in the low sensitivity of the statistical pattern of the modal parameters to damage. For Nair's damage indicator, bridge damage detection could be achieved by applying the Mahalanobis–Taguchi system to Nair's damage indicators from most sets of measurement points. As a damage indicator, Nair's indicator was superior to the modal parameters, with three main advantages: it requires no subjective decision in its calculation, so potential human errors can be prevented and an automatic detection task can be achieved; its statistical pattern has high sensitivity to damage; and, finally, it is flexible regarding the choice of sets of measurement points.
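The Mahalanobis–Taguchi system's core step is scoring observations against a healthy reference space by a scaled Mahalanobis distance. A minimal sketch with placeholder feature vectors (standing in, for example, for Nair's damage indicators from several measurement points):

import numpy as np

rng = np.random.default_rng(7)
baseline = rng.normal(0, 1, (200, 6))      # healthy-state features
test = rng.normal(0.8, 1.2, (50, 6))       # possibly damaged state

mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def mahalanobis_sq(x):
    d = x - mu
    # MTS convention: divide by the number of variables so healthy scores average ~1.
    return np.einsum("ij,jk,ik->i", d, cov_inv, d) / x.shape[1]

print("mean scaled MD^2, baseline:", mahalanobis_sq(baseline).mean())
print("mean scaled MD^2, test:    ", mahalanobis_sq(test).mean())   # inflated under damage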

Relevância:

80.00%

Publicador:

Resumo:

PEDRINI, Aldomar; SZOKOLAY, Steven. Recomendações para o desenvolvimento de uma ferramenta de suporte às primeiras decisões projetuais visando ao desempenho energético de edificações de escritório em clima quente. Ambiente Construído, Porto Alegre, v. 5, n. 1, p. 39-54, jan./mar. 2005. Quarterly. Available at: . Accessed: 4 Oct. 2010.

Relevância:

80.00%

Publicador:

Resumo:

This thesis develops bootstrap methods for the factor models that have been commonly used to generate forecasts since Stock and Watson's (2002) pioneering article on diffusion indices. These models accommodate a large number of macroeconomic and financial variables as predictors, a useful feature for incorporating the diverse information available to economic agents. The thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It consists of three complementary chapters, the first two in collaboration with Sílvia Gonçalves and Benoit Perron.

In the first paper, we study how bootstrap methods can be used for inference in models forecasting h periods into the future. To this end, we examine bootstrap inference in a factor-augmented regression context where the errors may be autocorrelated. The paper generalizes the results of Gonçalves and Perron (2014) and proposes and justifies two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates of confidence intervals for the estimated coefficients using these approaches, compared with asymptotic theory and the wild bootstrap, in the presence of serial correlation in the regression errors.

The second chapter proposes bootstrap methods for constructing prediction intervals that relax the assumption of normally distributed innovations. We propose bootstrap prediction intervals for an observation h periods into the future and for its conditional mean. We assume these forecasts are made using a set of factors extracted from a large panel of variables. Because we treat these factors as latent, our forecasts depend both on the estimated factors and on the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed the construction of asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct prediction intervals that are valid under more general assumptions. Moreover, even under Gaussianity, the bootstrap leads to more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014).

In the third chapter, we propose consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors spanning the space generated by the true factors. The second criterion whose validity we establish generalizes the bootstrap approximation of Shao (1996) to factor-augmented regressions. Simulations show an improvement in the probability of parsimoniously selecting the estimated factors compared with the available selection methods. The empirical application revisits the relation between macroeconomic and financial factors and excess returns on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, the factors strongly correlated with interest rate spreads and the Fama-French factors have good predictive power for excess returns.
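A hedged sketch of the basic objects involved: factors extracted by principal components from a synthetic panel, an h-step factor-augmented regression, and a plain wild bootstrap for the slope (the thesis's block and dependent wild bootstrap variants modify the sign-flipping step to handle serial correlation). Estimated factors are identified only up to rotation, so the coefficient refers to the first estimated factor.

import numpy as np

rng = np.random.default_rng(8)
T, N, h, k = 200, 100, 1, 2
F = rng.normal(size=(T, k))                       # true latent factors
Lam = rng.normal(size=(N, k))
X = F @ Lam.T + rng.normal(size=(T, N))           # observed panel of predictors
y = 0.5 * F[:, 0] + rng.normal(size=T)            # target series

# Factors estimated by principal components on the standardized panel.
Xs = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
Fhat = Xs @ Vt[:k].T / np.sqrt(N)

Z = np.column_stack([np.ones(T - h), Fhat[:-h]])  # h-step factor-augmented regression
beta = np.linalg.lstsq(Z, y[h:], rcond=None)[0]
resid = y[h:] - Z @ beta

boot = []
for _ in range(999):                              # wild bootstrap on the residuals
    ystar = Z @ beta + resid * rng.choice([-1.0, 1.0], size=len(resid))
    boot.append(np.linalg.lstsq(Z, ystar, rcond=None)[0][1])
print("slope on factor 1:", beta[1].round(3),
      "95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))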

Relevância:

80.00%

Publicador:

Resumo:

The use of the Design by Analysis (DBA) route is a modern trend in international pressure vessel and piping codes in mechanical engineering. However, to apply DBA to structures under variable mechanical and thermal loads, it is necessary to ensure that the plastic collapse modes, alternating plasticity and incremental collapse (with instantaneous plastic collapse as a particular case), are precluded. The tool available to achieve this is shakedown theory. Unfortunately, practical numerical applications of shakedown theory result in very large nonlinear optimization problems with nonlinear constraints. Precise, robust, and efficient algorithms and finite elements to solve this problem in finite dimension are a more recent achievement. However, to solve real problems at an industrial level, it is also necessary to consider more realistic material properties and to perform 3D analyses. Limited kinematic hardening is a typical property of common steels and should be considered in realistic applications. In this paper, a new finite element with internal thermodynamic variables to model kinematic hardening materials is developed and tested. This element is a mixed ten-node tetrahedron, and through an appropriate change of variables it is possible to embed it in the shakedown analysis software developed by Zouain and co-workers for elastic ideally plastic materials, and then use it to perform 3D shakedown analyses in cases with limited kinematic hardening materials.
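As a much-simplified, hedged illustration of the constitutive ingredient involved, the sketch below implements the classical one-dimensional return-mapping update for linear kinematic hardening via a back stress; the paper's limited hardening additionally bounds the back stress, and the element itself is 3D and mixed, neither of which is attempted here. The material constants are generic.

import numpy as np

E, H, sig_y = 200e3, 10e3, 250.0     # Young's modulus, hardening modulus, yield stress (MPa)

def return_map(eps_history):
    sig = alpha = eps_prev = 0.0
    out = []
    for eps in eps_history:
        sig_trial = sig + E * (eps - eps_prev)    # elastic predictor
        xi = sig_trial - alpha                    # relative (shifted) stress
        f = abs(xi) - sig_y                       # yield function
        if f > 0.0:                               # plastic corrector
            dgamma = f / (E + H)
            sig = sig_trial - E * dgamma * np.sign(xi)
            alpha += H * dgamma * np.sign(xi)     # back stress evolves: kinematic hardening
        else:
            sig = sig_trial
        eps_prev = eps
        out.append(sig)
    return np.array(out)

strain = np.concatenate([np.linspace(0, 0.004, 50), np.linspace(0.004, -0.004, 100)])
print(return_map(strain)[[49, -1]])   # stress at load reversal and at the end of the cycle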

Relevância:

80.00%

Publicador:

Resumo:

To investigate whether the alterations of the mucosa of the diverted colon segment, evidenced in fecal colitis, are able to alter bacterial translocation (BT). Methods: Sixty-two male Wistar rats weighing 220 to 320 g were divided into two groups: A (colostomy) and B (control), with 31 animals each. In group A, all animals underwent end colostomy, with one stoma, in the ascending colon. On the 70th postoperative day, 2 ml of 0.9% saline solution was injected by rectal route (into the diverted segment) in five rats (subgroup A1); in eight, 2 ml of a solution containing Escherichia coli ATCC 25922 (American Type Culture Collection) at a concentration of 10^8 colony-forming units per milliliter (CFU/ml) was inoculated by rectal route (subgroup A2); in ten, the same E. coli solution was inoculated at a concentration of 10^11 CFU/ml (subgroup A3); and in eight, part of the mucus found in the diverted distal colonic segment was collected for determination of neutral sugars and total proteins (subgroup A4). The animals from group B underwent the same procedures as group A, except for the creation of the colostomy. In rats from subgroups A1, A2, A3, B1, B2, and B3, 2 ml of blood was aspirated from the heart, and fragments of mesenteric lymph nodes, liver, spleen, lung, and kidney were taken for microbiological analysis after death. This analysis consisted of detecting the presence of E. coli ATCC 25922 CFU. The Mann-Whitney and ANOVA tests were applied for the association of variables. Results: BT was evidenced only in the animals in which the inoculated concentration of E. coli ATCC 25922 reached 10^11 CFU/ml, i.e., in subgroups A3 and B3, and was significantly greater (80%) in the animals without colostomy (subgroup B3) than in those with colostomy (20%, subgroup A3) (p < 0.05). Lung, liver, and mesenteric lymph nodes were the tissues with the largest percentage of bacterial recovery, in both subgroup A3 and subgroup B3. Blood culture was positive in 60% of the animals from subgroup B3 and in 10% of those from subgroup A3 (p < 0.05). There was a greater concentration of neutral sugars in subgroup A4 (mean 27.3 mg/ml) than in subgroup B4 (mean 8.4 mg/ml) (p < 0.05). Conclusion: The modifications in the architecture of the intestinal mucosa in colitis following fecal diversion can cause alterations in the intestinal barrier, but they do not necessarily lead to an increased frequency of BT.

Relevância:

80.00%

Publicador:

Resumo:

Several studies have been undertaken or attempted by industry and academe to address the need for lodging-industry carbon benchmarking. However, these studies have focused on normalizing resource use with the goal of rating or comparing all properties based on multivariate regression according to an industry-wide set of variables, with the result that the data sets available for analysis were limited. This approach is backward, because practical hotel industry benchmarking must first be undertaken within a specific location and segment. Therefore, the CHSB study's goal is to build a representative database providing raw benchmarks as a base for industry comparisons. These results are presented in the CHSB2016 Index, through which a user can obtain the range of benchmarks for energy consumption, water consumption, and greenhouse gas emissions for hotels within specific segments and geographic locations.

Relevância:

80.00%

Publicador:

Resumo:

The definition of the boundaries of the firm is a subject that has long occupied organizational theorists, with the seminal work of Coase (1937) seen as the trigger for a theoretical evolution, with emphasis on governance structures, that led to a modern theory of incomplete contracts. Transaction Cost Economics (TCE) and Agency Theory arose within this evolution and are widely used in studies related to the theme. Empirically, data envelopment analysis (DEA) has established itself as a suitable tool for efficiency analysis. Although TCE argues that specific assets must be internalized, recent studies outside the mainstream of the theory show that firms often decide, for various reasons, to contract for them on the market. Research on transaction costs faces the unavailability of information and methodological difficulties in measuring its critical variables, and further methodological deepening is still needed. The theoretical framework includes classic works of TCE and Agency Theory, but also more recent work, outside the TCE mainstream, which warns of strategies in the use of specific assets that are not necessarily aligned with the classical ideas of TCE. The Brazilian oil industry is the focus of this thesis, which aimed to evaluate the efficiency of contracts involving high-specificity services outsourced by Petrobras. To this end, we categorized the outsourced services in terms of specificity and described the services of highest specificity. We then verified the existence of relationships between the specificity of the services and a number of variables, finding results that diverge from those preached by the TCE mainstream. Next, we designed a DEA model to analyze efficiency in the use of onshore drilling rigs, identified as among the services of highest specificity. The next step was the application of the model to evaluate the performance of drilling-rig contracts. Finally, we verified the existence of relationships between the efficiency of the contracts and a number of variables, again finding results not consistent with the theory's mainstream. Regarding the efficiency analysis of drilling-rig contracts, the model developed is compatible with what is found in the academic literature on drilling-rig efficiency. The efficiency results show a wide range of scores, from 31.79% to 100%, with a low average efficiency for the sample. The results are consonant with the practices adopted by Petrobras and strengthen DEA as an important tool in efficiency studies, with the possibility of use in analyzing other types of contracts. In terms of theoretical findings, the results reinforce the argument that there are situations in which organizations' strategies for the use of assets and services of high specificity do not necessarily follow what is recommended by the TCE mainstream.
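A minimal input-oriented CCR DEA sketch of the kind used in the thesis: for each decision-making unit (for instance, a drilling-rig contract) an LP yields an efficiency score between 0 and 1. The input/output data are invented placeholders (for example, day rate and downtime as inputs, meters drilled as output).

import numpy as np
from scipy.optimize import linprog

X = np.array([[2., 3., 4., 5.], [1., 2., 3., 1.]])   # inputs  (m inputs x n DMUs)
Y = np.array([[3., 5., 6., 4.]])                     # outputs (s outputs x n DMUs)
m, n = X.shape
s = Y.shape[0]

for j0 in range(n):
    c = np.zeros(1 + n); c[0] = 1.0                  # minimize theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]                          # sum lam*x_j <= theta * x_j0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                                # sum lam*y_j >= y_j0
    b_ub[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    print(f"DMU {j0}: efficiency = {res.x[0]:.3f}")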

Relevância:

80.00%

Publicador:

Resumo:

For some years now, the theme of bullying has been widely discussed in today's modern, industrialized societies. With advances in technique and scientific knowledge, only recently has it been given due attention, considering the damage it can cause, in the medium and long term, to all those involved in this phenomenon of externalized violence. Although multiple strategies have been devised to explain and combat it, it remains a very current topic: despite intervention models that are to some extent already standardized, there is always a need to adapt them to the context of each case. This theme therefore continues, and will continue, to be the object of study and deeper investigation. In this work, we analyze the theme of bullying in full, from its conceptualization through its etiology, risk factors, and participants, as well as its diagnosis, combat, and prevention. Starting from the premise that this theme involves a series of variables that go beyond behavioral problems and/or indiscipline, such as cultural, socio-familial, school, personal/psychological, and psychopathological variables, it is on this last kind of variable that we focus our work more specifically, namely on establishing a correlation between bullying and Attention Deficit Hyperactivity Disorder. In younger students, attention deficit with hyperactivity, depression, and anxiety are problems that run parallel to bullying. Given that children with these characteristics are more vulnerable to violence, since they have difficulties integrating at school and/or in their peer group, we can state emphatically that they belong to a group at enormous risk of becoming involved in bullying-related behaviors in all their scope.