Abstract:
STATEMENT OF PROBLEM: Wear of methacrylate artificial teeth resulting in vertical loss is a problem for both dentists and patients. PURPOSE: The purpose of this study was to quantify wear of artificial teeth in vivo and to relate it to subject and tooth variables. MATERIAL AND METHODS: Twenty-eight subjects treated with complete dentures received 2 artificial tooth materials (polymethyl methacrylate (PMMA) matrix with double-cross-linked PMMA fillers: 35%/59% for SR Antaris DCL and SR Postaris DCL; 48%/46% for an experimental material). At baseline and after 12 months, impressions of the dentures were poured with improved stone. After laser scanning, the casts were superimposed and matched. Maximal vertical loss (mm) and volumetric loss (mm³) were calculated for each tooth and log-transformed to reduce variability. Volumetric loss was related to the occlusally active surface area. Linear mixed models were used to study the influence of the factors jaw, tooth, and material on adjusted (residual) wear values (α=.05). RESULTS: Due to dropouts (n=5) and unmatchable casts (n=3), 69% of all teeth were analyzed. Volumetric loss had a strong linear relationship to surface area (P<.001); this was less pronounced for vertical loss (P=.004). The factor with the highest influence was the subject. Wear was tooth dependent (increasing from incisors to molars). However, these differences diminished once the wear rates were adjusted for occlusal area, and only a few remained significant (anterior versus posterior maxillary teeth). Another influencing factor was the age of the subject. CONCLUSIONS: Clinical wear of artificial teeth is higher than previously measured or expected. The presented method of analyzing wear of artificial teeth using a laser-scanning device seemed suitable.
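As a rough illustration of the modelling step described above, the sketch below fits a linear mixed model to log-transformed wear values with a random intercept per subject, using Python's statsmodels. The data, variable names, and effect sizes are synthetic assumptions for demonstration only; the study does not publish its code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic wear data (not the study's measurements): one log-transformed
# volumetric-loss value per tooth, nested within 28 subjects
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(28), 8),
    "jaw": np.tile(["upper", "lower"], 28 * 4),
    "tooth": np.tile(np.repeat(["incisor", "canine", "premolar", "molar"], 2), 28),
    "material": rng.choice(["DCL", "experimental"], size=28 * 8),
})
df["log_wear"] = (rng.normal(0, 0.3, len(df))
                  + 0.2 * (df["tooth"] == "molar")          # hypothetical tooth effect
                  + rng.normal(0, 0.5, 28)[df["subject"]])  # subject-level variability

# Linear mixed model: fixed effects for jaw, tooth and material,
# random intercept per subject (the dominant source of variability)
model = smf.mixedlm("log_wear ~ jaw + tooth + material", df, groups=df["subject"])
print(model.fit().summary())
```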
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact, to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
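The following minimal sketch shows the SVD decomposition underlying any biplot: row and column coordinates come from the two factored matrices, and projections of row points onto column vectors approximate the centred data. It uses a plain row-principal scaling; the contribution-based "standard biplot" scaling proposed in the abstract is not reproduced here.

```python
import numpy as np

# Toy data matrix: rows = samples, columns = variables (synthetic)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))

# Center the columns (optionally standardize)
Xc = X - X.mean(axis=0)

# Singular value decomposition: Xc = U S V^T
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Row-principal scaling: rows in principal coordinates,
# columns in standard coordinates
rows = U * s   # case/sample coordinates
cols = Vt.T    # variable coordinates

# Projections of row points onto column vectors approximate Xc
# (here a rank-2 approximation, as in a 2-D biplot)
approx = rows[:, :2] @ cols[:, :2].T
print(np.abs(Xc - approx).max())
```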
Abstract:
The first comprehensive AO pediatric long-bone fracture classification system has been proposed following a structured path of development and validation with experienced pediatric surgeons. A Web-based multicenter agreement study involving 70 surgeons in 15 clinics and 5 countries was conducted to assess the reliability and accuracy of this classification when used by a wide range of surgeons with various levels of experience. Training was provided at each clinic before the session. Using the Internet, participants could log in at any time and classify 275 supracondylar, radius, and tibia fractures at their own pace. The fracture diagnosis was made following the hierarchy of the classification system, using both clinical terminology and codes. Kappa coefficients for the single-surgeon diagnosis of epiphyseal, metaphyseal, or diaphyseal fracture type were 0.66, 0.80, and 0.91, respectively. Median accuracy estimates for each bone and type were all greater than 80%. Depending on their experience and specialization, surgeons varied greatly in their ability to classify fractures. Pediatric training and at least 2 years of experience were associated with significant improvement in reliability and accuracy. Kappa coefficients for the diagnosis of specific child patterns were 0.51, 0.63, and 0.48 for epiphyseal, metaphyseal, and diaphyseal fractures, respectively. Identified reasons for coding discrepancies were related to different understandings of terminology and definitions, as well as poor-quality radiographic images. Results supported some minor adjustments in the coding of fracture type and child patterns. This classification system received wide acceptance and support among the surgeons involved. As long as appropriate training was provided, the classification system was reliable, especially among surgeons with a minimum of 2 years of clinical experience. We encourage broad-based consultation between surgeons' international societies and the use of this classification system in the context of clinical practice as well as prospectively in clinical studies.
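For readers unfamiliar with the agreement statistic reported here, the snippet below computes a two-rater Cohen's kappa with scikit-learn on hypothetical fracture classifications. The study pooled many raters (a multi-rater design), so this is only a minimal illustration of the measure itself.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical classifications of the same five fractures by two surgeons
rater_a = ["epiphyseal", "metaphyseal", "diaphyseal", "metaphyseal", "diaphyseal"]
rater_b = ["epiphyseal", "diaphyseal", "diaphyseal", "metaphyseal", "diaphyseal"]

# Kappa corrects raw agreement for agreement expected by chance
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")
```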
Abstract:
A descriptive, analytical, population-based study conducted with a random probability sample of 340 hypertensive patients, representative of the Family Health Strategy (Estratégia Saúde da Família, ESF) of João Pessoa, PB, Brazil. The study comprises the first stage of a cohort started in 2008. The instrument used was adapted from the Primary Care Assessment Tool, revalidated in Brazil. Logistic regression assessed the association between blood pressure control, sociodemographic variables, and the adherence/bonding indicator. Of the 340 hypertensive patients, 32.6% were followed by the ESF and 89.1% showed satisfactory adherence/bonding. Older patients were more likely to control their blood pressure, suggesting better self-care perception and greater treatment adherence. The study gave visibility to the problem of hypertension control through evaluation of the service. It is hoped that this model can be adopted in other settings, generating parameters for comparisons between different municipalities.
Abstract:
Identifying variables associated with the type of home care (AD) received by users of the Unified Health System (SUS) contributes to care management within the Health Care Network (RAS). The objective was to identify variables associated with the type of home care of users at selected Primary Health Care Units (UBS) in Belo Horizonte. Cross-sectional study in two UBS including all users (n=114) receiving home care in the coverage area. Multiple logistic regression with stepwise selection of significant variables was used. Greater clinical impairment (OR=27.47), sad emotional state (OR=24.36), risk of pressure ulcer on the Braden scale (OR=7.6), and semi-dependence in activities of daily living (ADL) on the Katz index (OR=63.8) were strongly associated with the type of home care (p < 0.05). Variables grounded in the subjects' social, family, and clinical context support comprehensive care and decision-making by the health team.
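As a generic illustration of how such odds ratios arise, the sketch below fits a logistic regression with statsmodels and exponentiates the coefficients. All variable names and data are hypothetical stand-ins, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic data: binary outcome (e.g., more intensive home care)
# against two hypothetical binary predictors
rng = np.random.default_rng(1)
n = 114
df = pd.DataFrame({
    "clinical_impairment": rng.integers(0, 2, n),
    "braden_risk": rng.integers(0, 2, n),
})
logit = -0.5 + 1.2 * df["clinical_impairment"] + 0.8 * df["braden_risk"]
df["intensive_home_care"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["clinical_impairment", "braden_risk"]])
fit = sm.Logit(df["intensive_home_care"], X).fit(disp=False)

# Odds ratios (and their CIs) are the exponentiated coefficients
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```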
Abstract:
The aim of this study was to identify the pattern of exclusive breastfeeding in the first six months of life of children born in a Baby-Friendly Hospital and the factors contributing to early weaning. Prospective cohort study with 261 mothers and children. Data were evaluated by survival analysis, using Kaplan-Meier curves and the log-rank test for univariate analysis. Multivariate analysis was performed using the Cox proportional hazards regression model. Over the six months, exclusive breastfeeding at 30, 90, 120, 150, and 180 days was practiced at rates of 75%, 52%, 33%, 19%, and 5.7%, respectively. In the multivariate analysis, the variables associated with risk of early weaning were breast complications during the hospital stay and, at the follow-up visit, inadequate positioning, as well as the combination of the two. The Baby-Friendly Hospital Initiative favored exclusive breastfeeding.
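A minimal sketch of the survival workflow described above (Kaplan-Meier curve, log-rank test, Cox proportional hazards), using the Python lifelines package; the study does not name its software, and the data here are invented for illustration.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical data: days of exclusive breastfeeding, whether weaning
# occurred (event=1), and a binary risk factor (breast complication)
df = pd.DataFrame({
    "days":         [30, 90, 120, 150, 180, 60, 45, 180, 150, 90],
    "weaned":       [1, 1, 1, 1, 0, 1, 1, 0, 1, 1],
    "complication": [1, 0, 1, 0, 0, 1, 1, 0, 0, 1],
})

# Kaplan-Meier curve of "survival" in exclusive breastfeeding
km = KaplanMeierFitter().fit(df["days"], df["weaned"])
print(km.survival_function_)

# Log-rank test between groups with and without complications
g = df["complication"] == 1
print(logrank_test(df.loc[g, "days"], df.loc[~g, "days"],
                   df.loc[g, "weaned"], df.loc[~g, "weaned"]).p_value)

# Cox proportional hazards model for the multivariate step
cox = CoxPHFitter().fit(df, duration_col="days", event_col="weaned")
cox.print_summary()
```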
Abstract:
Background: Peritoneal dialysis (PD) is a renal replacement technique that uses the physiological properties of the peritoneum as a dialysis membrane. The technique requires a catheter, surgically placed in the pouch of Douglas, to allow instillation of a dialysis solution, the dialysate. One of the feared complications of this technique is infectious peritonitis, which requires prompt administration of adequate antibiotic therapy. Peritonitis can sometimes lead to removal of the dialysis catheter with definitive failure of the technique or, more rarely, to the death of the patient. This study examines predictive factors for this complication. It catalogues the organisms involved and their sensitivity to various antibiotics. It also analyzes the consequences of peritonitis, such as the mean duration of hospitalizations, technique failures requiring definitive transfer to hemodialysis, and deaths. Method: This is a retrospective single-center study of the records of patients included in the peritoneal dialysis program of the CHUV between January 1, 1995 and December 31, 2010. Results: The study includes 108 patients, 65 men and 43 women. Mean age was 52.5 ± 17.84 years (range 22-87). A total of 113 episodes of peritonitis were recorded over a cumulative duration of 2932.24 patient-months. The overall incidence of peritonitis was 1 episode per 25.95 patient-months. Median overall peritonitis-free survival was 23.56 months. Statistically significant intergroup variability in peritonitis-free survival was demonstrated between autonomous and non-autonomous patients [log-rank (Mantel-Cox): 0.04], between diabetic and non-diabetic patients [log-rank (Mantel-Cox): 0.002], and between patients with a Charlson score above 5 and those with a score of 5 or below [log-rank (Mantel-Cox): 0.002]. A statistically significant difference in technique survival was also demonstrated between autonomous and non-autonomous patients [log-rank (Mantel-Cox) < 0.001], and between patients with a Charlson score above versus at or below 5 [log-rank (Mantel-Cox): 0.047]. Staphylococcus epidermidis was the pathogen most frequently isolated in peritonitis episodes (23.9%); this organism showed 40.74% sensitivity to oxacillin. No case of MRSA peritonitis was recorded in this patient population. One episode of peritonitis caused the death of a patient (<1%). Conclusion: The calculated incidence of peritonitis satisfies the recommendations of the International Society for Peritoneal Dialysis (ISPD). Statistically significant intergroup variability in peritonitis-free survival was shown for autonomy, metabolic status, and the Charlson comorbidity score. Statistically significant intergroup variability in technique survival was also demonstrated for autonomy and the Charlson comorbidity score. The sensitivity statistics show excellent antibiotic coverage of the isolated organisms by the current empirical treatment (vancomycin + ceftazidime). Peritonitis-related mortality was extremely low in this patient population.
Abstract:
OBJECTIVES: (1) To evaluate the changes in surface roughness and gloss after simulated toothbrushing of 9 composite materials and 2 ceramic materials in relation to brushing time and load in vitro; (2) to assess the relationship between surface gloss and surface roughness. METHODS: Eight flat specimens of composite materials (microfilled: Adoro, Filtek Supreme, Heliomolar; microhybrid: Four Seasons, Tetric EvoCeram; hybrid: Compoglass F, Targis, Tetric Ceram; macrohybrid: Grandio) and two ceramic materials (IPS d.SIGN and IPS Empress, polished) were fabricated according to the manufacturer's instructions and optimally polished with up to 4000-grit SiC. The specimens were subjected to a toothbrushing (TB) simulation device (Willytec) with rotating movements, toothpaste slurry, and three different loads (100 g/250 g/350 g). At hourly intervals from 1 h to 10 h of TB, mean surface roughness Ra was measured with an optical sensor and surface gloss (Gl) with a glossmeter. Statistical analysis was performed on log-transformed Ra data, applying two-way ANOVA to evaluate the interaction between load and material and between load and brushing time. RESULTS: There was a significant interaction between material and load as well as between load and brushing time (p<0.0001). The microhybrid and hybrid materials showed more surface deterioration at higher loads, whereas for the microfilled resins Heliomolar and Adoro the reverse was true. For the ceramic materials, little or no deterioration was observed over time, independent of load. The ceramic materials and 3 of the composite materials (roughness) showed no further deterioration after 5 h of toothbrushing. Mean surface gloss was the parameter that discriminated best between the materials, followed by mean surface roughness Ra. There was a strong correlation between surface gloss and surface roughness for all materials except the ceramics. Evaluation of the deterioration curves of individual specimens revealed a more or less synchronous course, hinting at specific external conditions rather than the true variability in relation to the tested material. SIGNIFICANCE: The surface roughness and gloss of dental materials change with brushing time and load, resulting in different material rankings. Apart from Grandio, the hybrid composite resins were more prone to surface changes than the microfilled composites. The deterioration potential of a composite material can be quickly assessed by measuring surface gloss. For this purpose, a brushing time of 10 h (=72,000 strokes) is needed. In further comparative studies, specimens of different materials should be tested in one series to estimate the true variability.
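The analysis step can be illustrated with a short sketch: a two-way ANOVA on log-transformed Ra values with a material-by-load interaction, via statsmodels. The data below are synthetic placeholders, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic roughness (Ra) measurements for three material classes
# at three brushing loads (8 specimens per cell)
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "material": np.repeat(["microhybrid", "microfilled", "ceramic"], 24),
    "load_g": np.tile(np.repeat([100, 250, 350], 8), 3),
    "Ra": rng.lognormal(mean=-1.0, sigma=0.3, size=72),
})

# Two-way ANOVA on log-transformed Ra with a material x load interaction
df["log_Ra"] = np.log(df["Ra"])
model = smf.ols("log_Ra ~ C(material) * C(load_g)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```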
Abstract:
We analyze crash data collected by the Iowa Department of Transportation using Bayesian methods. The data set includes monthly crash numbers, estimated monthly traffic volumes, site length and other information collected at 30 paired sites in Iowa over more than 20 years, during which an intervention experiment was set up. The intervention consisted of converting 15 undivided road segments from four lanes to three, while an additional 15 segments, thought to be comparable in terms of traffic-safety-related characteristics, were not converted. The main objective of this work is to find out whether the intervention reduces the number of crashes and the crash rates at the treated sites. We fitted a hierarchical Poisson regression model with a change-point to the number of monthly crashes per mile at each of the sites. Explanatory variables in the model included estimated monthly traffic volume, time, an indicator for intervention reflecting whether the site was a “treatment” or a “control” site, and various interactions. We accounted for seasonal effects in the number of crashes at a site by including smooth trigonometric functions with three different periods to reflect the four seasons of the year. A change-point at the month and year in which the intervention was completed for treated sites was also included. The number of crashes at a site can be thought to follow a Poisson distribution. To estimate the association between crashes and the explanatory variables, we used a log link function and added a random effect to account for overdispersion and for autocorrelation among observations obtained at the same site. We used proper but non-informative priors for all parameters in the model, and carried out all calculations using Markov chain Monte Carlo methods implemented in WinBUGS. We evaluated the effect of the four-to-three-lane conversion by comparing the expected number of crashes per year per mile during the years preceding and following the conversion for treatment and control sites. We estimated this difference using the observed traffic volumes at each site and also per 100,000,000 vehicles. We also conducted a prospective analysis to forecast the expected number of crashes per mile at each site in the study one year, three years and five years following the four-to-three-lane conversion. Posterior predictive distributions of the number of crashes, the crash rate and the percent reduction in crashes per mile were obtained for each site for the months of January and June one, three and five years after completion of the intervention. The model appears to fit the data well. We found that in most sites, the intervention was effective and reduced the number of crashes. Overall, and for the observed traffic volumes, the reduction in the expected number of crashes per year and mile at converted sites was 32.3% (31.4% to 33.5% with 95% probability), while at the control sites the reduction was estimated to be 7.1% (5.7% to 8.2% with 95% probability). When the reduction in the expected number of crashes per year, mile and 100,000,000 AADT was computed, the estimates were 44.3% (43.9% to 44.6%) and 25.5% (24.6% to 26.0%) for converted and control sites, respectively. In both cases, the percent reduction in the expected number of crashes during the years following the conversion was significantly larger at converted sites than at control sites, even though the number of crashes appears to decline over time at all sites.
Results indicate that the reduction in the expected number of crashes per mile has a steeper negative slope at converted than at control sites. Consistent with this, the forecasted reduction in the number of crashes per year and mile during the years after completion of the conversion at converted sites is more pronounced than at control sites. Seasonal effects on the number of crashes have been well documented. In this dataset we found that, as expected, the expected number of monthly crashes per mile tends to be higher during winter months than during the rest of the year. Perhaps more interestingly, we found that there is an interaction between the four-to-three-lane conversion and season: the reduction in the number of crashes appears to be more pronounced during months when the weather is nice than during other times of the year, even though a reduction was estimated for the entire year. Thus, it appears that the four-to-three-lane conversion, while effective year-round, is particularly effective in reducing the expected number of crashes in nice weather.
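The full model is a hierarchical Bayesian change-point model fitted in WinBUGS; as a much simpler frequentist sketch of the same log-link Poisson idea, the snippet below fits a single-site Poisson regression with a seasonal term, a conversion indicator, and traffic volume as an exposure offset. All numbers are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic monthly crash counts for one site (not the Iowa data)
rng = np.random.default_rng(3)
months = np.arange(120)
df = pd.DataFrame({
    "month": months,
    "converted": (months >= 60).astype(int),   # change-point at month 60
    "volume": rng.uniform(8000, 12000, 120),   # monthly traffic volume
})
rate = np.exp(-7 + 0.3 * np.cos(2 * np.pi * df["month"] / 12)
              - 0.4 * df["converted"]) * df["volume"]
df["crashes"] = rng.poisson(rate)

# Poisson regression with a log link; log(volume) enters as an offset,
# so coefficients act on the crash rate per unit of exposure
fit = smf.glm("crashes ~ converted + np.cos(2*np.pi*month/12)",
              data=df, family=sm.families.Poisson(),
              offset=np.log(df["volume"])).fit()
print(fit.summary())
```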
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), along with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, quite elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating or the combination of likelihood and robust M-estimation functions, are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
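A small sketch of the centered log-ratio (clr) transform mentioned above, under which the Aitchison distance between compositions becomes an ordinary Euclidean distance:

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of a composition (positive parts)."""
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

# Two compositions on the simplex (parts sum to 1)
comp1 = np.array([0.2, 0.3, 0.5])
comp2 = np.array([0.1, 0.4, 0.5])

# Aitchison distance = Euclidean distance between clr images
aitchison_dist = np.linalg.norm(clr(comp1) - clr(comp2))
print(clr(comp1), aitchison_dist)
```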
Abstract:
We obtain minimax lower bounds on the regret for the classical two-armed bandit problem. We provide a finite-sample minimax version of the well-known $\log n$ asymptotic lower bound of Lai and Robbins. Also, in contrast to the $\log n$ asymptotic results on the regret, we show that the minimax regret is achieved by mere random guessing under fairly mild conditions on the set of allowable configurations of the two arms. That is, we show that for every allocation rule and for every $n$, there is a configuration such that the regret at time $n$ is at least $1 - \epsilon$ times the regret of random guessing, where $\epsilon$ is any small positive constant.
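For context, the Lai-Robbins asymptotic bound referenced above is usually stated as follows (a standard textbook form for two arms, not quoted from this paper):

```latex
% Lai-Robbins asymptotic lower bound for a two-armed bandit: any
% uniformly good allocation rule satisfies
\liminf_{n \to \infty} \frac{R_n}{\log n}
  \;\ge\; \frac{\mu^{*} - \mu}{D(p \,\|\, p^{*})}
% where R_n is the expected regret at time n, \mu^{*} and \mu are the
% mean rewards of the better and worse arm, and D(p \| p^{*}) is the
% Kullback-Leibler divergence between their reward distributions.
```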
Abstract:
We show that the welfare of a representative consumer can be related to observable aggregate data. To a first order, the change in welfare is summarized by (the present value of) the Solow productivity residual and by the growth rate of the capital stock per capita. We also show that productivity and the capital stock suffice to calculate differences in welfare across countries, with both variables computed as log level deviations from a reference country. These results hold for arbitrary production technology, regardless of the degree of product market competition, and apply to open economies as well if TFP is constructed using absorption rather than GDP as the measure of output. They require that TFP be constructed using prices and quantities as perceived by consumers. Thus, factor shares need to be calculated using after-tax wages and rental rates, and will typically sum to less than one. We apply these results to calculate welfare gaps and growth rates in a sample of developed countries for which high-quality TFP and capital data are available. We find that under realistic scenarios the United Kingdom and Spain had the highest growth rates of welfare over our sample period of 1985-2005, but the United States had the highest level of welfare.
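A schematic rendering of the first-order claim, under the assumption that welfare changes are discounted at a constant factor β; this is an illustrative paraphrase of the abstract, not the paper's exact equation:

```latex
% Illustrative first-order summary suggested by the abstract:
\Delta W_t \;\approx\;
  \sum_{s=0}^{\infty} \beta^{s}\, \Delta \log \mathrm{TFP}_{t+s}
  \;+\; \Delta \log k_t
% where \Delta W_t is the (normalized) change in welfare, \beta is the
% discount factor, TFP is measured with after-tax factor prices, and
% k_t is the capital stock per capita.
```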
Abstract:
OBJECTIVES: To determine clinical and ultrasonographic predictors of joint replacement surgery across Europe in primary osteoarthritis (OA) of the knee. METHODS: This was a 3-year prospective study of a painful OA knee cohort (from a EULAR-sponsored, multicentre study). All subjects had clinical evaluation, radiographs and ultrasonography (US) at study entry. The rate of knee replacement surgery over the 3-year follow-up period was determined using Kaplan-Meier survival analyses. Predictive factors for joint replacement were identified by univariate log-rank test, then by multivariate analysis using a Cox proportional-hazards regression model. Potential baseline predictors included demographic, clinical, radiographic and US features. RESULTS: Of the 600 original patients, 531 (88.5%), mean age 67±10 years, mean disease duration 6.1±6.9 years, had follow-up data and were analysed. During follow-up (median 3 years; range 0-4 years), knee replacement was performed or required for 94 patients (estimated event rate 17.7%). In the multivariate analysis, predictors of joint replacement were as follows: Kellgren and Lawrence radiographic grade (grade ≥III vs <III, hazard ratio (HR) = 4.08 (95% CI 2.34 to 7.12), p<0.0001); ultrasonographic knee effusion (≥4 mm vs <4 mm) (HR = 2.63 (95% CI 1.70 to 4.06), p<0.0001); knee pain intensity on a 0-100 mm visual analogue scale (≥60 vs <60) (HR = 1.81 (95% CI 1.15 to 2.83), p=0.01); and disease duration (≥5 years vs <5 years) (HR = 1.63 (95% CI 1.08 to 2.47), p=0.02). Clinically detected effusion and US synovitis were not associated with joint replacement in the univariate analysis. CONCLUSION: Longitudinal evaluation of this OA cohort demonstrated significant progression to joint replacement. In addition to severity of radiographic damage and pain, US-detected effusion was a predictor of subsequent joint replacement.
Abstract:
BACKGROUND: Replicative phenotypic HIV resistance testing (rPRT) uses recombinant infectious virus to measure viral replication in the presence of antiretroviral drugs. Owing to its high sensitivity in detecting viral minorities and its power to dissect complex viral resistance patterns and mixed virus populations, rPRT might help to improve HIV resistance diagnostics, particularly for patients with multiple drug failures. The aim was to investigate whether the addition of rPRT to genotypic resistance testing (GRT), compared with GRT alone, is beneficial for obtaining a virological response in heavily pre-treated HIV-infected patients. METHODS: Patients with resistance tests between 2002 and 2006 were followed within the Swiss HIV Cohort Study (SHCS). We assessed patients' virological success after their antiretroviral therapy was switched following resistance testing. Multilevel logistic regression models with SHCS centre as a random effect were used to investigate the association between the type of resistance test and virological response (HIV-1 RNA <50 copies/mL or ≥1.5 log reduction). RESULTS: Of 1158 individuals with resistance tests, 221 with GRT+rPRT and 937 with GRT were eligible for analysis. Overall virological response rates were 85.1% for GRT+rPRT and 81.4% for GRT. In the subgroup of patients with >2 previous failures, the odds ratio (OR) for virological response of GRT+rPRT compared with GRT was 1.45 (95% CI 1.00-2.09). Multivariate analyses indicated a significant improvement with GRT+rPRT compared with GRT alone (OR 1.68, 95% CI 1.31-2.15). CONCLUSIONS: In heavily pre-treated patients, rPRT-based resistance information adds benefit, contributing to a higher rate of treatment success.
Abstract:
Power transformations of positive data tables, prior to applying the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis; this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals; this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
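A rough numerical illustration of the first approach: correspondence analysis of the powered table N^alpha, with coordinates rescaled by 1/alpha so solutions remain comparable as the power shrinks (the rescaling convention is an assumption of this sketch, and the data are random).

```python
import numpy as np

def ca_row_coords(N):
    """Principal row coordinates from a basic correspondence analysis."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return (U * s) / np.sqrt(r)[:, None]

rng = np.random.default_rng(4)
N = rng.integers(1, 50, size=(6, 4)).astype(float)

# CA of the power-transformed table for decreasing powers; the paper
# shows this converges to a log-ratio analysis as the power tends to 0
for alpha in (1.0, 0.5, 0.25, 0.1):
    f = ca_row_coords(N ** alpha) / alpha
    print(f"alpha={alpha}: first-axis row coords {np.round(f[:, 0], 3)}")
```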