Abstract:
Background: Patients under haemodialysis are considered at high risk of acquiring hepatitis B virus (HBV) infection. Since few data are reported from Brazil, our aim was to assess the frequency of and risk factors for HBV infection in haemodialysis patients from 22 dialysis centres in Santa Catarina State, southern Brazil. Methods: This study includes 813 patients, 149 haemodialysis workers and 772 healthy controls matched by sex and age. Serum samples were assayed for HBV markers, and viraemia was detected by nested PCR. HBV was genotyped by partial S gene sequencing. Univariate and multivariate statistical analyses with stepwise logistic regression were carried out to analyse the relationship between HBV infection and the characteristics of the patients and their dialysis units. Results: The frequency of HBV infection was 10.0%, 2.7% and 2.7% among patients, haemodialysis workers and controls, respectively. Among patients, the most frequent HBV genotypes were A (30.6%), D (57.1%) and F (12.2%). Univariate analysis showed an association between HBV infection and total time in haemodialysis, type of dialysis equipment, hygiene and sterilization of equipment, number of times the dialysis lines and filters were reused, number of patients per care worker, and current HCV infection. The logistic regression model showed that total time in haemodialysis, number of times the dialysis lines and filters were reused, and number of patients per worker were significantly related to HBV infection. Conclusions: The frequency of HBV infection among haemodialysis patients in Santa Catarina State is very high. The most frequent HBV genotypes were A, D and F. The odds of a patient becoming HBV positive increase 1.47 times for each month of haemodialysis; 1.96 times if the dialysis unit reuses the lines and filters ≥ 10 times compared with units that reuse them < 10 times; and 3.42 times if the number of patients per worker is more than five.
Sequence similarity among the HBV S gene sequences from isolates of different patients points to nosocomial transmission.
Abstract:
Running economy (RE), i.e. the oxygen consumption at a given submaximal speed, is an important determinant of endurance running performance. So far, investigators have widely attempted to identify the factors affecting RE in competitive athletes, focusing mainly on the relationships between RE and running biomechanics. However, the current results are inconsistent, and a clear mechanical profile of an economical runner has not yet been established. The present work aimed to better understand how running technique influences RE in sub-elite middle-distance runners by investigating the biomechanical parameters acting on RE and the underlying mechanisms. Special emphasis was given to accounting for intra-individual variability in RE at different speeds and to assessing track running rather than treadmill running. In Study One, a factor analysis was used to reduce the 30 considered mechanical parameters to a few global descriptors of running mechanics. Then, a biomechanical comparison between economical and non-economical runners and a multiple regression analysis (with RE as the criterion variable and the mechanical indices as independent variables) were performed. It was found that a better RE was associated with greater knee and ankle flexion in the support phase, and that the combination of seven identified mechanical measures explains ∼72% of the variability in RE. In Study Two, a mathematical model predicting RE a priori from the rate of force production, originally developed and used in the field of comparative biology, was adapted and tested in competitive athletes. The model showed a very good fit (R² = 0.86). In conclusion, the results of this dissertation suggest that the very complex interrelationships among the mechanical parameters affecting RE may be successfully dealt with through multivariate statistical analyses and the application of theoretical mathematical models.
Thanks to these results, coaches are provided with useful tools to assess the biomechanical profile of their athletes. Individual weaknesses in running technique may thus be identified and removed, with the ultimate goal of improving RE.
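The two-step approach described above (factor analysis to compress the 30 mechanical parameters, then multiple regression with RE as the criterion) can be sketched as follows; all data here are synthetic placeholders, not the dissertation's measurements:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_runners, n_params = 40, 30
mech = rng.normal(size=(n_runners, n_params))   # 30 mechanical parameters per runner

# Step 1: reduce the 30 parameters to a few global descriptors
fa = FactorAnalysis(n_components=7, random_state=0)
factors = fa.fit_transform(mech)

# Step 2: regress running economy (here simulated from the factors) on the scores
re_measured = factors @ rng.normal(size=7) + rng.normal(scale=0.5, size=n_runners)
model = LinearRegression().fit(factors, re_measured)
r2 = model.score(factors, re_measured)          # the study reported ~0.72
print(round(r2, 2))
```

With real data, the explained variance depends on how much of RE the mechanical descriptors actually capture; here the high R² is an artifact of simulating RE from the factors.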
Abstract:
In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization of a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes that are strongly correlated with the grouping of the individuals. Even though the methods to analyse such data are today well developed and close to reaching a standard organization (through the effort of international projects like the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to come across a clinician's question for which no compelling statistical method exists to answer it. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array, through quality control, to the preprocessing steps that are used in the data analyses in the rest of the dissertation.
Chapter 2 provides a critical review of standard analysis methods, stressing their main problems. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for each of the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000 based on its recurrence as differentially expressed across the 1,000 lists. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two data sets simulated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe and SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score > 300 and separates the data into two clusters by hierarchical clustering. We also report extra-assay validation in terms of differentially expressed genes. Although the standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4].
In fact, looking at microarray data in a prognostic and diagnostic clinical framework, differences are not the only quantities that can play a crucial role. In some cases similarities can give useful, and sometimes even more important, information. Given three classes, the goal could be to establish, with a certain level of confidence, whether the third is more similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) [4] could be a possible solution to the limitations of standard supervised classification. RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, capability of any practical pattern recognition system. We focused on a three-class tumour-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then evaluate the third class, G2, as a test set to obtain, for each G2 sample, the probability of membership of class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to that of breast cancer samples of grade 1. In the literature this result had been conjectured, but no measure of significance had been given before.
Abstract:
PROBLEM: In the last few years, farm tourism (or agritourism, as it is also known) has enjoyed increasing success because of its generally acknowledged role as a promoter of the economic and social development of rural areas. As a consequence, a plethora of studies have been dedicated to this tourist sector, focusing on a variety of issues. Nevertheless, despite the difficulties many farmers face in orienting their business towards potential customers, the contribution of the marketing literature has been moderate. PURPOSE: This dissertation builds upon studies which advocate the need for farm tourism to innovate in response to the increasingly demanding needs of customers. Hence, the purpose of this dissertation is to critically evaluate the level of professionalism reached in the farm tourism market from a marketing perspective. METHODOLOGY: This dissertation takes a cross-country perspective, examining the marketing of farm tourism in Germany and Italy. The marketing channels of this tourist sector are examined from both the supply and the demand side by means of five exploratory studies. Data collection was conducted between 2006 and 2009 in manifold ways (online surveys, catalogues of industry associations, face-to-face interviews, etc.) according to the purpose of each study project. The data have been analyzed using multivariate statistical analysis. FINDINGS: A comprehensive literature review provides the state of the art of the main differences and similarities of farm tourism in the two countries of study. The main findings contained in the empirical chapters provide insights into many aspects of agritourism, including how the expectations of farm operators and customers differ, which development scenarios of farm tourism are more likely to meet individuals' needs, how new technologies can affect the demand for farm tourism, etc.
ORIGINALITY/VALUE: The value of this study lies in its investigation of the process by which farmers' participation in the development of this sector intersects with consumer consumption patterns. Focusing on this process should allow farm operators and others, including related businesses, to allocate resources more efficiently.
Abstract:
Widespread occurrence of pharmaceutical residues has been reported in aquatic ecosystems. However, their toxic effects on aquatic biota remain unclear. Generally, acute toxicity has been assessed in laboratory experiments, while chronic toxicity studies have rarely been performed. The assessment of mixture effects also appears important, since pharmaceuticals never occur alone in natural waters. The aim of the present work is to evaluate the acute and chronic toxic response of the crustacean Daphnia magna exposed to single pharmaceuticals and their mixtures. We tested fluoxetine, an SSRI widely prescribed as an antidepressant, and propranolol, a non-selective β-adrenergic receptor-blocking agent used to treat hypertension. Acute immobilization and chronic reproduction tests were performed according to OECD guidelines 202 and 211, respectively. The single chemicals were first tested separately. The toxicity of binary mixtures was then assessed using a fixed-ratio experimental design with concentrations based on Toxic Units. The conceptual model of Concentration Addition (CA) was adopted in this study, as we assumed that the mixture effect mirrors the sum of the single substances for compounds with a similar mode of action. The MixTox statistical method was applied to analyze the experimental results. The results showed a significant deviation from the CA model, indicating antagonism between the chemicals in both the acute and the chronic mixture tests. The study was complemented by assessing the effects of fluoxetine on a battery of biomarkers. We wanted to evaluate the biological vulnerability of the organism caused by the low concentrations of pharmaceuticals occurring in the aquatic environment. We assessed acetylcholinesterase and glutathione S-transferase enzymatic activities and malondialdehyde production. No treatment induced a significant alteration of the biomarkers with respect to the control. The biological assays and the application of the MixTox model proved to be useful tools for pharmaceutical risk assessment.
Although promising, the application of biomarkers in Daphnia magna needs further elucidation.
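The Concentration Addition model used above has a simple closed form; a sketch with hypothetical EC50 values (not the measured ones from this study):

```python
# Concentration Addition: 1 / EC50_mix = sum(p_i / EC50_i),
# where p_i is the fraction of component i in the mixture.
# All EC50 values and concentrations below are illustrative, not measured.
ec50 = {"fluoxetine": 0.8, "propranolol": 1.6}      # mg/L, hypothetical
fractions = {"fluoxetine": 0.5, "propranolol": 0.5}  # fixed-ratio design

ec50_mix = 1.0 / sum(fractions[c] / ec50[c] for c in ec50)
print(round(ec50_mix, 3))   # predicted mixture EC50 under CA

# Toxic Units of a tested mixture concentration
conc = {"fluoxetine": 0.4, "propranolol": 0.8}
tu = sum(conc[c] / ec50[c] for c in ec50)
print(round(tu, 2))         # TU = 1 -> 50% effect expected under CA
```

A significant, systematic shortfall of the observed mixture effect relative to this CA prediction is what the MixTox analysis flags as antagonism.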
Abstract:
Glaucoma is, after cataract, the second most frequent cause of blindness worldwide, with millions of people affected by this initially largely symptom-free neurodegenerative disease. Diagnostic options have so far been largely limited to the measurement of intraocular pressure and the assessment of the fundus by an experienced ophthalmologist. Laboratory-based diagnostic prophylaxis is not yet available, and the number of undetected cases is correspondingly high. Valuable time is thus lost that could be used for effective therapy. Regarding the pathogenesis of glaucoma, several interacting pathomechanisms are assumed today; besides mechanical influences from elevated intraocular pressure (IOP), these include hypoxia, reduced neurotrophin supply, excitotoxicity, oxidative stress and the involvement of autoimmune processes. Regardless of the pathomechanism, extensive degenerative processes become established in the optic nerve head, the retinal ganglion cells and the axons of the optic nerve, ultimately ending in the irreversible loss of these neurons. These pathological processes in the CNS leave traces at the proteome level that can be detected with modern mass spectrometric methods in combination with multivariate statistical methods and represented as so-called biomarker candidates with a defined molecular weight. In this work, a workflow was developed that makes it possible to identify and characterize these biomarker candidates in blood serum and tear fluid in simple, reproducible steps.
Deviating from the established methodology of bottom-up proteomics, a method following a top-down philosophy had to be developed for this purpose, one that allows the traces of glaucoma in the proteome to be detected and characterized. In this work this was achieved both by mass spectrometric methods such as SELDI-TOF® and MALDI-TOF-TOF and by bead-, gel- and liquid-chromatography-based separation and fractionation techniques. The successful combination of these methods led to the identification of a whole series of biomarker candidates. Among the identified proteins whose corresponding SELDI peaks lie in the mass range of biomarker candidates are cytokines and effector molecules of innate immunity, stress-inducible kinases, factors that serve to protect the telomeres, proliferation markers, neuronal antigens and transport proteins. In addition, components involved in the neuronal neurotrophin supply, neuronal receptors and antigens, and components of the complement system and of the MHC-I complex were identified. All of these identified proteins are described and characterized in detail with regard to their function and possible role in the pathogenesis of glaucoma. This provides a comprehensive insight into all pathomechanisms that, according to current knowledge, are presumed to play a role in the pathogenesis of glaucoma.
Abstract:
More than a hundred years of archaeological research have shown that in Roman and medieval times Mayen was home to one of the most important European production centres for high-quality utilitarian ceramics. In this study, four find complexes from pottery settlements dating from the 4th to the 14th century were examined. Specifically, these comprise ceramics from two late antique kilns of the 4th century in the area of the field "Auf der Eich" at the streets "Am Sonnenhang" and "Frankenstraße". In addition, material from two pottery-kiln fillings of the 5th to 9th century, discovered in 1975 in kilns on plot 55 at "Siegfriedstraße", could be analysed, as well as fired ware from eleven pottery kilns of the late 8th to 14th century, recovered in 1986/87 in the so-called "Burggärten" of the Genovevaburg of Mayen by the archaeological heritage service in Koblenz. Mineralogical investigations to characterize the "Mayen ceramics" were carried out systematically on the ceramic materials from these sites. Medieval ceramics from Bornheim-Walberberg, Brühl-Eckdorf, Höhr-Grenzhausen, Langerwehe, Frechen, Brühl-Pingsdorf, Paffrath, Raeren, Ratingen-Breitscheid, Siegburg-Seehofstraße, Siegburg-Scherbenhügel, Fredelsloh and Brühl-Badorf were also examined as reference materials for this work. Provenance analyses were performed with mineralogical methods on ceramic samples from 27 sites that macroscopically resemble Mayen ware, in order to assign them unambiguously to the Mayen production region. Phase analysis, chemical analysis and thermal analysis were carried out on ceramics as well as on clay. Phase analysis was used to determine the mineralogical composition of the matrix and the temper (X-ray diffraction (XRD), polarized-light microscopy, micro-Raman spectroscopy).
The chemical composition was determined by X-ray fluorescence analysis (XRF). Electron probe microanalysis (EPMA) and laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) were used for samples of which less than 2 g of material was available. Firing experiments were carried out on the original raw material of the ceramics from the "Burggärten" of the Genovevaburg. The fired clay was analysed by X-ray diffraction (XRD), infrared spectroscopy (IR) and differential thermal analysis (DTA). On the basis of the measurement results, the Mayen ceramics from the four sites can be grouped into two types: the Roman type, tempered with feldspar-rich sand, and the medieval type, tempered with quartz-rich sand. The change of temper from feldspar to quartz sand documents a technical development towards higher firing temperatures from Roman times to the Middle Ages. The investigation and the comparison with the reference ceramic groups show that multivariate statistical analyses of the chemical components allow the ceramics to be characterized and the ceramic groups to be differentiated. These findings formed the basis for the provenance analyses. Sixteen sites could be securely identified by provenance analysis as export regions of Mayen ware. The firing experiments make it possible to trace the chemical reactions during the firing process. Two methods based on X-ray diffraction (XRD) and differential thermal analysis (DTA) were modelled to determine the firing temperatures of the ceramics. The pottery kilns of the "Burggärten" can be grouped by firing temperature into two types: those fired below 1050 °C and those fired above 1050 °C.
Abstract:
This work presents the design, construction, commissioning and characterization of a novel Penning trap within the experiment for the determination of the g-factor of the proton. The distinguishing feature of this trap is that the field lines of an external homogeneous magnetic field are distorted by a ferromagnetic ring electrode at the centre of the trap. The inhomogeneous part of the resulting magnetic field, the so-called magnetic bottle, can be quantified by the coefficient B2 = 297(10) mT/mm² of the second-order term of the spatial dependence of the field. Such an unusually strong field inhomogeneity is a basic prerequisite for detecting the spin orientation of the proton by means of the continuous Stern-Gerlach effect. This effect is based on the coupling, arising in the inhomogeneous magnetic field, of the spin degree of freedom of the trapped proton to one of its eigenfrequencies. A spin transition can thus be detected via a frequency jump. The frequency change to be detected is proportional to B2 and to the ratio between the proton's magnetic moment and its mass, which is extremely small in the case of the proton. The technical challenges posed by the required high inhomogeneity of the magnetic field demand a thorough knowledge and control of the properties of the Penning trap and of the experimental conditions. The Penning trap developed in the present work enabled the first non-destructive detection of spin quantum jumps of a single trapped proton, a breakthrough for the experiment aiming at a direct determination of the g-factor with the targeted relative precision of 10⁻⁹. With the help of a statistical procedure, the Larmor and cyclotron frequencies of the proton in the inhomogeneous magnetic field of the trap were determined. From these, the g-factor was determined with a relative precision of 8.9 × 10⁻⁶.
The measurement procedures presented here and the experimental setup can be transferred to an equivalent experiment for the determination of the g-factor of the antiproton with the same measurement precision, which would constitute the first step towards a new stringent test of CPT symmetry in the baryonic sector.
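The final step, obtaining g from the two measured frequencies, uses the standard relation g = 2 νL/νc (the Larmor frequency equals g/2 times the cyclotron frequency). A sketch with illustrative numbers, not the experiment's actual values:

```python
# g = 2 * nu_L / nu_c; frequencies below are hypothetical, in MHz.
nu_larmor = 79.95        # illustrative measured Larmor frequency
nu_cyclotron = 28.60     # illustrative measured cyclotron frequency

g = 2 * nu_larmor / nu_cyclotron
print(round(g, 4))       # close to the proton's g ≈ 5.586 by construction
```

In the experiment, both frequencies refer to the same magnetic field, so the field strength cancels in the ratio; the statistical procedure mentioned above is needed because the inhomogeneous field broadens the resonance lines.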
Abstract:
INTRODUCTION: Apical surgery has seen continuous development with regard to equipment and surgical technique. However, there is still a shortage of evidence-based information regarding healing determinants. The objective of this meta-analysis was to review clinical articles on apical surgery with root-end filling in order to assess potential prognostic factors. METHODS: An electronic search of the PubMed and Cochrane databases was performed in 2008. Only studies with clearly defined healing criteria were included, and data for at least two categories per prognostic factor had to be reported. Prognostic factors were divided into patient-, tooth-, and treatment-related factors. The reported percentages of healed teeth ("healed rates") were pooled per category. The Mantel-Haenszel statistical method was applied to estimate the odds ratios and their 95% confidence intervals. RESULTS: With regard to tooth-related factors, the following categories were significantly associated with higher healed rates: cases without preoperative pain or signs, cases with good density of the root canal filling, and cases with absence of a periapical lesion or a lesion ≤ 5 mm in size. With regard to treatment-related factors, cases treated with the use of an endoscope tended to have higher healed rates than cases treated without one. CONCLUSIONS: Although the clinician may be able to control treatment-related factors (by choosing a certain technique), patient- and tooth-related factors are usually beyond the surgeon's power. Nevertheless, patient- and tooth-related factors should be considered important prognostic determinants when planning or weighing apical surgery against treatment alternatives.
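The Mantel-Haenszel pooling used in this meta-analysis combines one 2×2 table (factor category × healed status) per study into a single odds ratio. A sketch with made-up counts:

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# One 2x2 table per study: rows = factor present / absent,
# columns = healed / not healed. All counts are illustrative.
tables = [
    np.array([[40, 10], [30, 20]]),
    np.array([[55, 15], [35, 25]]),
    np.array([[25,  5], [20, 15]]),
]
st = StratifiedTable(tables)
print(round(st.oddsratio_pooled, 2))   # pooled Mantel-Haenszel odds ratio
lo, hi = st.oddsratio_pooled_confint() # 95% confidence interval
print(round(lo, 2), round(hi, 2))
```

A pooled OR whose confidence interval excludes 1 corresponds to the "significantly associated" categories reported in the results.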
Abstract:
Equivalence testing is growing in use in scientific research outside of its traditional role in the drug approval process. Largely owing to its ease of use and its recommendation in United States Food and Drug Administration guidance, the most common statistical method for testing (bio)equivalence is the two one-sided tests procedure (TOST). Like classical point-null hypothesis testing, TOST is subject to multiplicity concerns as more comparisons are made. In this manuscript, a condition that bounds the family-wise error rate (FWER) under TOST is given. This condition then leads to a simple solution for controlling the FWER. Specifically, we demonstrate that if all pairwise comparisons of k independent groups are being evaluated for equivalence, then simply scaling the nominal Type I error rate down by (k − 1) is sufficient to maintain the family-wise error rate at the desired value or less. The resulting rule is much less conservative than the equally simple Bonferroni correction. An example of equivalence testing in a non-drug-development setting is given.
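The proposed correction simply runs each pairwise TOST at level α/(k − 1). A sketch with a hand-rolled TOST on synthetic data (the equivalence margin and group sizes are hypothetical choices):

```python
import numpy as np
from scipy import stats

def tost(x, y, low, upp, alpha):
    """Two one-sided t-tests: declare equivalence if both one-sided tests reject."""
    n1, n2 = len(x), len(y)
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / n1 + y.var(ddof=1) / n2)
    df = n1 + n2 - 2                                  # simple df; Welch df also common
    p_low = 1 - stats.t.cdf((diff - low) / se, df)    # H0: true diff <= low
    p_upp = stats.t.cdf((diff - upp) / se, df)        # H0: true diff >= upp
    return max(p_low, p_upp) < alpha

rng = np.random.default_rng(3)
k = 4                                   # four independent groups
groups = [rng.normal(0, 1, 200) for _ in range(k)]
alpha = 0.05 / (k - 1)                  # the proposed FWER scaling
margin = 0.8                            # hypothetical equivalence margin
results = [tost(groups[i], groups[j], -margin, margin, alpha)
           for i in range(k) for j in range(i + 1, k)]
print(results)                          # one verdict per pairwise comparison
```

With k = 4, the six pairwise TOSTs each run at α/3 ≈ 0.0167, versus α/6 under a naive Bonferroni correction over all pairs.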
Abstract:
Aims: Periodic leg movements in sleep (PLMS) are a frequent finding in polysomnography. Most patients with restless legs syndrome (RLS) display PLMS. However, since PLMS are also often recorded in healthy elderly subjects, the clinical significance of PLMS is still discussed controversially. Leg movements seen concurrently with arousals in obstructive sleep apnoea (OSA) may also appear periodically. Quantitative assessment of the periodicity of LM/PLM as measured by inter-movement intervals (IMI) is difficult. This is mainly due to influencing factors such as sleep architecture and sleep stage, medication, inter- and intra-patient variability, and the arbitrary amplitude and sequence criteria, which tend to broaden the IMI distributions or even make them multi-modal. Methods: Here a statistical method is presented that makes it possible to eliminate such effects from the raw data before analysing the statistics of the IMI. Rather than studying the absolute size of the IMI (measured in seconds), we focus on the shape of their distribution (suitably normalized IMI). To this end we employ methods developed in Random Matrix Theory (RMT). Patients: The periodicity of the leg movements (LM) of four patient groups (10 to 15 patients each) showing LM without PLMS (group 1), OSA without PLMS (group 2), PLMS and OSA (group 3), and PLMS without OSA (group 4) is compared. Results: The IMI of patients without PLMS (groups 1 and 2) and with PLMS (groups 3 and 4) are statistically different. In patients without PLMS the distribution of normalized IMI closely resembles that of random events. In contrast, the IMI of PLMS patients show features of periodic systems (e.g. a pendulum) when studied in a normalized manner. Conclusions: For quantifying PLMS periodicity, proper normalization of the IMI is crucial. Without this procedure, important features are hidden when grouping LM/PLM over whole nights or across patients.
The clinical significance of PLMS might be elucidated by properly separating random LM from LM that show features of periodic systems.
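The central idea, pooling the shape of the IMI distribution rather than absolute interval lengths, can be illustrated with a much-simplified stand-in for the RMT unfolding: dividing each interval by a local running mean. All data below are simulated:

```python
import numpy as np

rng = np.random.default_rng(4)

def normalized_imi(intervals, window=11):
    """Divide each inter-movement interval by a local running mean,
    so recordings with different baseline rates can be pooled."""
    kernel = np.ones(window) / window
    local_mean = np.convolve(intervals, kernel, mode="same")
    return intervals / local_mean

# random (Poisson-like) events vs. a periodic process with small jitter
random_imi = rng.exponential(scale=30.0, size=500)       # seconds, simulated
periodic_imi = 30.0 + rng.normal(scale=2.0, size=500)    # seconds, simulated

cv_random = np.std(normalized_imi(random_imi))
cv_periodic = np.std(normalized_imi(periodic_imi))
print(round(cv_random, 2), round(cv_periodic, 2))
```

After normalization both series have unit mean, but the periodic process yields a far narrower distribution, which is the signature that distinguishes PLMS from random leg movements in the study.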
Abstract:
Can one observe an increasing level of individual lack of orientation because of rapid social change in modern societies? This question is examined using data from a representative longitudinal survey in Germany conducted in 2002–04. The study examines the role of education, age, sex, region (east/west), and political orientation for the explanation of anomia and its development. First we present the different sources of anomie in modern societies, based on the theoretical foundations of Durkheim and Merton, and introduce the different definitions of anomia, including our own cognitive version. Then we deduce several hypotheses from the theory, which we test by means of longitudinal data for the period 2002–04 in Germany using the latent growth curve model as our statistical method. The empirical findings show that all the sociodemographic variables, including political orientation, are strong predictors of the initial level of anomia. Regarding the development of anomia over time (2002–04), only the region (west) has a significant impact. In particular, the results of a multi-group analysis show that western German people with a right-wing political orientation become more anomic over this period. The article concludes with some theoretical implications.
Abstract:
Cattle are a natural reservoir for Shiga-toxigenic Escherichia coli (STEC); however, no data are available on its prevalence and possible association with organic or conventional farming practices. We therefore studied the prevalence of STEC, and specifically of O157:H7, in Swiss dairy cattle by collecting faeces from approximately 500 cows from 60 farms with organic production (OP) and 60 farms with integrated (conventional) production (IP). IP farms were matched to OP farms and were comparable in terms of community, agricultural zone, and number of cows per farm. E. coli were grown overnight in an enrichment medium, followed by DNA isolation and PCR analysis using specific TaqMan assays. STEC were detected on all farms, and O157:H7 were present on 25% of OP farms and 17% of IP farms. STEC were detected in 58% and O157:H7 in 4.6% of individual faeces. Multivariate statistical analyses of over 250 parameters revealed several risk factors for the presence of STEC and O157:H7. The risk factors were mainly related to the potential for cross-contamination of feeds and cross-infection of cows, and to the age of the animals. In general, no significant differences between the two farm types concerning the prevalence of, or risk of carrying, STEC or O157:H7 were observed. Because the incidence of human disease caused by STEC in Switzerland is low, the risk of people becoming infected appears to be small despite the relatively high prevalence in cattle. Nevertheless, control and prevention practices are indicated to avoid contamination of animal products.
Abstract:
SUMMARY: Split-mouth designs first appeared in dental clinical trials in the late sixties. The main advantage of this study design is its efficiency in terms of sample size, as the patients act as their own controls. Cited disadvantages relate to carry-across effects (contamination, or spilling of the effects of one intervention over to another), period effects if the interventions are delivered in different time periods, difficulty in finding similar comparison sites within patients, and the requirement for more complex data analysis. Although some additional thought is required when utilizing a split-mouth design, its efficiency is attractive, particularly in orthodontic clinical studies where carry-across effects, period effects and dissimilarity between intervention sites do not pose a problem. Selection of an appropriate research design, intervention protocol and a statistical method accounting for both the reduced variability and the potential clustering effects within patients should be considered for the trial results to be valid.
Abstract:
The Atlantic subpolar gyre (SPG) is one of the main drivers of decadal climate variability in the North Atlantic. Here we analyze its dynamics in pre-industrial control simulations of 19 different comprehensive coupled climate models. The analysis is based on a recently proposed description of the SPG dynamics that found the circulation to be potentially bistable due to a positive feedback mechanism including salt transport and enhanced deep convection in the SPG center. We employ a statistical method to identify multiple equilibria in time series that are subject to strong noise and analyze composite fields to assess whether the bistability results from the hypothesized feedback mechanism. Because noise dominates the time series in most models, multiple circulation modes can unambiguously be detected in only six models. Four of these six models confirm that the intensification is caused by the positive feedback mechanism.