888 results for Two-stage stochastic model
Abstract:
Peste des Petits Ruminants (PPR) is an acute viral disease of small ruminants, widespread in Sub-Saharan Africa, the Middle East and Southern Asia. This work presents the first epidemiological study of PPR in the Saharawi Arab Democratic Republic (SADR), which comprises the Saharawi refugee camps, on Algerian territory, and the "Liberated Territories" of Western Sahara, assessing the potential presence, prevalence and distribution of PPR virus in these territories. The study was based on a two-stage cluster sampling design. Twenty-three sampling sites were identified, from which a total of 976 serum samples were collected from sheep, goats and camels in March and April 2008. Competitive ELISA testing revealed seroprevalence in 28.26% of the animals tested, although no animal showed clinical signs attributable to PPR during sample collection. Between January and May 2010, following episodes of increased mortality in the sheep and goat population of the refugee camps, the local veterinary authorities suspected a PPR outbreak. Between May and October 2010 an outbreak investigation was carried out in the Saharawi refugee camps to confirm circulation of PPRV. Laboratory results confirmed the presence of the virus in 33.33% of the samples. Sequencing of the viral genome showed that the virus belonged to Lineage 4, and phylogenetic analyses indicated a close relationship (99.3%) with the PPRV isolated during the 2008 PPR epidemic in Morocco.
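The reported seroprevalence can be reproduced, and given an approximate confidence interval, from the sample counts. A minimal sketch, assuming 276 of the 976 sera were positive (a hypothetical count consistent with the reported 28.26%) and ignoring the cluster design:

```python
import math

def wilson_ci(positives, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical count: 276 positives out of 976 sera gives ~28.3%,
# close to the 28.26% seroprevalence reported in the abstract.
pos, n = 276, 976
prevalence = pos / n
lo, hi = wilson_ci(pos, n)
```

A design-effect correction for the two-stage cluster sampling would inflate the variance term and widen this interval.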
Abstract:
The growing interest in environmental protection has led to the development of emerging biotechnologies for environmental remediation, also introducing the biorefinery concept. This work mainly aimed to evaluate the applicability of innovative biotechnologies for environmental remediation and bioenergy production through fermentative processes. The investigated biotechnologies for waste and wastewater treatment and for the valorisation of specific feedstocks with energy recovery were focused on four research lines. 1. Biotechnology for textile wastewater treatment and water reuse, involving anaerobic and aerobic processes in combination with membrane technologies; combinations of different treatments were also implemented for water reuse in a textile company. 2. Biotechnology for the treatment of solid waste and leachate in landfill and for biogas production; a landfill operated as a bioreactor with recirculation of the generated leachate was proposed for organic matter biostabilisation and for ammonia removal from leachate by favouring the Anammox process. 3. An innovative two-stage anaerobic process for effective co-digestion of waste from the dairy industry, such as cheese whey and dairy manure, studied by combining conventional fermentative processes with a simplified system design to enhance biomethanisation. 4. The valorisation of waste glycerol, a surplus by-product of the biodiesel industry, via microbial conversion to value-added chemicals such as 1,3-propanediol. The investigated fermentative processes were successfully implemented and achieved high yields of the target bio-chemicals. The studied biotechnological systems proved feasible for environmental remediation and for the production of bioenergy and chemicals.
Abstract:
This thesis analyses problems related to the applicability of Process Mining tools and techniques in business environments. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying circumstances where problems can emerge: data preparation, the actual mining, and interpretation of results. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and propose a generalization of a well-known control-flow discovery algorithm that exploits non-instantaneous events; the use of interval-based recording leads to an important improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for extending a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches.
Two actual mining algorithms are proposed: the first adapts a frequency-counting algorithm to the control-flow discovery problem; the second is a framework of models that can be used for different kinds of streams (stationary versus evolving).
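Frequency counting over an event stream is the core idea behind the first on-line algorithm. A minimal sketch, using the classic Lossy Counting scheme of Manku and Motwani applied to directly-follows pairs (the thesis's actual adaptation may differ):

```python
def lossy_count(stream, epsilon=0.1):
    """Lossy Counting: approximate item frequencies over a stream,
    with stored counts within epsilon*N of the true value."""
    width = int(1 / epsilon)          # bucket width
    counts, deltas = {}, {}
    bucket = 1                        # id of the current bucket
    for n, item in enumerate(stream, start=1):
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1  # max undercount so far
        if n % width == 0:            # end of bucket: prune rare items
            for key in [k for k in counts
                        if counts[k] + deltas[k] <= bucket]:
                del counts[key], deltas[key]
            bucket += 1
    return counts

# Toy event stream of directly-follows relations from two process variants.
pairs = [("a", "b"), ("b", "c")] * 40 + [("a", "x")] * 3
freq = lossy_count(pairs, epsilon=0.05)
```

Frequent relations such as ("a", "b") survive pruning with exact counts, while relations rarer than epsilon*N are dropped, keeping memory bounded on an unbounded stream.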
Abstract:
The major light-harvesting complex (LHCII) of the photosynthetic apparatus of higher plants is among the most abundant membrane proteins on Earth. Its crystal structure is known. The apoprotein can be overexpressed recombinantly in Escherichia coli and thus modified in many ways by molecular biology. In detergent solution, the denatured protein has the remarkable ability to organize spontaneously into functional protein-pigment complexes that are structurally nearly identical to native LHCII. The folding process takes place in vitro on a timescale of seconds to minutes and depends on the binding of the cofactors chlorophyll a and b as well as various carotenoids.
These properties make LHCII particularly suitable for structural studies by electron paramagnetic resonance (EPR) spectroscopy. This requires site-specific spin labeling of LHCII, which was first optimized in this work. Including contributions by others, a broad selection of more than 40 spin-labeled LHCII mutants was available, including an N-terminal "Cys walk". Neither the substitution of individual amino acids required for this nor the attachment of the spin label impaired the function of LHCII. In addition, a protocol was developed for the preparation of heterogeneously spin-labeled LHCII trimers, i.e. trimers that each contain only one monomer carrying a spin label.
Spin-labeled samples of detergent-solubilized LHCII were analyzed structurally using various EPR techniques. Measuring the water accessibility of individual amino acid positions by electron spin echo envelope modulation (ESEEM) proved particularly informative. In combination with the established double electron-electron resonance (DEER) technique for detecting distances between two spin labels, the membrane-embedded core region of LHCII in solution was examined in detail and found to be structurally very similar to the crystal structure. Measurements of regions near the N-terminus that are not resolved crystallographically revealed the previously detected structural dynamics of this domain as a function of the oligomerization state. The new, still to be completed data set of distance distributions and ESEEM water accessibilities for monomeric and trimeric samples should in the near future allow very accurate modeling of the N-terminal domain of LHCII.
In a further part of the work, the folding of the LHCII apoprotein during LHCII assembly in vitro was investigated. Previous fluorescence spectroscopic studies had shown that the binding of chlorophyll a and b occurs in consecutive steps on timescales of less than one minute and of several minutes, respectively. Both the water accessibility of individual amino acid positions and spin-spin distances changed on similar timescales. The data suggest that formation of the middle transmembrane helix accompanies the faster chlorophyll a binding, whereas the superhelix formed by the two other transmembrane helices forms only in the slower step, together with chlorophyll b binding.
Abstract:
Stylolites are rough paired surfaces, indicative of localized stress-induced dissolution under a non-hydrostatic state of stress, separated by a clay parting which is believed to be the residuum of the dissolved rock. These structures are the most frequent deformation pattern in monomineralic rocks and thus provide important information about low-temperature deformation and mass transfer. The intriguing roughness of stylolites can be used to assess the amount of volume loss and paleo-stress directions, and to infer the destabilizing processes during pressure solution. But there is little agreement on how stylolites form and why these localized pressure-solution patterns develop their characteristic roughness.
Natural bedding-parallel and vertical stylolites were studied in this work to obtain a quantitative description of stylolite roughness and to understand the governing processes during their formation. Adapting scaling approaches based on fractal principles, it is demonstrated that stylolites show two self-affine scaling regimes with roughness exponents of 1.1 and 0.5 for small and large length scales, separated by a crossover length at the millimeter scale. Analysis of stylolites from various depths proved that this crossover length is a function of the stress field during formation, as analytically predicted. For bedding-parallel stylolites the crossover length is a function of the normal stress on the interface, but vertical stylolites show a clear in-plane anisotropy of the crossover length owing to the fact that the in-plane stresses (σ2 and σ3) are dissimilar. Therefore stylolite roughness contains a signature of the stress field during formation.
To address the origin of stylolite roughness, a combined microstructural (SEM/EBSD) and numerical approach is employed. Microstructural investigations of natural stylolites in limestones reveal that heterogeneities initially present in the host rock (clay particles, quartz grains) are responsible for the formation of the distinctive stylolite roughness. A two-dimensional numerical model, a discrete linear elastic lattice spring model, is used to investigate the roughness evolving from an initially flat, fluid-filled interface induced by heterogeneities in the matrix. This model generates rough interfaces with the same scaling properties as natural stylolites. Furthermore, two coinciding crossover phenomena in space and in time exist that separate the length and time scales for which the roughening is balanced by either surface or elastic energies. The roughness and growth exponents are independent of the size, amount and dissolution rate of the heterogeneities. This allows us to conclude that the location of asperities is determined by a polymict multi-scale quenched noise, while the roughening process is governed by an inherent process, i.e. the transition from a surface- to an elastic-energy-dominated regime.
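The self-affine roughness exponents quoted above are measured from how height fluctuations scale with the lag. A minimal sketch of that analysis on a synthetic profile (a 1-D random walk, which is self-affine with exponent 0.5 and so mimics the large-length-scale stylolite regime; this illustrates the method, not the thesis's own code):

```python
import math
import random

def roughness_exponent(h, lags):
    """Estimate the self-affine roughness (Hurst) exponent from the
    scaling of the RMS height difference W(l) ~ l^H."""
    pts = []
    for l in lags:
        w2 = sum((h[i + l] - h[i]) ** 2 for i in range(len(h) - l))
        w2 /= (len(h) - l)
        pts.append((math.log(l), 0.5 * math.log(w2)))
    # least-squares slope of log W(l) against log l
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

random.seed(0)
# A random walk profile: height differences over lag l have variance l,
# so W(l) = sqrt(l) and the fitted exponent should come out near 0.5.
profile = [0]
for _ in range(20000):
    profile.append(profile[-1] + random.choice((-1, 1)))
H = roughness_exponent(profile, lags=[2, 4, 8, 16, 32, 64])
```

On a real stylolite trace, fitting separate slopes below and above the millimeter scale would recover the two regimes (≈1.1 and ≈0.5) and the crossover length.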
Abstract:
In this work, four different strongly correlated fermionic multi-band systems are investigated: a multi-impurity Anderson model, two Hubbard models, and a multi-band system as obtained from an ab initio description of a correlated semimetal.
The study of the multi-impurity Anderson model focuses on the influence of the exchange interaction and of non-local correlations between two impurities in a simple cubic lattice. The central result is the distance dependence of the correlations between the impurity electrons, which depends strongly on the lattice dimension and the relative position of the impurities. Remarkable here is the long range of the correlations along the diagonal direction of the lattice. It is further found that an antiferromagnetic exchange interaction favors a singlet between the impurity electrons over the Kondo singlets of the individual impurities and thereby suppresses the Kondo effect of the individual impurities.
A two-band Hubbard model, the Jz model, is studied with regard to its Mott phases as a function of doping and crystal-field splitting on the Bethe lattice. The degeneracy of the bands is lifted by different bandwidths. The most important results are the phase diagrams with respect to interaction, total filling and crystal-field parameter. Compared to single-band models, the Jz model exhibits additional so-called orbital-selective Mott phases which, depending on interaction, total filling and crystal-field parameter, have metallic character on the one hand and insulating character on the other. A new aspect arises from the crystal-field parameter, which shifts the ionic single-particle levels relative to each other and, for certain values, enables an orbital-selective Mott phase of the wide band. By comparison with approximate analytical solutions and with single-band models, generic many-body and correlation effects can be distinguished from typical multi-band and single-particle effects.
The second Hubbard model investigated describes a magneto-optical trap with a finite number of lattice sites in which fermionic atoms are placed. A z-antiferromagnetic phase is obtained when non-local many-body correlations are taken into account, improving on known results of an effective single-particle description.
The correlated semimetal is investigated with regard to correlation effects within a multi-band calculation. The starting point is an ab initio description by density functional theory (DFT), which is then supplemented by the inclusion of local correlations. The many-body effects are illustrated using a simple interaction approximation and made precise for an interaction model in spherical symmetry. Only a weak quasiparticle renormalization results. Good agreement is achieved, in particular with X-ray spectroscopic experiments.
The numerical results for the Jz model are based on quantum Monte Carlo simulations within dynamical mean-field theory (DMFT). For all other systems, a multi-band algorithm is developed and implemented that explicitly takes non-diagonal multi-band processes into account.
Abstract:
In this thesis a reference framework was developed for the combined use of two impact assessment methodologies, LCA and RA, for emerging technologies. The originality of the study lies in having proposed, and also applied, the framework to a case study, namely an innovative refrigeration technology based on nanofluids (NF) developed by partners of the European project Nanohex, who collaborated in the preparation of the studies, especially regarding the inventory of the necessary data. The complexity of the study lies as much in the difficult integration of two methodologies devised for different purposes and structured to fulfil those purposes, as in the field of application which, although expanding rapidly, suffers from major gaps in information about production processes and the behaviour of the substances involved. The framework was applied to the production of alumina nanofluid via two production routes (single-stage and two-stage) in order to assess and compare the impacts on human health and the environment. It should be noted that the LCA was quantitative but did not consider the impacts of nanomaterials in the toxicity categories. As for the RA, a qualitative study was developed because of the above-mentioned lack of toxicological and exposure parameters, focusing on workers as the target category; it was therefore assumed that releases into the environment during the production phase are negligible. For the qualitative RA a dedicated software tool, Stoffenmanager Nano, was used, which makes it possible to prioritize the risks associated with inhalation in the workplace. The framework comprises a procedure articulated in four phases: definition of the technological system, data collection, risk assessment and impact quantification, and interpretation.
Abstract:
In this thesis different approaches for the modeling and simulation of the blood protein fibrinogen are presented. The approaches are meant to systematically connect the multiple time and length scales involved in the dynamics of fibrinogen in solution and at inorganic surfaces. The first part of the thesis covers simulations of fibrinogen at the all-atom level. Simulations of the fibrinogen protomer and dimer are performed in explicit solvent to characterize the dynamics of fibrinogen in solution. These simulations reveal an unexpectedly large and fast bending motion that is facilitated by molecular hinges located in the coiled-coil region of fibrinogen. This behavior is characterized by a bending and a dihedral angle, and the distribution of these angles is measured. As a consequence of the atomistic detail of the simulations, it is possible to illuminate small-scale behavior in the binding pockets of fibrinogen that hints at a previously unknown allosteric effect. In a second step, atomistic simulations of the fibrinogen protomer are performed at graphite and mica surfaces to investigate initial adsorption stages. These simulations highlight the different adsorption mechanisms at the hydrophobic graphite surface and the charged, hydrophilic mica surface. It is found that initial adsorption happens in a preferred orientation on mica. Many effects of practical interest involve aggregates of many fibrinogen molecules. To investigate such systems, time and length scales need to be simulated that are not attainable in atomistic simulations. It is therefore necessary to develop lower-resolution models of fibrinogen. This is done in the second part of the thesis. First, a systematically coarse-grained model is derived and parametrized based on the atomistic simulations of the first part. In this model the fibrinogen molecule is represented by 45 beads instead of nearly 31,000 atoms. The intra-molecular interactions of the beads are modeled as a heterogeneous elastic network, while inter-molecular interactions are assumed to be a combination of electrostatic and van der Waals interactions. A method is presented that determines the charges assigned to the beads by matching the electrostatic potential of the atomistic simulation. Lastly, a phenomenological model is developed that represents fibrinogen by five beads connected by rigid rods with two hinges. This model only captures the large-scale dynamics of the atomistic simulations but can shed light on experimental observations of fibrinogen conformations at inorganic surfaces.
Abstract:
The subject of this work is the development and combination of different numerical methods and their application to problems of strongly correlated electron systems. Such materials show many interesting physical properties, e.g. superconductivity and magnetic order, and play an important role in technical applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). Over the last decades, many insights have already been gained from the numerical solution of these models. Nevertheless, the physical origin of many effects remains hidden, because current methods are restricted to certain parameter regimes. One of the strongest limitations is the lack of efficient algorithms for low temperatures.
Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperatures and finite interactions.
On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) and its cluster extensions (CDMFT). DMFT is the prevailing theory of strongly correlated systems, for which conventional band-structure calculations fail. A main limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this work shows the same superior scaling with inverse temperature as BSS-QMC. We study the Mott transition within DMFT and analyze the influence of systematic errors on this transition.
Another prominent issue is the neglect of non-local interactions in DMFT. To address it, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model, and the KLM. The results differ strongly between the models: while non-local correlations play an important role in the two-dimensional (anisotropic) model, in the paramagnetic phase the momentum dependence of the self-energy is much weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parametrized by the non-interacting dispersion. The particular structure of the self-energy in momentum space can be very useful for classifying electronic correlation effects and opens the way for developing new schemes beyond the limits of DMFT.
Abstract:
During a two-stage revision for prosthetic joint infection (PJI), joint aspirations, open tissue sampling and serum inflammatory markers are performed before re-implantation to exclude ongoing silent infection. We investigated the performance of these diagnostic procedures with respect to the risk of recurrence of PJI among asymptomatic patients undergoing a two-stage revision. A total of 62 PJIs were found in 58 patients. All patients had intra-operative surgical exploration during re-implantation, and 48 of them had intra-operative microbiological swabs. Additionally, 18 joint aspirations and one open biopsy were performed before second-stage re-implantation. Recurrence or persistence of PJI occurred in 12 cases, with a mean delay of 218 days after re-implantation, but only four pre- or intra-operative invasive joint samples had grown a pathogen in culture. In at least seven recurrent PJIs (58%), patients had a normal C-reactive protein (CRP, < 10 mg/l) level before re-implantation. The sensitivity, specificity, positive predictive and negative predictive values for the prediction of PJI recurrence were 0.58, 0.88, 0.5 and 0.84 for pre-operative invasive joint aspiration, and 0.17, 0.81, 0.13 and 0.86 for CRP, respectively. In conclusion, pre-operative joint aspiration, intra-operative bacterial sampling, surgical exploration and serum inflammatory markers are poor predictors of PJI recurrence. The onset of reinfection usually occurs far later than re-implantation.
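The four reported test characteristics all derive from a 2x2 table of test result against recurrence. A minimal sketch with hypothetical counts (7 of the 12 recurrences testing positive reproduces the reported sensitivity of 0.58; the remaining cells are illustrative, not the study's actual table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table
    (tp = test-positive with recurrence, etc.)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts only: chosen so sensitivity is 7/12 = 0.58
# as reported; fp/tn are illustrative placeholders.
m = diagnostic_metrics(tp=7, fp=7, fn=5, tn=43)
```

The low PPV here makes the clinical point: a positive pre-operative sample predicts recurrence in only about half of the cases.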
Abstract:
¹¹C-ABP-688 is a selective tracer for the mGluR5 receptor. Its kinetics are fast and thus favourable for an equilibrium approach to determine receptor-related parameters. The purpose of this study was to test the hypothesis that the pattern of ¹¹C-ABP-688 uptake using a bolus-plus-infusion (B/I) protocol corresponds at early time points to perfusion and at a later time point to the total distribution volume. METHODS: A bolus study and a B/I study (1 h each) were performed in five healthy male volunteers. With the B/I protocol, early and late scans were normalized to gray matter, cerebellum and white matter. The same normalization was applied to the maps of the total distribution volume (Vt) and K1, which were calculated from the bolus-only study using the Logan method (Vt) and a two-tissue compartment model (K1). RESULTS: There was an excellent correlation, close to the identity line, between the pattern of the late uptake in the B/I study and Vt of the bolus-only study for all three normalizations. The pattern of the early uptake in the B/I study correlated well with the K1 maps, but only when normalized to gray matter and cerebellum, not to white matter. CONCLUSION: It is demonstrated that with a B/I protocol the ¹¹C-ABP-688 distribution in late scans reflects the pattern of the total distribution volume and is therefore a measure of the density pattern of mGluR5. The early scans following injection are related to blood flow, although not in a fully quantitative manner. The advantage of the B/I protocol is that no arterial blood sampling is required, which makes it attractive for clinical studies.
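The rationale of the equilibrium approach can be illustrated with a one-tissue compartment model under a constant plasma input: once the tracer equilibrates, the tissue-to-plasma ratio approaches the distribution volume Vt = K1/k2, which is why late B/I scans reflect the Vt pattern. A minimal sketch with hypothetical rate constants (the study itself used a two-tissue model for K1):

```python
import math

def one_tissue_ct(t, K1, k2, cp):
    """Tissue concentration for a one-tissue compartment model with a
    constant plasma input cp: dCt/dt = K1*cp - k2*Ct, Ct(0) = 0."""
    vt = K1 / k2                      # total distribution volume
    return vt * cp * (1 - math.exp(-k2 * t))

# Hypothetical rate constants (1/min) and plasma level.
K1, k2, cp = 0.1, 0.05, 1.0
vt = K1 / k2
# After several half-lives of k2 the tissue-to-plasma ratio has
# converged to Vt; here t = 120 min, i.e. 6 time constants.
ratio_late = one_tissue_ct(120.0, K1, k2, cp) / cp
```

The same ratio at early times is dominated by K1 (delivery), matching the observation that early B/I scans track perfusion.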
Abstract:
One of the challenges for structural engineers during design is considering how the structure will respond to crowd-induced dynamic loading. It has been shown that human occupants of a structure do not simply add mass to the system when considering its overall dynamic response, but interact with it and may change the dynamic properties from those of the empty structure. This study presents an investigation into human-structure interaction based on several crowd characteristics and their effect on the dynamic properties of an empty structure. The dynamic properties, including frequency, damping, and mode shapes, were estimated for a single test structure by means of experimental modal analysis techniques. The same techniques were used to estimate the dynamic properties when the test structure was occupied by a crowd with different combinations of size, posture, and distribution. The goal of this study is to isolate the occupant characteristics in order to determine the significance of each when designing new structures to avoid crowd serviceability issues. The results are presented and summarized based on the level of influence of each characteristic. The posture that produces the most significant effects within the scope of this research is standing with bent knees, with a maximum decrease in the frequency of the first mode of the empty structure of 32 percent at the highest mass ratio. The associated damping also increased to 36 times the damping of the empty structure. In addition to the analysis of the experimental data, finite element models and a two degree-of-freedom model were created. These models were used to gain an understanding of the test structure, to model a crowd as an equivalent mass, and to develop a single degree-of-freedom (SDOF) model that best represents a crowd of occupants based on the experimental results.
The SDOF models created had an average frequency of 5.0 Hz, within the range presented in existing biomechanics research, and combined SDOF systems of the test structure and crowd were able to reproduce the frequencies and damping ratios associated with the experimental tests. The results of this study confirmed the existence of human-structure interaction and the inability to model a crowd simply as additional mass. The two degree-of-freedom model was able to predict the change in natural frequency and damping ratio for a structure occupied by multiple group sizes in a single posture. These results and the model are preliminary steps in the development of an appropriate method for modeling a crowd in combination with a more complex FE model of the empty structure.
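The frequency shift caused by an attached crowd can be sketched with an undamped two degree-of-freedom model: the structure as (m1, k1) on the ground and the crowd as an SDOF system (m2, k2) attached to it. The numbers below are hypothetical, chosen only so the crowd alone sits near the ~5 Hz average reported for the fitted SDOF crowd models:

```python
import math

def two_dof_frequencies(m1, k1, m2, k2):
    """Undamped natural frequencies (Hz) of a grounded structure (m1, k1)
    with an attached SDOF occupant model (m2, k2); the characteristic
    equation is m1*m2*w^4 - (m1*k2 + m2*(k1 + k2))*w^2 + k1*k2 = 0."""
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    disc = math.sqrt(b * b - 4 * a * c)
    w_sq = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
    return [math.sqrt(w) / (2 * math.pi) for w in w_sq]

# Hypothetical values: empty structure at 7 Hz, crowd model at 5 Hz.
m1, f1, m2, f2 = 1000.0, 7.0, 300.0, 5.0
k1 = m1 * (2 * math.pi * f1) ** 2
k2 = m2 * (2 * math.pi * f2) ** 2
f_low, f_high = two_dof_frequencies(m1, k1, m2, k2)
```

The coupling splits the modes, pushing one frequency below the crowd's and one above the empty structure's, which an equivalent added mass alone cannot reproduce.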
Abstract:
The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper a new moment-type estimator for the number of motor units in a muscle is defined, derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown, and its practical value is demonstrated with bootstrap and approximate confidence intervals for a data set from a 31-year-old healthy, right-handed female volunteer. Moreover, simulation results are presented, and Monte Carlo based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.
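The idea of a moment estimator under thinning can be illustrated with a toy version: if each of N units is observed independently with a known retention probability p, then E[X] = Np, so the sample mean divided by p estimates N. The sketch below (with a percentile bootstrap interval) is a deliberate simplification of the setting, not the paper's actual estimator:

```python
import random

def moment_estimate(samples, p):
    """Method-of-moments estimate of N from thinned counts:
    E[X] = N*p, so N_hat = mean(X) / p for known p."""
    return sum(samples) / len(samples) / p

random.seed(1)
N_true, p = 600, 0.3                  # hypothetical motor-unit count
# Each observation independently thins the N_true units with prob p.
samples = [sum(random.random() < p for _ in range(N_true))
           for _ in range(200)]
N_hat = moment_estimate(samples, p)

# Percentile bootstrap interval for N_hat (500 resamples, ~95%).
boot = sorted(
    moment_estimate([random.choice(samples) for _ in samples], p)
    for _ in range(500)
)
ci = (boot[12], boot[487])
```

In the paper's setting the thinning probability is itself part of the stochastic model rather than known, which is what makes the actual estimator and its asymptotics non-trivial.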
Abstract:
The AEGISS (Ascertainment and Enhancement of Gastrointestinal Infection Surveillance and Statistics) project aims to use spatio-temporal statistical methods to identify anomalies in the space-time distribution of non-specific gastrointestinal infections in the UK, using the Southampton area in southern England as a test case. In this paper, we use the AEGISS project to illustrate how spatio-temporal point process methodology can be used in the development of a rapid-response spatial surveillance system. Current surveillance of gastroenteric disease in the UK relies on general practitioners reporting cases of suspected food poisoning through a statutory notification scheme, voluntary laboratory reports of the isolation of gastrointestinal pathogens, and standard reports of general outbreaks of infectious intestinal disease by public health and environmental health authorities. However, most statutory notifications are made only after a laboratory reports the isolation of a gastrointestinal pathogen. As a result, detection is delayed and the ability to react to an emerging outbreak is reduced. For more detailed discussion, see Diggle et al. (2003). A new and potentially valuable source of data on the incidence of non-specific gastroenteric infections in the UK is NHS Direct, a 24-hour phone-in clinical advice service. NHS Direct data are less likely than reports by general practitioners to suffer from spatially and temporally localized inconsistencies in reporting rates. Also, reporting delays by patients are likely to be reduced, as no appointments are needed. Against this, NHS Direct data sacrifice specificity: each call to NHS Direct is classified only according to the general pattern of reported symptoms (Cooper et al., 2003). The current paper focuses on the use of spatio-temporal statistical analysis for early detection of unexplained variation in the spatio-temporal incidence of non-specific gastroenteric symptoms, as reported to NHS Direct.
Section 2 describes our statistical formulation of this problem, the nature of the available data and our approach to predictive inference. Section 3 describes the stochastic model. Section 4 gives the results of fitting the model to NHS Direct data. Section 5 shows how the model is used for spatio-temporal prediction. The paper concludes with a short discussion.
Abstract:
The etiology of complex diseases is heterogeneous. The presence of risk alleles in one or more genetic loci affects the function of a variety of intermediate biological pathways, resulting in the overt expression of disease. Hence, there is an increasing focus on identifying the genetic basis of disease by systematically studying phenotypic traits pertaining to the underlying biological functions. In this paper we focus on identifying genetic loci linked to quantitative phenotypic traits in experimental crosses. Such genetic mapping methods often use a one-stage design, genotyping all the markers of interest on the available subjects. A genome scan based on single-locus or multi-locus models is used to identify the putative loci. Since the number of quantitative trait loci (QTLs) is very likely to be small relative to the number of markers genotyped, a one-stage selective genotyping approach is commonly used to reduce the genotyping burden, whereby markers are genotyped solely on individuals with extreme trait values. This approach is powerful in the presence of a single quantitative trait locus (QTL) but may result in substantial loss of information in the presence of multiple QTLs. Here we investigate the efficiency of sequential two-stage designs to identify QTLs in experimental populations. Our investigations for backcross and F2 crosses suggest that genotyping all the markers on 60% of the subjects in Stage 1, genotyping the chromosomes significant at the 20% level on additional subjects in Stage 2, and testing using all the subjects provides an efficient approach to identify the QTLs, using only 70% of the genotyping burden of a one-stage design, regardless of heritability and genotyping density. Complex traits are a consequence of multiple QTLs conferring main effects as well as epistatic interactions.
We propose a two-stage analytic approach where a single-locus genome scan is conducted in Stage 1 to identify promising chromosomes, and interactions are examined using the loci on these chromosomes in Stage 2. We examine settings under which the two-stage analytic approach provides sufficient power to detect the putative QTLs.
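The quoted ~70% genotyping burden follows from simple bookkeeping: Stage 1 types every marker on a fraction of the subjects, and Stage 2 types only the selected chromosomes on the remaining subjects. A minimal sketch (assuming, as a simplification, that the chromosomes significant at the 20% level carry about 20% of the markers):

```python
def relative_genotyping_burden(stage1_frac, chrom_frac):
    """Genotyping burden of the sequential two-stage design relative to a
    one-stage design: Stage 1 types all markers on stage1_frac of the
    subjects; Stage 2 types only the selected fraction of chromosomes
    (and hence of markers) on the remaining subjects."""
    return stage1_frac + (1 - stage1_frac) * chrom_frac

# 60% of subjects in Stage 1, chromosomes significant at the 20% level
# carried forward -> roughly the 70% burden quoted in the abstract.
burden = relative_genotyping_burden(0.60, 0.20)
```

0.60 + 0.40 * 0.20 = 0.68, i.e. about 70% of the one-stage genotyping effort, independently of the absolute numbers of subjects and markers.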