920 results for multiple change-points
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Seasonal variations in the stable isotopic composition of snow and meltwater were investigated in a sub-arctic, mountainous, but non-glacial, catchment at Okstindan in northern Norway based on analyses of δ18O and δD. Samples were collected during four field periods (August 1998; April 1999; June 1999 and August 1999) at three sites lying on an altitudinal transect (740-970 m a.s.l.). Snowpack data display an increase in the mean values of δ18O (increasing from a mean value of -13.51 to -11.49‰ between April and August), as well as a decrease in variability through the melt period. Comparison with a regional meteoric water line indicates that the slope of the δ18O-δD line for the snowpacks decreases over the same period, dropping from 7.49 to approximately 6.2. This change points to the role of evaporation in snowpack ablation and is confirmed by the vertical profile of deuterium excess. Snowpack seepage data, although limited, also suggest reduced values of δD, as might be associated with local evaporation during meltwater generation. In general, meltwaters were depleted in δ18O relative to the source snowpack at the peak of the melt (June), but later in the year (August) the difference between the two was not statistically significant. The diurnal pattern of isotopic composition indicates that the most depleted meltwaters coincide with the peak in temperature and, hence, meltwater production.
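As a rough illustration of the quantities this abstract discusses, the sketch below computes deuterium excess (d = δD − 8·δ18O) and the least-squares δD-δ18O slope for a handful of hypothetical snowpack samples; the numbers are invented for illustration, not the study's data.

```python
# Hypothetical snowpack samples: (delta18O, deltaD) in per mil (NOT the study's data).
samples = [(-14.2, -104.0), (-13.5, -99.5), (-12.8, -95.2), (-11.9, -89.8)]

def d_excess(d18o, dD):
    # Deuterium excess as conventionally defined: d = deltaD - 8 * delta18O.
    return dD - 8.0 * d18o

def lsq_slope(points):
    # Ordinary least-squares slope of deltaD against delta18O.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

excesses = [d_excess(x, y) for x, y in samples]
slope = lsq_slope(samples)
# A slope well below 8 (the global meteoric water line) is the signature of
# evaporative enrichment that the abstract describes.
```

A slope near 6 for these invented values mirrors the kind of drop from ~7.5 toward ~6.2 reported in the abstract.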
Abstract:
Most haptic environments are based on single-point interactions, whereas in practice object manipulation requires multiple contact points between the object, fingers, thumb and palm. The Friction Cone Algorithm was developed specifically to work well in a multi-finger haptic environment where object manipulation would occur. However, the Friction Cone Algorithm has two shortcomings when applied to polygon meshes: there is no means of transitioning across polygon boundaries or of feeling non-convex edges. In order to overcome these deficiencies, Face Directed Connection Graphs have been developed, as well as a robust method for applying friction to non-convex edges. Both of these extensions are described herein, as well as the implementation issues associated with them.
Abstract:
Bayesian Model Averaging (BMA) is used for testing for multiple break points in univariate series using conjugate normal-gamma priors. This approach can test for the number of structural breaks and produce posterior probabilities for a break at each point in time. Results are averaged over specifications including: stationary; stationary around trend and unit root models, each containing different types and number of breaks and different lag lengths. The procedures are used to test for structural breaks on 14 annual macroeconomic series and 11 natural resource price series. The results indicate that there are structural breaks in all of the natural resource series and most of the macroeconomic series. Many of the series had multiple breaks. Our findings regarding the existence of unit roots, having allowed for structural breaks in the data, are largely consistent with previous work.
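The paper's conjugate normal-gamma BMA machinery is more elaborate than can be shown here, but the idea of producing a posterior probability for a break at each point in time can be sketched with a simplified, BIC-weighted enumeration of single-break mean-shift models. This is a toy under stated assumptions, not the authors' method.

```python
import math

def segment_loglik(xs):
    # Gaussian log-likelihood of a segment evaluated at its MLE mean/variance.
    n = len(xs)
    mu = sum(xs) / n
    var = max(sum((x - mu) ** 2 for x in xs) / n, 1e-12)  # guard against zero variance
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def break_posterior(series, min_seg=3):
    # Approximate posterior over single break locations via BIC weights.
    # (The paper averages over far richer specifications; this is a sketch.)
    scores = {}
    for k in range(min_seg, len(series) - min_seg + 1):
        ll = segment_loglik(series[:k]) + segment_loglik(series[k:])
        scores[k] = ll - 0.5 * 4 * math.log(len(series))  # 2 means + 2 variances
    m = max(scores.values())
    weights = {k: math.exp(s - m) for k, s in scores.items()}
    z = sum(weights.values())
    return {k: w / z for k, w in weights.items()}

# Series with an obvious mean shift at index 10:
data = [0.0, 0.2, -0.1, 0.1, 0.0, -0.2, 0.1, 0.0, 0.2, -0.1,
        5.0, 5.2, 4.9, 5.1, 5.0, 4.8, 5.1, 5.0, 5.2, 4.9]
post = break_posterior(data)
best = max(post, key=post.get)  # posterior mode: the most probable break point
```

The posterior mass concentrates at the true break, illustrating how a "probability of a break at each point in time" can be read off directly.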
Abstract:
The emergence of high-density wireless local area network (WLAN) deployments in recent years is a testament to the insatiable demands for wireless broadband services. The increased density of WLAN deployments brings with it the potential of increased capacity, extended coverage, and exciting new applications. However, the corresponding increase in contention and interference can significantly degrade throughputs, unless new challenges in channel assignment are effectively addressed. In this paper, a client-assisted channel assignment scheme that can provide enhanced throughput is proposed. A study on the impact of interference on throughput with multiple access points (APs) is first undertaken using a novel approach that determines the possibility of parallel transmissions. A metric with a good correlation to the throughput, i.e., the number of conflict pairs, is used in the client-assisted minimum conflict pairs (MICPA) scheme. In this scheme, measurements from clients are used to assist the AP in determining the channel with the minimum number of conflict pairs to maximize its expected throughput. Simulation results show that the client-assisted MICPA scheme can provide meaningful throughput improvements over other schemes that only utilize the AP’s measurements.
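The core decision rule described above — pick the channel that minimizes the number of client-reported conflict pairs — can be sketched as follows. The report structure and AP names are hypothetical placeholders, not the paper's protocol.

```python
# Hypothetical client reports: for each candidate channel, the set of
# interfering-link pairs ("conflict pairs") that clients observe there.
reports = {
    1:  {("AP1", "AP3"), ("AP2", "AP3"), ("AP1", "AP2")},
    6:  {("AP1", "AP3")},
    11: {("AP2", "AP3"), ("AP1", "AP3")},
}

def pick_channel(conflicts_by_channel):
    # MICPA-style rule: choose the channel with the fewest conflict pairs,
    # since that metric correlates well with achievable throughput.
    return min(conflicts_by_channel, key=lambda ch: len(conflicts_by_channel[ch]))

best = pick_channel(reports)  # channel 6 here: only one conflict pair
```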
Abstract:
The components of many signaling pathways have been identified and there is now a need to conduct quantitative data-rich temporal experiments for systems biology and modeling approaches to better understand pathway dynamics and regulation. Here we present a modified Western blotting method that allows the rapid and reproducible quantification and analysis of hundreds of data points per day on proteins and their phosphorylation state at individual sites. The approach is of particular use where samples show a high degree of sample-to-sample variability such as primary cells from multiple donors. We present a case study on the analysis of >800 phosphorylation data points from three phosphorylation sites in three signaling proteins over multiple time points from platelets isolated from ten donors, demonstrating the technique's potential to determine kinetic and regulatory information from limited cell numbers and to investigate signaling variation within a population. We envisage the approach being of use in the analysis of many cellular processes such as signaling pathway dynamics to identify regulatory feedback loops and the investigation of potential drug/inhibitor responses, using primary cells and tissues, to generate information about how a cell's physiological state changes over time.
Abstract:
Personalized communication is when the marketing message is adapted to each individual by using information from a database and utilizing it in the various media channels available today. That gives the marketer the possibility to create a campaign that cuts through today's clutter of marketing messages and gets the recipient's attention. PODi is a non-profit organization that was started with the aim of contributing knowledge in the field of digital printing technologies. They have created a database of case studies showing companies that have successfully implemented personalized communication in their marketing campaigns. The purpose of the project was therefore to analyze PODi case studies with the main objective of finding out how successful the PODi cases have been and what made them so successful. To collect the data found in the PODi cases the authors did a content analysis with a sample size of 140 PODi cases from the years 2008 to 2010. The study was carried out by analyzing the cases' measurable ways of success: response rate, conversion rate, visited PURLs (personalized URLs) and ROI (Return On Investment). In order to find out if there were any relationships between the measurable results and the type of industry, campaign objective and media vehicle used in the campaign, the authors posed different research questions to explore that. After clustering and merging the collected data, the results were found to be quite spread, but show that the averages of response rates, visited PURLs and conversion rates were consistently very high. In the study the authors also collected and summarized what the companies themselves claim to be the reasons for success with their marketing campaigns. The result shows that the creation of a personalized campaign is complex and dependent on many different variables.
It is, for instance, of great importance to have a well thought-out plan for the campaign and to have good data and insights about the customer in order to perform creative personalization. It is also important to make it easy for the recipient to reply, to use several media vehicles for multiple touch points and to have an attractive and clever design.
Abstract:
The complex behavior of a wide variety of phenomena that are of interest to physicists, chemists, and engineers has been quantitatively characterized by using the ideas of fractal and multifractal distributions, which correspond in a unique way to the geometrical shape and dynamical properties of the systems under study. In this thesis we present the Space of Fractals and the Hausdorff-Besicovitch, box-counting and scaling methods for calculating the fractal dimension of a set. We also investigate percolation phenomena in multifractal objects that are built in a simple way. The central object of our analysis is a multifractal object that we call Qmf. In these objects the multifractality comes directly from the geometric tiling. We identify some differences between percolation in the proposed multifractals and in a regular lattice. There are basically two sources of these differences. The first is related to the coordination number, c, which changes along the multifractal. The second comes from the way the weight of each cell in the multifractal affects the percolation cluster. We use many samples of finite-size lattices and draw the histogram of percolating lattices against site occupation probability p. Depending on a parameter, ρ, characterizing the multifractal and the lattice size, L, the histogram can have two peaks. We observe that the probability of occupation at the percolation threshold, pc, for the multifractal is lower than that for the square lattice. We compute the fractal dimension of the percolating cluster and the critical exponent β. Despite the topological differences, we find that percolation in a multifractal support is in the same universality class as standard percolation. The area and the number of neighbors of the blocks of Qmf show a non-trivial behavior. A general view of the object Qmf shows an anisotropy. The value of pc is a function of ρ, which is related to this anisotropy.
We investigate the relation between pc and the average number of neighbors of the blocks, as well as the anisotropy of Qmf. In this thesis we likewise study the distribution of shortest paths in percolation systems at the percolation threshold in two dimensions (2D). We study paths from one given point to multiple other points. In oil-recovery terminology, the given single point can be mapped to an injection well (injector) and the multiple other points to production wells (producers). In the previously standard case of one injection well and one production well separated by Euclidean distance r, the distribution of shortest paths l, P(l|r), shows a power-law behavior with exponent gl = 2.14 in 2D. Here we analyze the situation of one injector and an array A of producers. Symmetric arrays of producers lead to one peak in the distribution P(l|A), the probability that the shortest path between the injector and any of the producers is l, while asymmetric configurations lead to several peaks in the distribution. We analyze configurations in which the injector is outside and inside the set of producers. The peak in P(l|A) for the symmetric arrays decays faster than for the standard case. For very long paths all the studied arrays exhibit a power-law behavior with exponent g ≈ gl.
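The shortest ("chemical") path from an injector to the nearest of several producers on a percolation cluster can be measured with a breadth-first search. The following is a minimal, self-contained sketch on a site-percolation lattice — a toy version of the setup behind P(l|A), not the thesis code.

```python
import random
from collections import deque

def shortest_chemical_path(L, p, injector, producers, seed=0):
    # Site percolation on an L x L lattice: occupy each site with probability p,
    # then BFS outward from the injector; return the shortest path length to any
    # occupied producer, or None if no producer is reachable.
    rng = random.Random(seed)
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    occ[injector[0]][injector[1]] = True  # force the injector site occupied
    targets = {pr for pr in producers if occ[pr[0]][pr[1]]}
    dist = {injector: 0}
    q = deque([injector])
    while q:
        x, y = q.popleft()
        if (x, y) in targets:
            return dist[(x, y)]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < L and 0 <= ny < L and occ[nx][ny] and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return None
```

Repeating this over many lattice realizations at the percolation threshold and histogramming the returned lengths yields an empirical P(l|A).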
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Paracoccidioides brasiliensis is a thermally dimorphic fungus, and causes the most prevalent systemic mycosis in Latin America. Infection is initiated by inhalation of conidia or mycelial fragments by the host, followed by further differentiation into the yeast form. Information regarding gene expression by either form has rarely been addressed with respect to multiple time points of growth in culture. Here, we report on the construction of a genomic DNA microarray, covering approximately 25% of the genome of the organism, and its utilization in identifying genes and gene expression patterns during growth in vitro. Cloned, amplified inserts from randomly sheared genomic DNA (gDNA) and known control genes were printed onto glass slides to generate a microarray of over 12 000 elements. To examine gene expression, mRNA was extracted and amplified from mycelial or yeast cultures grown in semi-defined medium for 5, 8 and 14 days. Principal components analysis and hierarchical clustering indicated that yeast gene expression profiles differed greatly from those of mycelia, especially at earlier time points, and that mycelial gene expression changed less than gene expression in yeasts over time. Genes upregulated in yeasts were found to encode proteins shown to be involved in methionine/cysteine metabolism, respiratory and metabolic processes (of sugars, amino acids, proteins and lipids), transporters (small peptides, sugars, ions and toxins), regulatory proteins and transcription factors. Mycelial genes involved in processes such as cell division, protein catabolism, nucleotide biosynthesis and toxin and sugar transport showed differential expression. Sequenced clones were compared with Histoplasma capsulatum and Coccidioides posadasii genome sequences to assess potentially common pathways across species, such as sulfur and lipid metabolism, amino acid transporters, transcription factors and genes possibly related to virulence. 
We also analysed gene expression with time in culture and found that while transposable elements and components of respiratory pathways tended to increase in expression with time, genes encoding ribosomal structural proteins and protein catabolism tended to sharply decrease in expression over time, particularly in yeast. These findings expand our knowledge of the different morphological forms of P. brasiliensis during growth in culture.
Abstract:
Graduate Program in Oral Biopathology (Pós-graduação em Biopatologia Bucal) - ICT
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Nonlinear time series analysis has in recent years proven its basic usefulness in laboratory experiments. As a rule, however, these involved selected or specially constructed nonlinear systems. Apart from the monitoring of processes and products, applications to concrete, given dynamical problems in the industrial domain have hardly become known. The aim of this work was to investigate, by means of two problems from engineering practice, whether applying the canonical scheme of nonlinear time series analysis also leads to usable results there, or whether modifications (simplifications or extensions) become necessary. Using the example of producing optical surfaces by high-precision turning, it was shown that active disturbance compensation in real time is possible with a specially developed nonlinear prediction algorithm. Standard methods of nonlinear time series analysis take the general but very costly route via a phase-space reconstruction that is as complete as possible. The new method dispenses with many of the canonical intermediate steps. This leads to considerable savings in computing time and, in addition, to substantially higher stability against additive measurement noise. With the computed predictions of the undesired machine vibrations, a disturbance compensation was realized that improved the surface quality of the machined workpiece by 20-30%. The second example concerns the classification of structure-borne sound signals measured for monitoring machining processes. Like many other processes in production, these signals show highly non-stationary behavior. Here the standard methods of nonlinear data analysis that use FT or AAFT surrogates fail.
A new class of surrogate data for testing the null hypothesis of non-stationary linear stochastic processes was therefore developed, which is able to distinguish between deterministic nonlinear chaotic and stochastic linear non-stationary time series with change points. It could thus be shown that the investigated structure-borne sound signals can, with statistical significance, be assigned to a non-stationary stochastic sequence of simple linear processes, and that an interpretation as a nonlinear chaotic time series is not required.
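For context, a classic Fourier-transform (FT) surrogate — the stationary-null construction that this thesis found inadequate for non-stationary signals — keeps the power spectrum of a series while randomizing its phases. A dependency-free sketch (naive O(n²) DFT, fine for short series):

```python
import cmath
import random

def ft_surrogate(x, seed=0):
    # Classic FT surrogate: preserve the power spectrum, randomize the phases.
    # This preserves linear autocorrelations, realizing a stationary linear null.
    n = len(x)
    rng = random.Random(seed)
    # Forward DFT (naive, dependency-free).
    X = [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
         for k in range(n)]
    # Randomize phases, keeping conjugate symmetry so the inverse stays real.
    Y = X[:]
    for k in range(1, (n + 1) // 2):
        phi = rng.uniform(0.0, 2 * cmath.pi)
        Y[k] = abs(X[k]) * cmath.exp(1j * phi)
        Y[n - k] = Y[k].conjugate()
    # Inverse DFT; take real parts (imaginary parts are numerical noise).
    return [(sum(Y[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n).real
            for t in range(n)]

x = [0.0, 1.0, 2.0, 3.0, 1.0, 0.5, 2.0, 3.0]
s = ft_surrogate(x, seed=1)
```

Because only phases change, the surrogate has the same mean and total power as the original (Parseval), while any nonlinear or non-stationary structure is destroyed — which is exactly why such surrogates misfire as a null for genuinely non-stationary signals.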
Abstract:
Objectives To examine the extent of multiplicity of data in trial reports and to assess the impact of multiplicity on meta-analysis results. Design Empirical study on a cohort of Cochrane systematic reviews. Data sources All Cochrane systematic reviews published from issue 3 in 2006 to issue 2 in 2007 that presented a result as a standardised mean difference (SMD). We retrieved trial reports contributing to the first SMD result in each review, and downloaded review protocols. We used these SMDs to identify a specific outcome for each meta-analysis from its protocol. Review methods Reviews were eligible if SMD results were based on two to ten randomised trials and if protocols described the outcome. We excluded reviews if they only presented results of subgroup analyses. Based on review protocols and index outcomes, two observers independently extracted the data necessary to calculate SMDs from the original trial reports for any intervention group, time point, or outcome measure compatible with the protocol. From the extracted data, we used Monte Carlo simulations to calculate all possible SMDs for every meta-analysis. Results We identified 19 eligible meta-analyses (including 83 trials). Published review protocols often lacked information about which data to choose. Twenty-four (29%) trials reported data for multiple intervention groups, 30 (36%) reported data for multiple time points, and 29 (35%) reported the index outcome measured on multiple scales. In 18 meta-analyses, we found multiplicity of data in at least one trial report; the median difference between the smallest and largest SMD results within a meta-analysis was 0.40 standard deviation units (range 0.04 to 0.91). Conclusions Multiplicity of data can affect the findings of systematic reviews and meta-analyses. To reduce the risk of bias, reviews and meta-analyses should comply with prespecified protocols that clearly identify time points, intervention groups, and scales of interest.
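The multiplicity problem the review quantifies can be made concrete: a single trial reporting the index outcome on two scales and at two time points already yields four eligible standardised mean differences (SMDs). A minimal sketch with invented numbers, using the common pooled-SD form of the SMD (Cohen's d):

```python
import math

def smd(mean1, sd1, n1, mean2, sd2, n2):
    # Standardised mean difference with pooled standard deviation (Cohen's d).
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / sp

# One hypothetical trial reporting the index outcome on two scales and at
# two time points -> four eligible SMDs (illustrative numbers only).
arms = [
    # (treatment mean, sd, n, control mean, sd, n)
    (12.0, 4.0, 30, 10.0, 4.5, 30),    # scale A, earlier time point
    (13.0, 4.2, 30, 10.5, 4.4, 30),    # scale A, later time point
    (55.0, 15.0, 30, 48.0, 16.0, 30),  # scale B, earlier time point
    (58.0, 14.0, 30, 49.0, 15.5, 30),  # scale B, later time point
]
smds = [smd(*row) for row in arms]
spread = max(smds) - min(smds)  # the multiplicity-induced spread the study measures
```

Enumerating such choices across all trials of a meta-analysis (as the authors did via Monte Carlo simulation) reveals how far the pooled result can move depending on which data are selected.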