995 results for sequential methods


Relevance:

20.00%

Publisher:

Abstract:

We consider the problem of detecting statistically significant sequential patterns in multineuronal spike trains. These patterns are characterized by ordered sequences of spikes from different neurons with specific delays between spikes. We have previously proposed a data-mining scheme to efficiently discover such patterns, which occur often enough in the data. Here we propose a method to determine the statistical significance of such repeating patterns. The novelty of our approach is that we use a compound null hypothesis that not only includes models of independent neurons but also models where neurons have weak dependencies. The strength of interaction among the neurons is represented in terms of certain pair-wise conditional probabilities. We specify our null hypothesis by putting an upper bound on all such conditional probabilities. We construct a probabilistic model that captures the counting process and use this to derive a test of significance for rejecting such a compound null hypothesis. The structure of our null hypothesis also allows us to rank-order different significant patterns. We illustrate the effectiveness of our approach using spike trains generated with a simulator.
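As a rough illustration of the counting-and-testing idea (a simplified stand-in, not the authors' actual counting-process model), the sketch below bounds the per-window occurrence probability of a pattern under the compound null by `p0` and computes a binomial tail p-value; the counts and the bound are hypothetical:

```python
from scipy.stats import binom

def pattern_p_value(n_occurrences, n_windows, p0):
    """Upper-tail p-value for observing >= n_occurrences of a pattern
    in n_windows windows, when the compound null hypothesis bounds the
    per-window occurrence probability by p0."""
    # sf(k - 1) = P(X >= k) for X ~ Binomial(n_windows, p0)
    return binom.sf(n_occurrences - 1, n_windows, p0)

# Hypothetical numbers: 12 repeats in 1000 windows, null bound 0.5%
p = pattern_p_value(12, 1000, 0.005)
reject = p < 0.01
```

A looser bound on the conditional probabilities raises the null occurrence rate and therefore the p-value, which is what allows significant patterns to be rank-ordered by the tightest bound they still reject.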

Relevance:

20.00%

Publisher:

Abstract:

Aims: To gain insight into the immunological processes behind cow's milk allergy (CMA) and the development of oral tolerance, and to further investigate the associations of HLA II and filaggrin genotypes with humoral responses to early oral antigens. Methods: The study population was from a cohort of 6209 healthy, full-term infants who in a double-blind randomized trial received supplementary feeding at maternity hospitals (mean duration 4 days): cow's milk (CM) formula, extensively hydrolyzed whey formula or donor breast milk. Infants who developed CM-associated symptoms that subsided during an elimination diet (n=223) underwent an open oral CM challenge (at mean age 7 months). The challenge was negative in 112, and in 111 it confirmed CMA, which was IgE-mediated in 83. Patients with CMA were followed until recovery, and 94 of them participated in a follow-up study at age 8-9 years. We investigated serum samples at diagnosis (mean age 7 months, n=111), one year later (19 months, n=101) and at follow-up (8.6 years, n=85). At follow-up, 76 children randomly selected from the original cohort and without CM-associated symptoms were also included. We measured CM-specific IgE levels with UniCAP (Phadia, Uppsala, Sweden), and β-lactoglobulin, α-casein and ovalbumin specific IgA, IgG1, IgG4 and IgG levels with enzyme-linked immunosorbent assay in sera. We applied a microarray-based immunoassay to measure the binding of IgE, IgG4 and IgA serum antibodies to sequential epitopes derived from five major CM proteins at the three time points in 11 patients with active IgE-mediated CMA at age 8-9 years and in 12 patients who had recovered from IgE-mediated CMA by age 3 years. We used bioinformatic methods to analyze the microarray data. We studied the T cell expression profile in peripheral blood mononuclear cell (PBMC) samples from 57 children aged 5-12 years (median 8.3): 16 with active CMA, 20 who had recovered from CMA by age 3 years, and 21 non-atopic control subjects.
Following in vitro β-lactoglobulin stimulation, we measured the mRNA expression in PBMCs of 12 T-cell markers (T-bet, GATA-3, IFN-γ, CTLA4, IL-10, IL-16, TGF-β, FOXP3, Nfat-C2, TIM3, TIM4, STIM-1) with quantitative real-time polymerase chain reaction, and the protein expression of CD4, CD25, CD127 and FoxP3 with flow cytometry. To optimally distinguish the three study groups, we trained artificial neural networks with an exhaustive search over all marker combinations. For genetic associations with specific humoral responses, we analyzed 14 HLA class II haplotypes, the PTPN22 1858 SNP (R620W allele) and 5 known filaggrin null mutations from blood samples of 87 patients with CMA and 76 control subjects (age 8.0-9.3 years). Results: High IgG and IgG4 levels to β-lactoglobulin and α-casein were associated with the HLA (DR15)-DQB1*0602 haplotype in patients with CMA, but not in control subjects. Conversely, (DR1/10)-DQB1*0501 was associated with lower IgG and IgG4 levels to these CM antigens, and to ovalbumin, most significantly among control subjects. Infants with IgE-mediated CMA had lower β-lactoglobulin and α-casein specific IgG1, IgG4 and IgG levels (p<0.05) at diagnosis than infants with non-IgE-mediated CMA or control subjects. When CMA persisted beyond age 8 years, CM-specific IgE levels were higher at all three time points investigated and the IgE epitope-binding pattern remained stable (p<0.001) compared with recovery from CMA by age 3 years. Patients with persisting CMA at 8-9 years had lower serum IgA levels to β-lactoglobulin at diagnosis (p=0.01), and lower IgG4 levels to β-lactoglobulin (p=0.04) and α-casein (p=0.05) at follow-up, compared with patients who recovered by age 3 years. In early recovery, the IgG4 epitope-binding signal increased while that of IgE decreased over time, and the binding patterns of IgE and IgG4 overlapped.
In the T cell expression profile in response to β-lactoglobulin, the combination of the markers FoxP3, Nfat-C2, IL-16 and GATA-3 distinguished patients with persisting CMA most accurately from patients who had become tolerant and from non-atopic subjects. FoxP3 expression at both the RNA and protein level was higher in children with CMA than in non-atopic children. Conclusions: Genetic factors (the HLA II genotype) are associated with humoral responses to early food allergens. High CM-specific IgE levels predict persistence of CMA. Development of tolerance is associated with higher specific IgA and IgG4 levels and lower specific IgE levels, with decreased CM epitope binding by IgE and a concurrent increase in corresponding epitope binding by IgG4. Both Th2 and Treg pathways are activated upon CM antigen stimulation in patients with CMA. In the clinical management of CMA, HLA II or filaggrin genotyping is not applicable, whereas the measurement of CM-specific antibodies may assist in estimating the prognosis.

Relevance:

20.00%

Publisher:

Abstract:

Visual content is a critical component of everyday social media, on platforms explicitly framed around the visual (Instagram and Vine), on those offering a mix of text and images in myriad forms (Facebook, Twitter, and Tumblr), and in apps and profiles where visual presentation and provision of information are important considerations. However, despite being so prominent in forms such as selfies, looping media, infographics, memes, online videos, and more, sociocultural research into the visual as a central component of online communication has lagged behind the analysis of popular, predominantly text-driven social media. This paper underlines the increasing importance of visual elements to digital, social, and mobile media within everyday life, addressing the significant research gap in methods for tracking, analysing, and understanding visual social media as both image-based and intertextual content. In this paper, we build on our previous methodological considerations of Instagram in isolation to examine further questions, challenges, and benefits of studying visual social media more broadly, including methodological and ethical considerations. Our discussion is intended as a rallying cry and provocation for further research into visual (and textual and mixed) social media content, practices, and cultures, mindful of both the specificities of each form, but also, and importantly, the ongoing dialogues and interrelations between them as communication forms.

Relevance:

20.00%

Publisher:

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, provided the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive in producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is crucial, especially if the inner meso-scale model domain is small.
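The inertial oscillation mechanism invoked for the LLJ can be illustrated with a minimal sketch: once friction drops in the afternoon, the frictionless ageostrophic wind rotates at the inertial frequency f. This is standard Coriolis dynamics, not the thesis model runs, and the latitude value is an assumption for the Gulf of Finland region:

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate (rad/s)

def inertial_period_hours(lat_deg):
    """Inertial oscillation period 2*pi/f, with Coriolis
    parameter f = 2 * Omega * sin(latitude)."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return 2.0 * math.pi / f / 3600.0

def ageostrophic_wind(u0, v0, lat_deg, t_seconds):
    """Frictionless ageostrophic wind vector rotates clockwise
    (Northern Hemisphere) at the inertial frequency f,
    conserving its speed."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    u = u0 * math.cos(f * t_seconds) + v0 * math.sin(f * t_seconds)
    v = -u0 * math.sin(f * t_seconds) + v0 * math.cos(f * t_seconds)
    return u, v

period = inertial_period_hours(60.0)  # roughly 14 h near 60 degrees N
```

The supergeostrophic phase of this rotation is what appears as the jet maximum a few hours after the frictional decoupling.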

Relevance:

20.00%

Publisher:

Abstract:

Objective: To perform spectral analysis of noise generated by equipment and activities in a level III neonatal intensive care unit (NICU) and to measure real-time sequential hourly noise levels over a 15-day period. Methods: Noise generated in the NICU by individual pieces of equipment and by activities was recorded with a digital spectral sound analyzer for spectral analysis over 0.5-8 kHz. Sequential hourly noise level measurements in all rooms of the NICU were made for 15 days using a digital sound pressure level meter. The independent-sample t test and one-way ANOVA were used to examine the statistical significance of the results. The study has 90% power to detect at least a 4 dB difference from the recommended maximum of 50 dB with 95% confidence. Results: The mean noise levels in the ventilator room and the stable room were 19.99 dB(A) sound pressure level (SPL) and 11.81 dB(A) SPL higher, respectively, than the recommended maximum of 50 dB(A) (p < 0.001). The equipment generated 19.11 dB SPL above the recommended norms in the 1-8 kHz spectrum. The activities generated 21.49 dB SPL above the recommended norms in the 1-8 kHz spectrum (p < 0.001). The ventilator and nebulisers produced excess noise of 8.5 dB SPL in the 0.5 kHz spectrum. Conclusion: The noise level in the NICU is unacceptably high. Spectral analysis of equipment and activity noise has shown noise predominantly in the 1-8 kHz spectrum. These levels warrant immediate implementation of noise reduction protocols as a standard of care in the NICU.
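One practical detail when summarising such measurements: sound pressure levels in dB must be energy-averaged, not arithmetically averaged, before comparing against a limit. A minimal sketch with hypothetical readings rather than the study's data:

```python
import math

def mean_spl_db(levels_db):
    """Energy-average sound pressure levels in dB:
    L_mean = 10 * log10(mean(10 ** (L / 10)))."""
    powers = [10 ** (level / 10.0) for level in levels_db]
    return 10.0 * math.log10(sum(powers) / len(powers))

def exceedance_db(levels_db, limit_db=50.0):
    """How far the energy-averaged level exceeds a recommended
    limit, e.g. the 50 dB(A) NICU maximum cited above."""
    return mean_spl_db(levels_db) - limit_db

readings = [68.0, 71.5, 66.0, 73.0]  # hypothetical hourly dB(A) readings
excess = exceedance_db(readings)
```

Because the energy mean weights loud intervals more heavily, it is always at least as large as the arithmetic mean of the dB values.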

Relevance:

20.00%

Publisher:

Abstract:

An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task that has been lacking a robust method until now. The methods are based on the solid foundation of statistical orbital inversion properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a loglinear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduce the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up.
Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods developed are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages typically spanning several apparitions have so far been found among designated observation sets each spanning less than 48 hours.
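The loglinear complexity follows a familiar pattern: map each observation set to a point in a low-dimensional space, then query a spatial index for nearby points instead of comparing all pairs. A minimal sketch of that pattern (the random 3-D "addresses" are hypothetical stand-ins for the reduced orbital-element coordinates used in the thesis):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# Hypothetical reduced representation: each of 1000 observation sets
# mapped to a point in a low-dimensional comparison space.
points = rng.uniform(0.0, 1.0, size=(1000, 3))
points[1] = points[0] + 1e-4  # plant one near-duplicate pair (a true linkage)

# A k-d tree retrieves all candidate linkages within a tolerance in
# roughly O(n log n) time, instead of the O(n^2) all-pairs comparison.
tree = cKDTree(points)
candidates = tree.query_pairs(r=1e-3)  # set of index pairs (i, j), i < j
```

Each surviving candidate pair would then be passed to the exact statistical orbital-inversion comparison, which is affordable because the index has already discarded the vast majority of pairs.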

Relevance:

20.00%

Publisher:

Abstract:

"We thank Mr Gilder for his considered comments and suggestions for alternative analyses of our data. We also appreciate Mr Gilder's support of our call for larger studies to contribute to the evidence base for preoperative loading with high-carbohydrate fluids..."

Relevance:

20.00%

Publisher:

Abstract:

Foliage density and leaf area index are important vegetation structure variables. They can be measured by several methods, but few have been tested in tropical forests, which have high structural heterogeneity. In this study, foliage density estimates by two indirect methods, the point quadrat and photographic methods, were compared with those obtained by direct leaf counts in the understorey of a wet evergreen forest in southern India. The point quadrat method has a tendency to overestimate, whereas the photographic method consistently and significantly underestimates foliage density. There was stratification within the understorey, with areas close to the ground having higher foliage densities.
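For reference, a common way to turn point-quadrat contact counts into a foliage density estimate divides the contact count by the probe length and a projection coefficient G (G of about 0.5 is the standard assumption for a spherical leaf-angle distribution). This is a generic sketch with hypothetical numbers, not the exact estimator used in the study:

```python
def foliage_density(contacts, probe_length_m, g_factor=0.5):
    """Point-quadrat estimate of foliage density (m^2 of leaf area
    per m^3): F = N / (G * L), where N is the number of leaf contacts
    along a probe of length L and G is the mean projection
    coefficient (~0.5 for a spherical leaf-angle distribution)."""
    return contacts / (g_factor * probe_length_m)

# Hypothetical probe: 14 leaf contacts along a 7 m vertical probe
density = foliage_density(14, 7.0)  # -> 4.0 m^2/m^3
```

The sensitivity of F to the assumed G is one reason indirect estimates can drift from direct leaf counts when the real leaf-angle distribution departs from the spherical assumption.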

Relevance:

20.00%

Publisher:

Abstract:

Wood is an important material for the construction and pulping industries. In the first part of this thesis, the microfibril angle of Sitka spruce wood was studied using x-ray diffraction. Sitka spruce (Picea sitchensis [Bong.] Carr.) is native to the west coast of North America, but due to its fast growth rate it has also been imported to Europe. So far, its nanometre-scale properties have not been systematically characterised. In this thesis the microfibril angle of Sitka spruce was shown to depend significantly on the origin of the tree in the first annual rings near the pith. Wood can be further processed to separate lignin from cellulose and hemicelluloses. Solid cellulose can act as a reducer for metal ions, and it is also a porous support for nanoparticles. By chemically reducing nickel or copper in the solid cellulose support it is possible to obtain small nanoparticles on the surfaces of the cellulose fibres. Cellulose-supported metal nanoparticles can potentially be used as environmentally friendly catalysts in organic chemistry reactions. In this thesis the sizes of the nickel- and copper-containing nanoparticles were studied using anomalous small-angle x-ray scattering and wide-angle x-ray scattering. The anomalous small-angle x-ray scattering experiments showed that the crystallite size of the copper oxide nanoparticles was the same as the size of the nanoparticles, so the nanoparticles were single crystals. The nickel-containing nanoparticles were amorphous, but crystallised upon heating. The size of the nanoparticles was observed to be smaller when the reduction of nickel was done in an aqueous ammonium hydrate medium than when it was done in aqueous solution. Lignin is typically seen as a side-product of the wood industries. Lignin is the second most abundant natural polymer on Earth, and it has the potential to be a useful material for many purposes in addition to being an energy source for pulp mills.
In this thesis, the morphology of several lignins, which were produced from wood by different separation methods, was studied using small-angle and ultra-small-angle x-ray scattering. It was shown that the fractal model previously proposed for the lignin structure does not apply to most of the extracted lignin types. The only lignin to which the fractal model could be applied was kraft lignin. In aqueous solutions the average shape of the low-molar-mass kraft lignin particles was observed to be elongated and flat. The average shape does not necessarily correspond to the shape of the individual particles because of the polydispersity of the fraction and the self-association of the particles. Lignins, and especially lignosulfonate, have many uses as dispersants, binders and emulsion stabilisers. In this thesis work the self-association of low-molar-mass lignosulfonate macromolecules was observed using small-angle x-ray scattering. By taking into account the polydispersity of the studied lignosulfonate fraction, the shape of the lignosulfonate particles was determined to be flat by fitting an oblate ellipsoidal model to the scattering intensity.
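As an illustration of how a characteristic particle size is extracted from small-angle scattering data in general, the sketch below performs a Guinier analysis on synthetic data (standard textbook procedure, not the lignin measurements or the ellipsoidal model fit described above):

```python
import numpy as np

def guinier_rg(q, intensity):
    """Fit ln I(q) = ln I0 - (Rg^2 / 3) * q^2 in the Guinier regime
    (valid roughly for q * Rg < 1.3) and return the radius of
    gyration Rg."""
    slope, _intercept = np.polyfit(q ** 2, np.log(intensity), 1)
    return float(np.sqrt(-3.0 * slope))

# Synthetic Guinier-regime data for a particle with Rg = 2.0 nm
rg_true = 2.0
q = np.linspace(0.05, 0.6, 50)                      # 1/nm, so q*Rg <= 1.2
intensity = 100.0 * np.exp(-(q * rg_true) ** 2 / 3.0)
rg_fit = guinier_rg(q, intensity)
```

Shape information beyond this single size parameter, such as the flat oblate geometry reported for lignosulfonate, requires fitting an explicit form-factor model over a wider q range.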

Relevance:

20.00%

Publisher:

Abstract:

This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what would be a plausible relation between SLEs and conformal field theory. The first article studies multiple SLEs: several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas.
The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
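For orientation, the random curves in question are defined through the chordal Loewner equation; the standard definitions (not specific to these articles) read:

```latex
% Chordal Loewner equation in the upper half-plane H:
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z,
% SLE(kappa): the driving function is rescaled Brownian motion,
W_t = \sqrt{\kappa}\, B_t,
% and the curve gamma is recovered from the conformal maps via
\gamma(t) = \lim_{y \to 0^+} g_t^{-1}(W_t + i y).
```

Reversibility asks whether the curve traced from the opposite endpoint has the same law, and duality relates the outer boundary of an SLE(κ) hull, κ > 4, to an SLE(16/κ) curve; both are statements about this one family of measures.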