944 results for Almost Optimal Density Function
Abstract:
We propose a generalization of the persistent random walk for dimensions greater than 1. Based on a cubic lattice, the model is suitable for an arbitrary dimension d. We study the continuum limit and obtain the equation satisfied by the probability density function for the position of the random walker. An exact solution is obtained for the projected motion along an axis. This solution, which is written in terms of the free-space solution of the one-dimensional telegrapher's equation, may open a new way to address the problem of light propagation through thin slabs.
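A rough Monte Carlo sketch of the kind of walk this abstract describes can be written in a few lines. The transition rule assumed here (keep the previous direction with probability `p_keep`, otherwise redraw uniformly among the 2d lattice directions) is an illustrative convention, not necessarily the paper's:

```python
import numpy as np

def persistent_walk(d=3, n_steps=10_000, p_keep=0.9, rng=None):
    """Persistent random walk on the d-dimensional cubic lattice.

    At each step the walker keeps its previous direction with
    probability p_keep; otherwise it draws a new direction uniformly
    from the 2*d axis directions.  (Illustrative convention only;
    the paper's transition rules may differ.)
    """
    rng = np.random.default_rng(rng)
    directions = np.vstack([np.eye(d, dtype=int), -np.eye(d, dtype=int)])
    pos = np.zeros(d, dtype=int)
    current = rng.integers(2 * d)
    for _ in range(n_steps):
        if rng.random() >= p_keep:
            current = rng.integers(2 * d)
        pos += directions[current]
    return pos

# Mean squared displacement over many walkers; persistence makes it
# grow faster than for a simple (memoryless) lattice walk.
samples = np.array([persistent_walk(d=2, n_steps=500, p_keep=0.8, rng=i)
                    for i in range(200)])
msd = (samples ** 2).sum(axis=1).mean()
```

Projecting `samples` onto a single axis gives the one-dimensional motion whose density the abstract relates to the telegrapher's equation.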
Abstract:
The stromal scaffold of the lymph node (LN) paracortex is built by fibroblastic reticular cells (FRCs). Conditional ablation of lymphotoxin-β receptor (LTβR) expression in LN FRCs and their mesenchymal progenitors in developing LNs revealed that LTβR signaling in these cells was not essential for the formation of LNs. Although T cell zone reticular cells had lost podoplanin expression, they still formed a functional conduit system and showed enhanced expression of myofibroblastic markers. However, essential immune functions of FRCs, including homeostatic chemokine and interleukin-7 expression, were impaired. These changes in T cell zone reticular cell function were associated with increased susceptibility to viral infection. Thus, myofibroblastic FRC precursors are able to generate the basic T cell zone infrastructure, whereas LTβR-dependent maturation of FRCs guarantees full immunocompetence and hence optimal LN function during infection.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed yield remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
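The core idea, relating hydraulic conductivity to electrical conductivity through a non-parametric multivariate kernel density, can be sketched with synthetic collocated samples. The data, the log-linear relation, and the grid-based conditional mean below are all illustrative assumptions, not the study's actual petrophysics:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic collocated "downhole" samples: log10 electrical
# conductivity (sigma) and a noisily correlated log10 hydraulic
# conductivity (K).  Purely illustrative numbers.
log_sigma = rng.normal(-1.5, 0.4, size=300)
log_K = 2.0 * log_sigma - 3.0 + rng.normal(0, 0.3, size=300)

# Non-parametric multivariate kernel density of the joint relation.
kde = gaussian_kde(np.vstack([log_sigma, log_K]))

def conditional_mean_logK(sigma_value, grid=np.linspace(-9.0, -3.0, 200)):
    """E[log K | log sigma] evaluated from the joint KDE on a grid."""
    pts = np.vstack([np.full_like(grid, sigma_value), grid])
    w = kde(pts)
    return float(np.sum(grid * w) / np.sum(w))

# Estimate log K where only the (regional-scale) electrical
# conductivity is known.
est = conditional_mean_logK(-1.5)
```

In the actual algorithm this conditional density would drive a sequential simulation at non-sampled locations rather than a single point estimate.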
Abstract:
A discussion on the expression proposed in [1]–[3] for deconvolving the wideband density function is presented. We prove here that such an expression reduces to be proportional to the wideband correlation receiver output, or continuous wavelet transform of the received signal with respect to the transmitted one. Moreover, we show that the same result has been implicitly assumed in [1], when the deconvolution equation is derived. We stress the fact that the analyzed approach is just the orthogonal projection of the density function onto the image of the wavelet transform with respect to the transmitted signal. Consequently, the approach can be considered a good representation of the density function only under the prior knowledge that the density function belongs to such a subspace. The choice of the transmitted signal is thus crucial to this approach.
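The equivalence discussed above is easy to exhibit numerically: the wideband correlation receiver output (inner products of the received signal with delayed, stretched copies of the transmitted one) is exactly a CWT with the transmitted signal as the analyzing waveform. The pulse shape, scale grid, and delay grid below are illustrative choices only:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)

def transmitted(t):
    """Illustrative transmitted pulse: a Gaussian-windowed 80 Hz tone."""
    return np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2)) * np.cos(2 * np.pi * 80 * t)

s = transmitted(t)
# Received signal: a single scatterer delaying the pulse by 0.3 s.
r = np.roll(s, int(0.3 * fs))

def wideband_correlation(r, scales, delays):
    """CWT of r with respect to the transmitted signal: correlate r
    against scaled (stretched) and delayed copies of it."""
    out = np.zeros((len(scales), len(delays)))
    for i, a in enumerate(scales):
        for j, tau in enumerate(delays):
            template = transmitted((t - tau) / a) / np.sqrt(a)
            out[i, j] = np.dot(r, template) / fs
    return out

scales = np.array([0.9, 1.0, 1.1])
delays = np.arange(0.0, 0.6, 0.01)
W = wideband_correlation(r, scales, delays)
# The receiver output peaks at the true scale/delay of the scatterer.
i, j = np.unravel_index(np.argmax(np.abs(W)), W.shape)
```

The peak location recovers the scatterer's scale and delay, which is the sense in which the deconvolution of [1]–[3] is only an orthogonal projection onto the image of this transform.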
Abstract:
Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic inversion approaches, probabilistic inversion provides the full posterior probability density function of the saturation field and accounts for the uncertainties inherent in the petrophysical parameters relating the resistivity to saturation. In this study, the data are from benchtop ERT experiments conducted during gas injection into a quasi-2D brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. The saturation fields are estimated by Markov chain Monte Carlo inversion of the measured data and compared to independent saturation measurements from light transmission through the chamber. Different model parameterizations are evaluated in terms of the recovered saturation and petrophysical parameter values. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values in structural elements whose shape and location are assumed known or represented by an arbitrary Gaussian bell structure. Results show that the estimated saturation fields are in overall agreement with saturations measured by light transmission, but differ strongly in terms of parameter estimates, parameter uncertainties, and computational intensity. Discretization in the frequency domain (as in the discrete cosine transform parameterization) provides more accurate models at a lower computational cost compared to spatially discretized (Cartesian) models. A priori knowledge about the expected geologic structures allows for non-discretized model descriptions with markedly reduced degrees of freedom. Constraining the solutions to the known injected gas volume improved estimates of saturation and parameter values of the petrophysical relationship. (C) 2014 Elsevier B.V. All rights reserved.
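The discrete cosine transform parameterization mentioned above can be sketched directly: a smooth saturation field is represented by a small block of low-frequency DCT coefficients, drastically reducing the number of parameters an MCMC sampler would have to explore. The field, grid size, and retained block size are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth synthetic 2-D "saturation" field (illustrative only).
ny, nx = 32, 48
y, x = np.mgrid[0:ny, 0:nx]
field = np.exp(-((x - 30) ** 2 / 200.0 + (y - 10) ** 2 / 60.0))

# Keep only an 8 x 8 block of low-frequency DCT coefficients and zero
# the rest -- a stand-in for the low-dimensional parameterization.
coeffs = dctn(field, norm='ortho')
kept = np.zeros_like(coeffs)
kept[:8, :8] = coeffs[:8, :8]
reconstruction = idctn(kept, norm='ortho')

# 64 parameters instead of 32*48 = 1536 Cartesian cell values.
rel_err = np.linalg.norm(reconstruction - field) / np.linalg.norm(field)
n_params = 8 * 8
```

For smooth fields the truncation error is small, which is the intuition behind the reported accuracy-per-cost advantage over the Cartesian parameterization.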
Abstract:
In this paper we consider a stochastic process that may experience random reset events which suddenly bring the system back to the starting value, and we analyze the relevant statistical magnitudes. We focus our attention on monotonic continuous-time random walks with a constant drift: the process increases between the reset events, either by the effect of the random jumps or by the action of the deterministic drift. As a result of all these combined factors, interesting properties emerge, like the existence (for any drift strength) of a stationary transition probability density function, or the ability of the model to reproduce power-law-like behavior. General formulas for two extreme statistics, the survival probability and the mean exit time, are also derived. To corroborate the results of the paper in an independent way, Monte Carlo methods were used. These numerical estimations are in full agreement with the analytical predictions.
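The Monte Carlo corroboration the abstract mentions can be sketched as follows. The rates, jump law, and drift below are illustrative choices, not the paper's parameters; the point is only that resets keep the otherwise ever-growing process stationary:

```python
import numpy as np

def simulate_reset_process(t_max, drift=1.0, jump_rate=0.5, reset_rate=0.2,
                           rng=None):
    """One sample of a monotonic drift-jump process with random resets.

    Between events the position grows linearly with `drift`; jump
    events (rate `jump_rate`) add an exponential increment; reset
    events (rate `reset_rate`) send the position back to 0.  All
    rates and laws are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    t, x = 0.0, 0.0
    total = jump_rate + reset_rate
    while True:
        dt = rng.exponential(1.0 / total)
        if t + dt >= t_max:
            return x + drift * (t_max - t)
        t += dt
        x += drift * dt
        if rng.random() < reset_rate / total:
            x = 0.0                         # reset to the starting value
        else:
            x += rng.exponential(1.0)       # random monotonic jump

samples = np.array([simulate_reset_process(50.0, rng=i) for i in range(2000)])
# Resets make the long-time distribution stationary: the mean stays
# finite even though drift and jumps alone grow without bound.
mean_x = samples.mean()
```

With these rates the typical age since the last reset is 1/0.2 = 5, so the stationary mean is roughly (drift + jump_rate × mean jump) × 5 = 7.5.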
Abstract:
The liver is a key organ of metabolic homeostasis with functions that oscillate in response to food intake. Although liver and gut microbiome crosstalk has been reported, microbiome-mediated effects on peripheral circadian clocks and their output genes are less well known. Here, we report that germ-free (GF) mice display altered daily oscillation of clock gene expression with a concomitant change in the expression of clock output regulators. Mice exposed to microbes typically exhibit characterized activities of nuclear receptors, some of which (PPARα, LXRβ) regulate specific liver gene expression networks, but these activities are profoundly changed in GF mice. These alterations in microbiome-sensitive gene expression patterns are associated with daily alterations in lipid, glucose, and xenobiotic metabolism, protein turnover, and redox balance, as revealed by hepatic metabolome analyses. Moreover, at the systemic level, daily changes in the abundance of biomarkers such as HDL cholesterol, free fatty acids, FGF21, bilirubin, and lactate depend on the microbiome. Altogether, our results indicate that the microbiome is required for integration of liver clock oscillations that tune output activators and their effectors, thereby regulating metabolic gene expression for optimal liver function.
Abstract:
The most suitable method for estimation of size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, assuming that data come from a determinate family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using estimated parameters from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original size or log-transformed data, and size measurements with different dimensionality (lengths, areas, volumes, or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after data standardization by division by the sample geometric mean, arises as the most reliable and generalizable method of size diversity evaluation.
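The recommended recipe (geometric-mean standardization, then kernel estimation of the pdf, then the Shannon integral) can be sketched in a few lines. The log-normal sample and grid resolution are illustrative assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic individual sizes (e.g. body lengths), log-normal.
sizes = rng.lognormal(mean=1.0, sigma=0.6, size=500)

# Standardize by the sample geometric mean, as the study proposes.
geo_mean = np.exp(np.mean(np.log(sizes)))
std_sizes = sizes / geo_mean

# Kernel estimate of the pdf, then Shannon size diversity as the
# differential entropy  H = -integral p(x) ln p(x) dx  on a grid.
kde = gaussian_kde(std_sizes)
grid = np.linspace(std_sizes.min() / 2, std_sizes.max() * 1.5, 2000)
dx = grid[1] - grid[0]
p = np.clip(kde(grid), 1e-300, None)   # guard against log(0) underflow
H = -np.sum(p * np.log(p)) * dx
```

By construction the geometric mean of `std_sizes` is 1, which is what makes diversities from sizes of different dimensionality comparable after the ln k shift mentioned above.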
Abstract:
Diabetes is a rapidly increasing worldwide problem which is characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), which is one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For the training and ground truth estimation, the algorithm combines manual annotations of several experts, for which the best practices were experimentally selected. By assessing the algorithm's performance in experiments on colour space selection, illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is the benchmarking framework for eye fundus image analysis algorithms needed for the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database that comprises true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis, and it follows medical decision-making practice by providing protocols for image- and pixel-based evaluations.
During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases, and the final algorithm are made public on the web to set the baseline results for automatic detection of diabetic retinopathy. Although deviating from the general context of the thesis, a simple and effective optic disc localisation method is also presented. The optic disc localisation is discussed, since normal eye fundus structures are fundamental in the characterisation of DR.
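The one-class idea, modelling only the lesion class with a Gaussian mixture and accepting pixels whose estimated density is high enough, can be sketched with a minimal 1-D EM fit. The "colour feature" values, component count, and threshold rule are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np

def fit_gmm_1d(x, n_iter=200):
    """Minimal EM fit of a 1-D two-component Gaussian mixture."""
    mu = np.percentile(x, [25.0, 75.0])        # deterministic init
    var = np.full(2, x.var())
    w = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        d = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
             / np.sqrt(2 * np.pi * var))
        r = w * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

def gmm_density(x, w, mu, var):
    d = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
         / np.sqrt(2 * np.pi * var))
    return (d * w).sum(axis=1)

rng = np.random.default_rng(3)
# Hypothetical 1-D "lesion colour" feature with two sub-populations;
# training uses the lesion class only (one-class classification).
lesion = np.concatenate([rng.normal(0.2, 0.05, 300),
                         rng.normal(0.5, 0.08, 300)])
w, mu, var = fit_gmm_1d(lesion)

# Accept a new pixel as "lesion" when its density under the fitted
# mixture exceeds a threshold taken from the training data itself.
threshold = np.percentile(gmm_density(lesion, w, mu, var), 5)
decision = gmm_density(np.array([0.2, 0.9]), w, mu, var) > threshold
```

Sweeping the threshold instead of fixing it at one percentile is what produces the ROC curves used in the evaluation protocol described above.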
Abstract:
This Thesis discusses the phenomenology of the dynamics of open quantum systems marked by non-Markovian memory effects. Non-Markovian open quantum systems are the focal point of a flurry of recent research aiming to answer, e.g., the following questions: What is the characteristic trait of non-Markovian dynamical processes that discriminates them from forgetful Markovian dynamics? What is the microscopic origin of memory in quantum dynamics, and how can it be controlled? Does the existence of memory effects open new avenues and enable accomplishments that cannot be achieved with Markovian processes? These questions are addressed in the publications forming the core of this Thesis with case studies of both prototypical and more exotic models of open quantum systems. In the first part of the Thesis several ways of characterizing and quantifying non-Markovian phenomena are introduced. Their differences are then explored using a driven, dissipative qubit model. The second part of the Thesis focuses on the dynamics of a purely dephasing qubit model, which is used to unveil the origin of non-Markovianity for a wide class of dynamical models. The emergence of memory is shown to be strongly intertwined with the structure of the spectral density function, as further demonstrated in a physical realization of the dephasing model using ultracold quantum gases. Finally, as an application of memory effects, it is shown that non-Markovian dynamical processes facilitate a novel phenomenon of time-invariant discord, where the total quantum correlations of a system are frozen to their initial value. Non-Markovianity can also be exploited in the detection of phase transitions using quantum information probes, as shown using the physically interesting models of the Ising chain in a transverse field and a Coulomb chain undergoing a structural phase transition.
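One standard way of quantifying non-Markovianity for a dephasing qubit, the trace-distance (BLP-type) measure, is easy to sketch: for pure dephasing with decoherence function f(t), the trace distance between the optimal antipodal state pair is D(t) = |f(t)|, and memory effects show up as revivals of D. The damped-oscillating f(t) below is an assumed illustrative form, not a result from the Thesis:

```python
import numpy as np

# Pure-dephasing qubit: coherences are multiplied by a decoherence
# function f(t); for the optimal state pair the trace distance is
# D(t) = |f(t)|.  A damped-oscillating f(t) is assumed here purely
# for illustration (structured-environment-like behaviour).
t = np.linspace(0.0, 10.0, 2001)
f = np.exp(-0.3 * t) * np.cos(1.5 * t)
D = np.abs(f)

# BLP-type measure: sum only the intervals where D increases, i.e.
# where information flows back from the environment to the system.
dD = np.diff(D)
N_blp = dD[dD > 0].sum()

# A monotonically decaying f(t) (Markovian-like) shows no revivals.
D_markov = np.exp(-0.3 * t)
dDm = np.diff(D_markov)
N_markov = dDm[dDm > 0].sum()
```

The contrast between the two totals is the discriminating trait the first question above asks about: a forgetful process never lets D(t) grow back.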
Abstract:
Since the times preceding the Second World War the subject of aircraft tracking has been a core interest to both military and non-military aviation. During subsequent years both technology and configuration of the radars allowed the users to deploy them in numerous fields, such as over-the-horizon radar, ballistic missile early warning systems, or forward scatter fences. The latter was arranged in a bistatic configuration. The bistatic radar has continuously re-emerged over the last eighty years for its intriguing capabilities and challenging configuration and formulation. The bistatic radar arrangement is used as the basis of all the analyses presented in this work. The aircraft tracking method using VHF Doppler-only information, developed in the first part of this study, is based solely on Doppler frequency readings in relation to the time instances of their appearance. The corresponding inverse problem is solved by utilising a multistatic radar scenario with two receivers and one transmitter and using their frequency readings as a base for aircraft trajectory estimation. The quality of the resulting trajectory is then compared with ground-truth information based on ADS-B data. The second part of the study deals with the development of a method for instantaneous Doppler curve extraction from within a VHF time-frequency representation of the transmitted signal, with a three-receiver, one-transmitter configuration, based on a priori knowledge of the probability density function of the first-order derivative of the Doppler shift, and on a system of blocks for identifying, classifying and predicting the Doppler signal. The extraction capabilities of this set-up are tested with a recorded TV signal and simulated synthetic spectrograms. Further analyses are devoted to more comprehensive testing of the capabilities of the extraction method.
Besides testing the method, aircraft classification is performed on the extracted Bistatic Radar Cross Section profiles and on the correlations between them for different types of aircraft. In order to properly estimate the profiles, the ADS-B aircraft location information is adjusted based on the extracted Doppler frequency and then used for Bistatic Radar Cross Section estimation. The classification is based on seven types of aircraft grouped by their size into three classes.
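The forward model underlying Doppler-only tracking is the bistatic Doppler equation, f_D(t) = -(1/λ) d/dt [|p(t) − Tx| + |p(t) − Rx|]. A minimal sketch for one transmitter-receiver pair and a straight, level track follows; the carrier frequency, baseline, and aircraft kinematics are illustrative, not taken from the study:

```python
import numpy as np

c = 3e8
f_c = 100e6                       # illustrative VHF carrier (e.g. FM/TV)
lam = c / f_c                     # 3 m wavelength

tx = np.array([0.0, 0.0, 0.0])    # transmitter
rx = np.array([50e3, 0.0, 0.0])   # receiver, 50 km baseline

t = np.linspace(-60.0, 60.0, 1201)        # seconds
# Aircraft at 10 km altitude crossing the baseline midpoint at t = 0,
# flying along y at 200 m/s (all values illustrative).
pos = np.stack([np.full_like(t, 25e3), 200.0 * t, np.full_like(t, 10e3)],
               axis=1)

# Bistatic range and its (numerical) time derivative give the Doppler.
bistatic_range = (np.linalg.norm(pos - tx, axis=1)
                  + np.linalg.norm(pos - rx, axis=1))
f_doppler = -np.gradient(bistatic_range, t) / lam
```

The inverse problem the study solves runs this mapping backwards: given `f_doppler` readings from several receivers, estimate the trajectory `pos`. Note the characteristic zero crossing of f_D as the aircraft crosses the baseline's perpendicular bisector.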
Abstract:
Even though frequency analysis of body sway is widely applied in clinical studies, the lack of standardized procedures concerning power spectrum estimation may produce unreliable descriptors. Stabilometric tests were applied to 35 subjects (20-51 years, 54-95 kg, 1.6-1.9 m) and the power spectral density function was estimated for the anterior-posterior center of pressure time series. The median frequency was compared between power spectra estimated according to signal partitioning, sampling rate, test duration, and detrending methods. The median frequency reliability for different test durations was assessed using the intraclass correlation coefficient. When increasing the number of segments, shortening the test duration, or applying linear detrending, the median frequency values increased significantly, by up to 137%. Even the shortest test duration provided reliable estimates, as observed with the intraclass coefficient (0.74-0.89 confidence interval for a single 20-s test). Clinical assessment of balance may benefit from a standardized protocol for center of pressure spectral analysis that provides an adequate relationship between resolution and variance. An algorithm to estimate the center of pressure power density spectrum is also proposed.
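The descriptor at stake, the median frequency of a Welch power spectral density, can be sketched with explicit, reported estimation parameters (segment length, detrending), which is the kind of standardization the study argues for. The synthetic centre-of-pressure trace and parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)

fs = 100.0                        # Hz, an assumed force-plate sampling rate
t = np.arange(0, 60.0, 1 / fs)    # a 60-s stabilometric test
# Synthetic anterior-posterior COP trace: slow sway components + noise.
cop = (0.5 * np.sin(2 * np.pi * 0.3 * t)
       + 0.2 * np.sin(2 * np.pi * 1.0 * t)
       + 0.05 * rng.standard_normal(t.size))

# Welch PSD with explicit segment length and linear detrending --
# exactly the choices the study shows can shift the median frequency.
freqs, psd = welch(cop, fs=fs, nperseg=1024, detrend='linear')

# Median frequency: frequency below which half the total power lies.
cum = np.cumsum(psd)
median_freq = freqs[np.searchsorted(cum, 0.5 * cum[-1])]
```

Re-running this with a different `nperseg` or `detrend` changes `median_freq`, which is why reporting those parameters matters for between-study comparability.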
Abstract:
This thesis investigated the modulation of dynamic contractile function and energetics of work by posttetanic potentiation (PTP). Mechanical experiments were conducted in vitro using software-controlled protocols to stimulate/determine contractile function during ramp shortening, and muscles were frozen during parallel incubations for biochemical analysis. The central feature of this research was the comparison of fast hindlimb muscles from wildtype and skeletal myosin light chain kinase knockout (skMLCK-/-) mice, which do not express the primary mechanism for PTP: myosin regulatory light chain (RLC) phosphorylation. In contrast to smooth/cardiac muscles, where RLC phosphorylation is indispensable, its precise physiological role in skeletal muscle is unclear. It was initially determined that tetanic potentiation was shortening-speed dependent, and this sensitivity of the PTP mechanism to muscle shortening extended the stimulation frequency domain over which PTP was manifest. Thus, the physiological utility of RLC phosphorylation to augment contractile function in vivo may be more extensive than previously considered. Subsequent experiments studied the contraction-type dependence of PTP and demonstrated that the enhancement of contractile function was dependent on force level. Surprisingly, in the absence of RLC phosphorylation, skMLCK-/- muscles exhibited significant concentric PTP; consequently, up to ~50% of the dynamic PTP response in wildtype muscle may be attributed to an alternate mechanism. When the interaction of PTP and the catchlike property (CLP) was examined, we determined that unlike the acute augmentation of peak force by the CLP, RLC phosphorylation produced a longer-lasting enhancement of force and work in the potentiated state. Nevertheless, despite the apparent interference between these mechanisms, both offer physiological utility and may be complementary in achieving optimal contractile function in vivo.
Finally, when the energetic implications of PTP were explored, we determined that during a brief period of repetitive concentric activation, total work performed was ~60% greater in wildtype vs. skMLCK-/- muscles, but there was no genotype difference in high-energy phosphate consumption (HEPC) or economy (i.e., the HEPC:work ratio). In summary, this thesis provides novel insight into the modulatory effects of PTP and RLC phosphorylation, and through the observation of alternative mechanisms for PTP we further develop our understanding of the history-dependence of fast skeletal muscle function.
Abstract:
We present a new simulation approach for the joint density function of the surplus prior to ruin and the deficit at ruin, for risk models driven by Lévy subordinators. This approach is inspired by the ladder-height decomposition of the ruin probability in the classical model. That model, driven by a compound Poisson process, is a special case of the more general model driven by a subordinator, to which the ladder-height decomposition of the ruin probability also applies. The expected discounted penalty function, also known as the Gerber-Shiu function (GS function), introduced a unifying approach to the study of quantities related to the ruin event. The ruin probability and the joint density function of the surplus prior to ruin and the deficit at ruin are special cases of the GS function. Expressions for these two quantities can be found in the literature, but they are difficult to exploit because they take the form of infinite series of convolutions without closed analytical forms. However, since they are derived from the GS function, the expressions for the two quantities share a certain resemblance, which allows us to draw on the ladder-height decomposition of the ruin probability to derive a simulation approach for this joint density function. We present a detailed introduction to the risk models studied in this thesis for which the simulation can be carried out. To motivate this work, we briefly introduce the broad field of risk measures and compute some of them for these risk models. This work contributes to a better understanding of the behaviour of subordinator-driven risk models in the face of ruin, since it provides a numerical point of view absent from the literature.
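In the compound Poisson special case mentioned above, the pair (surplus prior to ruin, deficit at ruin) can be sampled by direct Monte Carlo, which gives a feel for the joint density the thesis simulates. Exponential claims, the premium rate, and the time horizon below are illustrative assumptions only:

```python
import numpy as np

def surplus_deficit_at_ruin(u=2.0, c=1.05, lam=1.0, mean_claim=1.0,
                            t_max=5_000.0, rng=None):
    """One path of the classical compound Poisson risk model.

    Returns (surplus just before ruin, deficit at ruin), or None if
    ruin does not occur before t_max.  Exponential claims and these
    parameter values are illustrative choices.
    """
    rng = np.random.default_rng(rng)
    t, x = 0.0, u
    while t < t_max:
        w = rng.exponential(1.0 / lam)   # inter-claim waiting time
        t += w
        x += c * w                       # premiums accrue between claims
        claim = rng.exponential(mean_claim)
        if claim > x:
            return x, claim - x          # (surplus before, deficit at ruin)
        x -= claim
    return None

results = [surplus_deficit_at_ruin(rng=i) for i in range(500)]
ruined = [r for r in results if r is not None]
ruin_prob = len(ruined) / len(results)
```

With a 5% safety loading and initial capital u = 2, the exact ruin probability for exponential claims is (1/1.05)·exp(-0.05·2/1.05) ≈ 0.87, so most paths contribute a (surplus, deficit) sample; a 2-D histogram of `ruined` approximates the joint density of interest.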