904 results for "Glomerular filtration rate estimation"
Abstract:
Background A pandemic strain of influenza A spread rapidly around the world in 2009, now referred to as pandemic (H1N1) 2009. This study aimed to examine the spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 associated with changes in local socio-environmental conditions from May 7 to December 31, 2009, at the postal-area level in Queensland, Australia. Method We used data on laboratory-confirmed H1N1 cases to examine the spatiotemporal dynamics of transmission using a flexible Bayesian, space-time, Susceptible-Infected-Recovered (SIR) modelling approach. The model incorporated parameters describing spatiotemporal variation in H1N1 infection and local socio-environmental factors. Results The weekly transmission rate of pandemic (H1N1) 2009 was negatively associated with the weekly area-mean maximum temperature at a lag of 1 week (LMXT) (posterior mean: −0.341; 95% credible interval (CI): −0.370 to −0.311) and with the socio-economic index for area (SEIFA) (posterior mean: −0.003; 95% CI: −0.004 to −0.001), and was positively associated with the product of LMXT and the weekly area-mean vapour pressure at a lag of 1 week (LVAP) (posterior mean: 0.008; 95% CI: 0.007 to 0.009). There was substantial spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 across Queensland over the epidemic period. Large random effects on the estimated transmission rates were apparent in remote areas and in some postal areas with a higher proportion of Indigenous people and smaller overall populations. Conclusions Local SEIFA and local atmospheric conditions were associated with the transmission rate of pandemic (H1N1) 2009. The more populated regions displayed consistent and synchronized epidemics with low average transmission rates. The less populated regions had high average transmission rates with more variation during the H1N1 epidemic period.
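The space-time SIR approach can be made concrete with a minimal deterministic sketch. The weekly time step, the log-linear temperature link, and every numeric value below are illustrative assumptions (only the sign of the temperature coefficient mirrors the result above); the paper's actual model is Bayesian and spatially structured.

```python
import math

def simulate_sir(beta_t, gamma, s0, i0, r0, weeks):
    """Discrete-time SIR simulation with a time-varying transmission rate.

    beta_t: function mapping week index -> transmission rate
    gamma:  weekly recovery rate
    """
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    history = []
    for t in range(weeks):
        new_inf = beta_t(t) * s * i / n      # mass-action incidence
        new_inf = min(new_inf, s)            # cannot infect more than S
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history

# Hypothetical covariate effect: log-linear link with a negative
# temperature coefficient, loosely echoing the sign reported above.
def beta_from_temp(t, b0=-0.5, b_temp=-0.341, temp=lambda t: 0.05 * t):
    return math.exp(b0 + b_temp * temp(t))

hist = simulate_sir(lambda t: beta_from_temp(t), gamma=0.7,
                    s0=10_000, i0=10, r0=0, weeks=20)
```

The deterministic update is only the skeleton; the paper places priors on the regression coefficients and adds spatial random effects per postal area.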
Abstract:
A nonlinear control design approach is presented in this paper for the challenging problem of ensuring robust performance of an air-breathing engine operating at supersonic speed. The primary objective of the control design is to ensure that the engine produces the required thrust, tracking the commanded thrust as closely as possible through appropriate regulation of the fuel flow rate. However, since the engine operates in the supersonic range, an important secondary objective is to ensure an optimal location of the shock in the intake for maximum pressure recovery with a sufficient margin; this is achieved by varying the throat area of the nozzle. The nonlinear dynamic inversion technique has been successfully used to achieve both of the above objectives. Because the process is faster than the actuators in this problem, independent control designs have also been carried out for the actuators to assure satisfactory performance of the system. Moreover, an extended Kalman filter based state estimation design has been carried out both to filter out process and sensor noise and to allow the control design to operate on output feedback. Promising simulation results indicate that the proposed control design approach is quite successful in obtaining robust performance of the air-breathing system.
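The core of dynamic inversion can be sketched for a scalar system. The plant functions f and g below are hypothetical stand-ins, not the engine model, and first-order error dynamics are the simplest possible choice:

```python
def ndi_control(x, x_ref, f, g, k=2.0):
    """Nonlinear dynamic inversion for a scalar plant x_dot = f(x) + g(x)*u.

    Choose u so the closed loop becomes x_dot = -k*(x - x_ref):
        u = (v - f(x)) / g(x),  with  v = -k*(x - x_ref)
    """
    v = -k * (x - x_ref)
    return (v - f(x)) / g(x)

# Illustration on a toy nonlinear plant (not the engine dynamics):
f = lambda x: -x + 0.5 * x**2
g = lambda x: 1.0 + 0.1 * x**2

x, x_ref, dt = 0.0, 1.0, 0.01
for _ in range(2000):
    u = ndi_control(x, x_ref, f, g)
    x += dt * (f(x) + g(x) * u)     # Euler integration of the closed loop
```

By construction the inversion cancels the plant nonlinearity exactly, so the state converges geometrically to the reference; the paper's setting adds actuator dynamics and output feedback via an EKF on top of this idea.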
Abstract:
The glomerular epithelial cells and their intercellular junctions, termed slit diaphragms, are essential components of the filtration barrier in the kidney glomerulus. Nephrin is a transmembrane adhesion protein of the slit diaphragm and a signalling molecule regulating podocyte physiology. In congenital nephrotic syndrome of the Finnish type, mutation of nephrin leads to disruption of the permeability barrier and leakage of plasma proteins into the urine. This doctoral thesis hypothesises that novel nephrin-associated molecules are involved in the function of the filtration barrier in health and disease. Bioinformatics tools were utilized to identify novel nephrin-like molecules in genomic databases, and their distribution in the kidney and other tissues was investigated. Filtrin, a novel nephrin homologue, is expressed in the glomerular podocytes and, according to immunoelectron microscopy, localizes at the slit diaphragm. Interestingly, the nephrin and filtrin genes, NPHS1 and KIRREL2, lie in a head-to-head orientation on chromosome 19q13.12. Another nephrin-like molecule, Nphs1as, was cloned in mouse; no expression was detected in the kidney, but expression was found in the brain and lymphoid tissue. Notably, Nphs1as is transcribed from the nephrin locus in an antisense orientation. The glomerular mRNA and protein levels of filtrin were measured in kidney biopsies of patients with proteinuric diseases, and a marked reduction of filtrin mRNA levels was detected in the proteinuric samples as compared to controls. In addition, altered distribution of filtrin in injured glomeruli was observed, with the most prominent decrease in expression seen in focal segmental glomerulosclerosis. The role of slit diaphragm-associated genes in the development of diabetic nephropathy was investigated by analysing single nucleotide polymorphisms.
The genes encoding filtrin, densin-180, NEPH1, podocin, and alpha-actinin-4 were analysed, and polymorphisms in the alpha-actinin-4 gene were associated with diabetic nephropathy in a gender-dependent manner. Filtrin is a novel podocyte-expressed protein with localization at the slit diaphragm, and the downregulation of filtrin seems to be characteristic of human proteinuric diseases. Given the crucial role of nephrin in the glomerular filter, filtrin appears to be a potential candidate molecule for proteinuria. Although not expressed in the kidney, the nephrin antisense transcript Nphs1as may regulate the expression of nephrin in extrarenal tissues. The genetic association analysis suggested that the alpha-actinin-4 gene, encoding an actin-filament cross-linking protein of the podocytes, may contribute to susceptibility to diabetic nephropathy.
Abstract:
We provide analytical models for capacity evaluation of an infrastructure IEEE 802.11 based network carrying TCP-controlled file downloads or full-duplex packet telephone calls. In each case, the analytical models utilize the attempt probabilities from a well-known fixed-point based saturation analysis. For TCP-controlled file downloads, following Bruno et al. (In Networking '04, LNCS 2042, pp. 626-637), we model the number of wireless stations (STAs) with ACKs as a Markov renewal process embedded at packet success instants. In our work, analysis of the evolution between the embedded instants is done by using saturation analysis to provide state-dependent attempt probabilities. We show that, in spite of its simplicity, our model works well, by comparing various simulated quantities, such as collision probability, with values predicted from our model. Next we consider N constant-bit-rate VoIP calls terminating at N STAs. We model the number of STAs that have an up-link voice packet as a Markov renewal process embedded at so-called channel slot boundaries. Analysis of the evolution over a channel slot is done using saturation analysis as before. We find that again the AP is the bottleneck, and the system can support (in the sense of a bound on the probability of delay exceeding a given value) a number of calls less than that at which the arrival rate into the AP exceeds the average service rate applied to the AP. Finally, we extend the analytical model for VoIP calls to determine the call capacity of an 802.11b WLAN in a situation where VoIP calls originate from two different types of codecs. We consider N1 calls originating from Type 1 codecs and N2 calls originating from Type 2 codecs. For G.711 and G.729 voice codecs, we show that the analytical model again provides accurate results in comparison with simulations.
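The well-known saturation fixed point referred to above can be sketched in its Bianchi-style form. The backoff parameters W and m and the damped iteration are illustrative choices, not the exact analysis of the paper:

```python
def bianchi_fixed_point(n, W=32, m=5, tol=1e-12, iters=10_000):
    """Solve the 802.11 DCF saturation fixed point by damped iteration.

    tau: per-slot attempt probability of a saturated station
    p:   conditional collision probability, p = 1 - (1 - tau)^(n-1)
    """
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau_new = (2.0 * (1.0 - 2.0 * p)
                   / ((1.0 - 2.0 * p) * (W + 1)
                      + p * W * (1.0 - (2.0 * p) ** m)))
        if abs(tau_new - tau) < tol:
            tau = tau_new
            break
        tau = 0.5 * tau + 0.5 * tau_new   # damped update for stability
    return tau, 1.0 - (1.0 - tau) ** (n - 1)

tau10, p10 = bianchi_fixed_point(10)
```

The paper embeds such state-dependent attempt probabilities into a Markov renewal analysis; the fixed point itself is the common building block.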
Abstract:
This paper deals with the development of simplified semi-empirical relations for predicting the residual velocities of small-calibre projectiles impacting mild steel target plates, normally or at an angle, and the ballistic limits of such plates. It has been shown, for several impact cases for which test results on perforation of mild steel plates are available, that most of the existing semi-empirical relations, which are applicable only to normal projectile impact, do not yield satisfactory estimates of residual velocity. Furthermore, it is difficult to quantify some of the empirical parameters present in these relations for a given problem. With an eye towards simplicity and ease of use, two new regression-based relations employing standard material parameters are proposed here for predicting residual velocity and ballistic limit for both normal and oblique impact. The two expressions differ in their use of quasi-static or strain rate-dependent average plate material strength. Residual velocities yielded by the present semi-empirical models compare well with the experimental results. Additionally, ballistic limits from these relations show close correlation with the corresponding finite element-based predictions.
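For reference, the classic Lambert-Jonas form, on which many such semi-empirical residual-velocity relations build, can be sketched as follows; the fit constants a and p are placeholders, not values from this work:

```python
def lambert_jonas_residual(v_impact, v_ballistic_limit, a=1.0, p=2.0):
    """Classic Lambert-Jonas relation for residual velocity:

        v_r = a * (v_i^p - v_bl^p)^(1/p)   for v_i > v_bl

    a and p are empirical fit constants (placeholders here); p = 2
    corresponds to a simple energy-balance argument.
    """
    if v_impact <= v_ballistic_limit:
        return 0.0   # projectile stopped by the plate
    return a * (v_impact ** p - v_ballistic_limit ** p) ** (1.0 / p)
```

The relations proposed in the paper replace the free constants with standard material parameters and extend the form to oblique impact.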
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based, model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. But when estimating class frequencies, the study variable is binary or polytomous, so logistic-type assisting models (e.g. logistic or probit models) should be preferred over the linear one. However, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG, especially if the domain sample size is small or the assisting model is strong.
Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample fit model and the census fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
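As a concrete anchor for the discussion, a GREG-lin point estimate of a population total can be sketched as below. The single-auxiliary linear assisting model and the hand-rolled solver are simplifications of the general formulation:

```python
def greg_total(y_s, x_s, pi_s, x_U):
    """Design-based GREG estimator of a population total with a simple
    linear assisting model y ~ b0 + b1*x (GREG-lin):

        Y_hat = sum_U (b0 + b1*x_k) + sum_s (y_k - b0 - b1*x_k) / pi_k

    y_s, x_s, pi_s: sample values and inclusion probabilities
    x_U:            auxiliary values for the whole population
    """
    w = [1.0 / p for p in pi_s]                  # design weights
    # survey-weighted least squares (2x2 normal equations, solved by hand)
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x_s))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x_s))
    swy = sum(wi * yi for wi, yi in zip(w, y_s))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x_s, y_s))
    det = sw * swxx - swx * swx
    b0 = (swy * swxx - swx * swxy) / det
    b1 = (sw * swxy - swx * swy) / det
    fitted_total = sum(b0 + b1 * xk for xk in x_U)
    ht_residual = sum(wi * (yi - b0 - b1 * xi)
                      for wi, yi, xi in zip(w, y_s, x_s))
    return fitted_total + ht_residual

# Toy check: when y is exactly linear in x, GREG recovers the true total.
pop_x = list(range(1, 11))
pop_y = [2 + 3 * x for x in pop_x]
sample = [0, 3, 7]
estimate = greg_total([pop_y[i] for i in sample],
                      [pop_x[i] for i in sample],
                      [0.3] * len(sample), pop_x)
```

An L-GREG estimator has the same outer structure but replaces the linear fit with a logistic-type assisting model for a binary or polytomous study variable.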
Abstract:
Long-range transport of continental dust makes these particles a significant constituent even at locations far from their sources. It is important to study the temporal variations in dust loading over desert regions, and the role of meteorology, in order to assess their radiative impact. In this paper, infrared radiance (10.5–12.5 μm) acquired by the METEOSAT-5 satellite (~5 km resolution) during 1999 and 2003 was used to derive the Infrared Difference Dust Index (IDDI), quantify the wind dependence of dust aerosols, and estimate the radiative forcing. Our analysis shows that the frequency of occurrence of dust events was higher during 2003 than during 1999. Since the dust production function depends mainly on the surface wind speed over regions that are dry and without vegetation, the role of surface wind on the IDDI was examined in detail. The increase of the IDDI with wind speed was found to be nearly linear, and the rate of increase was higher during 2003 than during 1999. It was also observed that over the Indian desert, when wind speed is highest during the monsoon months (June to August), the dust production rate is lower because of higher soil moisture (due to monsoon rainfall). Over the Arabian deserts, when the wind speed is highest during June to August, the dust production rate is also highest, as soil moisture is lowest during this season. Although the reason why 2003 had a greater number of dust events cannot be stated precisely, examination of monthly mean soil moisture at the source regions indicates that the occurrence of high winds simultaneous with high soil moisture could be the reason for the decreased dust production efficiency in 1999. It appears that the deserts of Northwest India are more efficient dust sources than the deserts of Saudi Arabia and Northeast Africa (excluding the Sahara).
The radiative impact of dust over various source regions is estimated, and the regionally and annually averaged top-of-the-atmosphere dust radiative forcing (shortwave, clear-sky, and over land) over the entire study region (0–35°N, 30–100°E) was in the range of −0.9 to +4.5 W m⁻². The corresponding values at the surface were in the range of −10 to −25 W m⁻². Our studies demonstrate that neglecting the diurnal variation of dust can cause errors in the estimation of longwave dust forcing by as much as 50 to 100%, and nighttime retrieval of dust can significantly reduce the uncertainties. A method to retrieve dust aerosols during nighttime is proposed. The regionally and annually averaged longwave dust radiative forcing was +3.4 ± 1.6 W m⁻².
Abstract:
In this paper, we propose a training-based channel estimation scheme for large non-orthogonal space-time block coded (STBC) MIMO systems. The proposed scheme employs a block transmission strategy in which an N_t × N_t pilot matrix is sent (for training purposes) followed by several N_t × N_t square data STBC matrices, where N_t is the number of transmit antennas. At the receiver, we iterate between channel estimation (using an MMSE estimator) and detection (using a low-complexity likelihood ascent search (LAS) detector) until convergence or for a fixed number of iterations. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed scheme at low complexity. The fact that we could show such good results for large STBCs (e.g., the 16 × 16 STBC from cyclic division algebras) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads of pilot-based channel estimation and turbo coding) establishes the effectiveness of the proposed scheme.
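A minimal sketch of the MMSE step, reduced to a scalar flat-fading channel for clarity; the paper's estimator operates on N_t × N_t pilot matrices, and the channel prior variance below is an assumed parameter:

```python
def mmse_channel_estimate(pilots, received, noise_var, prior_var=1.0):
    """MMSE estimate of a scalar flat-fading channel h from known pilots:

        y_i   = h * x_i + n_i
        h_hat = sum(conj(x_i) * y_i) / (sum(|x_i|^2) + noise_var / prior_var)

    With zero noise this reduces to a least-squares estimate; with noise,
    the prior shrinks the estimate toward zero.
    """
    num = sum(x.conjugate() * y for x, y in zip(pilots, received))
    den = sum(abs(x) ** 2 for x in pilots) + noise_var / prior_var
    return num / den

# Noiseless sanity check: the estimator recovers the channel exactly.
h_true = 0.7 + 0.2j
pilots = [1, -1, 1j, -1j]
received = [h_true * x for x in pilots]
h_hat = mmse_channel_estimate(pilots, received, noise_var=0.0)
```

In the iterative receiver, detected data symbols are fed back as additional "pilots" in the next round of estimation.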
Abstract:
This thesis is composed of an introductory chapter and four applications, each constituting a chapter of its own. The common element underlying the chapters is the econometric methodology: the applications rely mostly on leading econometric techniques for the estimation of causal effects. The first chapter introduces the econometric techniques that are employed in the remaining chapters. Chapter 2 studies the effects of shocking news on student performance. It exploits the fact that the school shooting in Kauhajoki in 2008 coincided with the matriculation examination period of that fall. It shows that the performance of men declined due to the news of the school shooting; no similar pattern was observed for women. Chapter 3 studies the effects of the minimum wage on employment by employing the original Card and Krueger (1994; CK) and Neumark and Wascher (2000; NW) data together with the changes-in-changes (CIC) estimator. Its main result is that the employment effect of an increase in the minimum wage is positive for small fast-food restaurants and negative for big fast-food restaurants. It thereby shows that the controversial positive employment effect reported by CK is overturned for big fast-food restaurants, and that the NW data, in contrast to their original results, provide support for the positive employment effect. Chapter 4 employs state-specific U.S. data on traffic fatalities (collected by Cohen and Einav [2003; CE]) to re-evaluate the effects of seat belt laws on traffic fatalities using the CIC estimator. It confirms the CE results that, on average, implementation of a mandatory seat belt law results in an increase in the seat belt usage rate and a decrease in the total fatality rate. In contrast to CE, it also finds evidence for the compensating-behavior theory, observed especially in states along the U.S. border.
Chapter 5 studies life cycle consumption in Finland, with special attention to the baby boomers and older households. It shows that the baby boomers smooth their consumption over the life cycle more than other generations do. It also shows that older households smoothed their life cycle consumption more, as a result of the recession in the 1990s, than younger households did.
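The changes-in-changes estimator used in Chapters 3 and 4 generalizes the familiar difference-in-differences baseline, which for group means reduces to the sketch below (toy numbers, not the CK/NW or CE datasets):

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Plain difference-in-differences effect estimate on group means:

        effect = (mean(treat_post) - mean(treat_pre))
               - (mean(ctrl_post)  - mean(ctrl_pre))

    The changes-in-changes (CIC) estimator generalises this from means to
    the entire outcome distribution.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(treat_post) - mean(treat_pre))
            - (mean(ctrl_post) - mean(ctrl_pre)))

# Toy illustration with hypothetical outcomes:
effect = diff_in_diff([10, 12], [15, 17], [8, 10], [10, 12])
```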
Abstract:
The majority of Internet traffic uses the Transmission Control Protocol (TCP) as its transport-level protocol. TCP provides a reliable, ordered byte stream to applications. However, applications such as live video streaming place an emphasis on timeliness over reliability, and a smooth sending rate can be preferable to sharp changes in the sending rate. For such applications TCP is not necessarily suitable. Rate control attempts to address the demands of these applications. An important design feature in all rate control mechanisms is TCP friendliness: they should not negatively impact TCP performance, since TCP is still the dominant protocol. Rate control mechanisms fall into two classes: window-based mechanisms and rate-based mechanisms. Window-based mechanisms increase their sending rate after a successful transfer of a window of packets, similar to TCP, and typically decrease their sending rate sharply after a packet loss. Rate-based solutions control their sending rate in some other way. A large subset of rate-based solutions are called equation-based solutions; these have a control equation which provides an allowed sending rate. Typically, rate-based solutions react more slowly to both packet losses and increases in available bandwidth, making their sending rate smoother than that of window-based solutions. This report contains a survey of rate control mechanisms and a discussion of their relative strengths and weaknesses. A section is dedicated to enhancements in wireless environments. Another topic of the report is bandwidth estimation, which is divided into capacity estimation and available bandwidth estimation. We describe techniques that enable the calculation of a fair sending rate and that can be used to create novel rate control mechanisms.
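A concrete example of a control equation is the TCP throughput formula used by TFRC-style equation-based mechanisms, shown here with the common t_RTO = 4·RTT simplification; the parameter names follow the usual formulation, not this report:

```python
import math

def tcp_friendly_rate(s, rtt, p, t_rto=None, b=1):
    """TCP throughput equation used by equation-based rate control (TFRC):

        X = s / ( R*sqrt(2*b*p/3)
                + t_RTO * 3*sqrt(3*b*p/8) * p * (1 + 32*p^2) )

    s: packet size (bytes), rtt: round-trip time R (s),
    p: loss event rate (must be > 0), b: packets acknowledged per ACK.
    Returns the allowed sending rate in bytes per second.
    """
    if t_rto is None:
        t_rto = 4 * rtt                      # common simplification
    denom = (rtt * math.sqrt(2 * b * p / 3)
             + t_rto * 3 * math.sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))
    return s / denom
```

A sender measuring its loss event rate and RTT caps its rate at X, which keeps it roughly fair to competing TCP flows.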
Abstract:
The impulse response of a typical wireless multipath channel can be modeled as a tapped delay line filter whose non-zero components are sparse relative to the channel delay spread. In this paper, a novel method of estimating such sparse multipath fading channels for OFDM systems is explored. In particular, Sparse Bayesian Learning (SBL) techniques are applied to jointly estimate the sparse channel and its second-order statistics, and a new Bayesian Cramér-Rao bound is derived for the SBL algorithm. Further, in the context of OFDM channel estimation, an enhancement to the SBL algorithm is proposed, which uses an Expectation Maximization (EM) framework to jointly estimate the sparse channel, the unknown data symbols, and the second-order statistics of the channel. The EM-SBL algorithm is able to recover the support as well as the channel taps more efficiently, and/or using fewer pilot symbols, than the SBL algorithm. To further improve the performance of EM-SBL, a threshold-based pruning of the estimated second-order statistics that are input to the algorithm is proposed, and its mean square error and symbol error rate performance are illustrated through Monte Carlo simulations. Thus, the algorithms proposed in this paper are capable of obtaining efficient sparse channel estimates even in the presence of a small number of pilots.
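To make the SBL iteration concrete, here is its EM form specialised to the much simpler case of an orthonormal dictionary, where the posterior factorises per coefficient; the general algorithm in the paper works with full covariance matrices and jointly estimates the data symbols as well:

```python
def sbl_orthonormal(phi_t_y, noise_var, iters=100):
    """Sparse Bayesian Learning (EM form) for an orthonormal dictionary.

    With Phi^T Phi = I the posterior factorises per coefficient:

        mu_i    = gamma_i / (gamma_i + sigma^2) * (Phi^T y)_i
        Sigma_i = sigma^2 * gamma_i / (gamma_i + sigma^2)
        gamma_i <- mu_i^2 + Sigma_i        (EM hyperparameter update)

    Hyperparameters gamma_i that shrink toward zero mark coefficients
    pruned from the model, which is how sparsity emerges.
    """
    gammas = [1.0] * len(phi_t_y)
    mus = [0.0] * len(phi_t_y)
    for _ in range(iters):
        for i, z in enumerate(phi_t_y):
            g = gammas[i]
            mu = g / (g + noise_var) * z
            var = noise_var * g / (g + noise_var)
            mus[i] = mu
            gammas[i] = mu * mu + var
    return mus, gammas

# Two strong coefficients and two negligible ones (illustrative data):
mus, gammas = sbl_orthonormal([5.0, 0.01, 0.0, -4.0], noise_var=0.1)
```

The threshold-based pruning mentioned above corresponds to discarding coefficients whose converged gamma falls below a cutoff.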
Abstract:
Interest in low bit rate video coding has increased considerably. Despite rapid progress in storage density and digital communication system performance, demand for data-transmission bandwidth and storage capacity continues to exceed the capabilities of available technologies. The growth of data-intensive digital audio and video applications, and the increased use of bandwidth-limited media such as video conferencing and full-motion video, have not only sustained the need for efficient ways to encode analog signals but have made signal compression central to digital communication and data-storage technology. In this paper we explore techniques for compressing image sequences in a manner that optimizes the results for the human receiver. We propose a new motion estimator using two novel block match algorithms which are based on human perception. Simulations with image sequences have shown an improved bit rate while maintaining "image quality" when compared to conventional motion estimation techniques using the MAD block match criterion.
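The conventional MAD block-match criterion mentioned above can be sketched directly with an exhaustive search over a small window; the perception-based matching proposed in the paper replaces this cost function, which is shown here only as the baseline:

```python
def mad(block_a, block_b):
    """Mean absolute difference between two equal-sized blocks."""
    n = len(block_a) * len(block_a[0])
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b)) / n

def block(frame, top, left, size):
    """Extract a size x size block with its top-left corner at (top, left)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def full_search(cur, ref, top, left, size=4, radius=2):
    """Exhaustive block matching: find the motion vector (dy, dx) that
    minimises the MAD between the current block and candidate blocks in
    the reference frame."""
    target = block(cur, top, left, size)
    best = (None, float("inf"))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ty, tx = top + dy, left + dx
            if ty < 0 or tx < 0 or ty + size > len(ref) or tx + size > len(ref[0]):
                continue
            cost = mad(target, block(ref, ty, tx, size))
            if cost < best[1]:
                best = ((dy, dx), cost)
    return best

# Synthetic frames: cur is ref shifted up-left by one pixel, so the block
# at (4, 4) in cur matches the block at (5, 5) in ref exactly.
ref = [[(7 * x + 13 * y) % 50 for x in range(10)] for y in range(10)]
cur = [row[1:] + [0] for row in ref[1:]] + [[0] * 10]
mv, cost = full_search(cur, ref, top=4, left=4)
```

Full search is shown for clarity; practical encoders prune the candidate set with fast search patterns.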
Abstract:
A method is described for estimating the incremental angle and angular velocity of a spacecraft using integrated rate parameters with the help of a star sensor alone. The chief advantage of this method is that the measured stars need not be identified, whereas identification of the stars is necessary in earlier methods. The proposed estimation can be carried out with all of the available measurements by a simple linear Kalman filter, albeit with a time-varying sensitivity matrix. The residuals of the angular velocity estimated by the proposed spacecraft incremental-angle and angular velocity estimation method are as accurate as those of earlier methods. This method also enables the spacecraft attitude to be reconstructed for mapping the stars onto an imaginary unit sphere in the body reference frame, which preserves the true angular separation of the stars. This paves the way for identification of the stars using any angular-separation or triangle-matching technique, applied even to a narrow field-of-view sensor that is made to sweep the sky. A numerical simulation for inertial as well as Earth-pointing spacecraft is carried out to establish the results.
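A fixed-gain (alpha-beta) tracker, the steady-state special case of a linear Kalman filter, shows the angle-and-rate estimation idea in miniature; the time-varying sensitivity matrix of the actual method is omitted, and the gains here are arbitrary illustrative values:

```python
def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1):
    """Fixed-gain (alpha-beta) tracker: a steady-state simplification of
    the linear Kalman filter, estimating angle and angular rate from
    angle measurements alone."""
    angle, rate = measurements[0], 0.0
    estimates = []
    for z in measurements[1:]:
        angle_pred = angle + rate * dt       # predict
        r = z - angle_pred                   # measurement residual
        angle = angle_pred + alpha * r       # update angle
        rate = rate + (beta / dt) * r        # update rate
        estimates.append((angle, rate))
    return estimates

# Noiseless constant-rate profile (2 rad/s): the tracker should lock on.
dt = 0.1
zs = [2.0 * k * dt for k in range(100)]
ests = alpha_beta_track(zs, dt)
```

A full Kalman filter replaces the fixed gains with gains computed from the (here time-varying) sensitivity matrix and noise covariances.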
Abstract:
Lifetime calculations for large, dense sensor networks with fixed energy resources, together with the remaining residual energy, have shown that for a constant energy resource the fault rate at the cluster head is invariant with network size when using the network layer with no MAC losses. Even after increasing the battery capacity of the nodes, the total lifetime does not increase beyond a maximum of 8 times. Because this is a serious limitation, much research has been done at the MAC layer, which allows adaptation to the specific connectivity, traffic, and channel-polling needs of sensor networks. Many MAC protocols control the channel polling of the new radios available to sensor nodes for communication; this further reduces the communication overhead through idling and sleep scheduling, thus extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate, based on joint coding, for a correlated data source at a single hop; (2a) estimating cluster-head errors using a Bayesian rule for routing with persistence clustering, when node densities are the same and stored as prior probabilities at the network layer; and (2b) estimating the upper bound on routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate many MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case where the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities ω1 and ω2 and expected error P*, the error rate for a single hop is bounded by a maximum of P = 2P*.
We study the effects of energy losses using cross-layer simulation of a large sensor-network MAC setup, and the error rates that affect finding node densities sufficient for reliable multi-hop communication when the node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior error probability is close to or higher than the bound (P ≥ 2P*).
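The 2P* bound invoked above matches the classical asymptotic nearest-neighbour bound on the Bayes error P*; a one-dimensional Gaussian sketch (illustrative parameters, not the sensor-network model) makes it concrete:

```python
import math

def bayes_error_two_gaussians(mu1, mu2, sigma):
    """Bayes error P* for two equal-variance, equal-prior 1-D Gaussian
    classes: the optimal threshold is the midpoint, so

        P* = Q(|mu2 - mu1| / (2*sigma)),  where  Q(x) = 0.5*erfc(x/sqrt(2))
    """
    d = abs(mu2 - mu1) / (2.0 * sigma)
    return 0.5 * math.erfc(d / math.sqrt(2.0))

# Illustrative class separation: means 0 and 2, unit variance.
p_star = bayes_error_two_gaussians(0.0, 2.0, 1.0)
upper_bound = 2 * p_star        # asymptotic bound cited above: P <= 2*P*
```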
Abstract:
Distributed space-time block codes (DSTBCs) from complex orthogonal designs (CODs) (both square and nonsquare), coordinate interleaved orthogonal designs (CIODs), and Clifford unitary weight designs (CUWDs) are known to lose their single-symbol ML decodable (SSD) property when used in two-hop wireless relay networks based on the amplify-and-forward protocol. For such networks, this paper constructs three new classes of high-rate, training-symbol embedded (TSE) SSD DSTBCs: TSE-CODs, TSE-CIODs, and TSE-CUWDs. The proposed codes include the training symbols inside the structure of the code, which is shown to be the key to obtaining the SSD property along with the channel estimation capability. TSE-CODs are shown to offer full diversity for arbitrary complex constellations, and the constellations for which TSE-CIODs and TSE-CUWDs offer full diversity are characterized. It is shown that DSTBCs from nonsquare TSE-CODs provide better rates (in symbols per channel use) than the known SSD DSTBCs for relay networks. Importantly, from a practical point of view, the proposed DSTBCs do not contain any zeros in their codewords; as a result, the antennas of the relay nodes do not undergo a sequence of switch-on/off transitions within every codeword, thus avoiding the antenna switching problem.