920 results for RESIDENCE TIME DISTRIBUTION


Relevance: 40.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 40.00%

Abstract:

In this paper, we propose features extracted from heart rate variability (HRV), based on the first and second conditional moments of its time-frequency distribution (TFD), as an additional guide for seizure detection in newborns. The HRV features in the low frequency band (LF: 0-0.07 Hz), mid frequency band (MF: 0.07-0.15 Hz), and high frequency band (HF: 0.15-0.6 Hz) were obtained by time-frequency analysis using the modified-B distribution (MBD). Results of ongoing time-frequency research are presented. Based on our preliminary results, the first conditional moment of HRV (also known as the mean/central frequency) in the LF band and the second conditional moment of HRV (also known as the variance/instantaneous bandwidth, IB) in the HF band can be used as good features to discriminate newborn seizures from non-seizures.
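As an illustration of the conditional-moment features described above, the following minimal Python sketch (not the authors' implementation; the function name and array layout are assumptions) treats each time slice of a TFD as a conditional density over frequency and extracts the mean/central frequency and the instantaneous bandwidth at each time instant.

import numpy as np

def tfd_conditional_moments(tfd, freqs):
    """First and second conditional moments of a time-frequency
    distribution (TFD) at each time instant.

    tfd   : array of shape (n_freqs, n_times), non-negative TFD values
    freqs : array of frequency bins in Hz, length n_freqs
    """
    power = tfd.sum(axis=0)
    power = np.where(power == 0, np.finfo(float).eps, power)
    p = tfd / power                                   # conditional density over frequency

    mean_freq = (freqs[:, None] * p).sum(axis=0)      # first moment: mean/central frequency
    var_freq = (((freqs[:, None] - mean_freq) ** 2) * p).sum(axis=0)
    inst_bandwidth = np.sqrt(var_freq)                # second moment: instantaneous bandwidth
    return mean_freq, inst_bandwidth

Restricting freqs to a single band before the computation would give band-limited versions of these features, corresponding to the LF mean frequency and HF instantaneous bandwidth used as seizure/non-seizure discriminators.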

Relevance: 40.00%

Abstract:

An experimental method for characterizing the time-resolved phase noise of a fast-switching tunable laser is discussed. The method experimentally determines a complementary cumulative distribution function of the laser's differential phase as a function of time after a switching event. A time-resolved bit error rate for differential quadrature phase shift keying formatted data, calculated using the phase noise measurements, was fitted to an experimental time-resolved bit error rate measured using a field programmable gate array, with good agreement found between the two time-resolved bit error rates.
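The complementary cumulative distribution function mentioned above can be estimated empirically; the snippet below is only an illustrative sketch (names and data layout assumed), computing P(|Δφ| > threshold) from differential-phase samples captured at a fixed delay after the switching event.

import numpy as np

def differential_phase_ccdf(dphi_samples, thresholds):
    """Empirical complementary CDF of the absolute differential phase.

    dphi_samples : 1-D array of differential phase values (rad), one per
                   switching event, all taken at the same delay after the switch
    thresholds   : 1-D array of phase thresholds (rad)
    """
    abs_dphi = np.abs(np.asarray(dphi_samples, dtype=float))
    return np.array([(abs_dphi > t).mean() for t in thresholds])

# Repeating this over a sweep of delays gives the CCDF as a function of
# time after the switching event, from which a time-resolved BER can be derived.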

Relevance: 40.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 30.00%

Abstract:

Dispersion characteristics of respiratory droplets in indoor environments are of special interest for controlling the transmission of airborne diseases. This study adopts an Eulerian method to investigate the spatial concentration distribution and temporal evolution of exhaled and sneezed/coughed droplets in the 1.0-10.0 μm range in an office room with three air distribution methods: mixing ventilation (MV), displacement ventilation (DV), and under-floor air distribution (UFAD). The diffusion, gravitational settling, and deposition mechanisms of particulate matter are accounted for in the one-way-coupled Eulerian approach. The simulation results show that exhaled droplets with diameters up to 10.0 μm from normal respiration are distributed uniformly in MV, while they are trapped at breathing height by thermal stratification in DV and UFAD, resulting in a high droplet concentration and a high exposure risk for other occupants. Sneezed/coughed droplets are diluted much more slowly in DV/UFAD than in MV, and the low air speed in the breathing zone in DV/UFAD can lead to prolonged droplet residence there.
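As a rough guide to the gravitational settling term mentioned above (a back-of-the-envelope sketch, not the study's CFD model), the Stokes terminal velocity for droplets in the 1.0-10.0 μm range can be estimated as follows; the density and viscosity values are generic assumptions.

def stokes_settling_velocity(d_p, rho_p=1000.0, rho_a=1.2, mu=1.8e-5, g=9.81):
    """Terminal settling velocity (m/s) of a small spherical droplet in air
    under Stokes drag (valid for roughly 1-10 micrometre droplets)."""
    return (rho_p - rho_a) * g * d_p ** 2 / (18.0 * mu)

# A 10 micrometre droplet settles at roughly 3 mm/s, while a 1 micrometre
# droplet settles about 100 times more slowly, which is why small exhaled
# droplets follow the room air flow so closely.
print(stokes_settling_velocity(10e-6))
print(stokes_settling_velocity(1e-6))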

Relevance: 30.00%

Abstract:

Bag sampling techniques can be used to store an aerosol temporarily and therefore provide sufficient time to use sensitive but slow instrumental techniques for recording detailed particle size distributions. Laboratory-based assessments of the method were conducted to examine size-dependent deposition loss coefficients for aerosols held in Velostat™ bags conforming to a horizontal cylindrical geometry. Deposition losses of NaCl particles in the range of 10 nm to 160 nm were analysed in relation to bag size, storage time, and sampling flow rate. The results suggest that the bag sampling method is most useful for moderately short sampling periods of about 5 minutes.
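A first-order wall-loss model is one simple way to express the size-dependent deposition loss coefficients referred to above; the sketch below is illustrative only (the exponential-decay assumption and the numbers in the example are mine, not the paper's).

import numpy as np

def deposition_loss_coefficient(n0, n_t, t):
    """Deposition loss coefficient beta (1/s), assuming first-order wall
    losses inside the bag: N(t) = N0 * exp(-beta * t)."""
    return np.log(n0 / n_t) / t

# Example: a concentration falling from 1000 to 800 cm^-3 over a 5 minute
# (300 s) storage period gives beta of roughly 7.4e-4 per second.
print(deposition_loss_coefficient(1000.0, 800.0, 300.0))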

Relevance: 30.00%

Abstract:

Transportation disadvantage has been recognised as a key source of social exclusion, and an appropriate process is therefore required to investigate and resolve the problem. Currently, transportation disadvantage is determined on the basis of income, poverty and mobility level; it may be better regarded from an accessibility perspective, since it represents the inability of individuals to access desired activities. This paper attempts to justify a process for determining transportation disadvantage by incorporating accessibility and social transportation conflict as the essence of a framework. The framework embeds space-time organisation within the dimension of accessibility to arrive at a rigorous definition of transportation disadvantage. In developing the framework, the definition, dimensions, components and measures of accessibility were scrutinised. The findings suggest that the definition and dimensions offer a significant approach for evaluating the travel experience of the disadvantaged, and location accessibility measures are incorporated to strengthen the determination of accessibility level. A literature review of social exclusion and mobility-related exclusion identified the dimensions and sources of transportation disadvantage, and revealed that the appropriate approach for identifying the transportation disadvantaged is to incorporate space-time organisation within the studied components. The suggested framework is an inter-related process consisting of the components of accessibility: the individual, the network (transport system) and activities (destinations). The integration and correlation among these components determine the level of transportation disadvantage. These findings are used to retrieve the spatial distribution of the transportation disadvantaged so that appropriate policies can be developed to resolve the problem.
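One common form of the location accessibility measures mentioned above is a gravity-type (Hansen) index; the sketch below merely illustrates that idea, and the decay parameter and example numbers are assumptions rather than values from the paper.

import numpy as np

def gravity_accessibility(opportunities, travel_times, beta=0.1):
    """Gravity-type accessibility for one individual/location: opportunities
    at each destination, discounted by an exponential decay of travel time."""
    opportunities = np.asarray(opportunities, dtype=float)
    travel_times = np.asarray(travel_times, dtype=float)
    return float((opportunities * np.exp(-beta * travel_times)).sum())

# Example: three destinations offering 50, 120 and 30 activities at
# 10, 25 and 40 minutes of travel time respectively.
print(gravity_accessibility([50, 120, 30], [10, 25, 40]))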

Relevance: 30.00%

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. The method is based on identifying the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of five states of Australia are also found to possess long memory; for these series, heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method, and are applied to the exchange rates and electricity prices of Part I with the aim of confirming the long-range dependence established by MF-DFA.

The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia, and comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order moments, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
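A minimal sketch of the (multifractal) detrended fluctuation analysis used in Part I is given below, assuming first-order polynomial detrending and non-overlapping windows; it illustrates the technique and is not the code used in the thesis. The slope of log F_q(s) against log s over the chosen scales estimates the scaling exponent h(q); q = 2 recovers the standard DFA.

import numpy as np

def mfdfa_fluctuation(x, scales, q=2.0):
    """q-th order fluctuation function F_q(s) of series x for each scale s."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())              # integrated (profile) series
    fq = []
    for s in scales:
        n_seg = len(profile) // s
        rms = np.empty(n_seg)
        t = np.arange(s)
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            coeff = np.polyfit(t, seg, 1)          # first-order (linear) detrending
            rms[i] = np.sqrt(np.mean((seg - np.polyval(coeff, t)) ** 2))
        if q == 0:                                 # logarithmic average for q = 0
            fq.append(np.exp(0.5 * np.mean(np.log(rms ** 2))))
        else:
            fq.append(np.mean(rms ** q) ** (1.0 / q))
    return np.asarray(fq)

# Scaling exponent h(q) is the slope of log F_q(s) versus log s, e.g.:
# scales = np.unique(np.logspace(1, 3, 20).astype(int))
# h2 = np.polyfit(np.log(scales), np.log(mfdfa_fluctuation(returns, scales, q=2)), 1)[0]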

Relevance: 30.00%

Abstract:

In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist, and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits; percentiles describe tail behaviour. The estimation of VaR is a complex task, and it is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, perhaps controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness: is skewness constant, or is there some significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently; in particular we are interested in what we will call para-moments, i.e. mean, variance, skewness and kurtosis. The GLDs are used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30), but it provides an idea of the data-generating process (DGP) underlying the data and helps in choosing a good technique to model them. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs also suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on the three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
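To illustrate the moving-window idea in the semi-parametric approach, the sketch below computes a rolling one-day VaR from empirical percentiles; the thesis fits GLDs inside each window instead, so this is a simplified stand-in, and the window length and confidence level are assumptions.

import numpy as np

def rolling_empirical_var(returns, window=250, alpha=0.01):
    """Rolling Value-at-Risk: for each day, the loss at the alpha quantile
    of the previous `window` returns, reported as a positive number."""
    returns = np.asarray(returns, dtype=float)
    var = np.full(len(returns), np.nan)
    for t in range(window, len(returns)):
        var[t] = -np.quantile(returns[t - window:t], alpha)
    return var

# A return below -var[t] on day t counts as a VaR exceedance; the observed
# exceedance rate should be close to alpha if the model is adequate.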

Relevance: 30.00%

Abstract:

We study an overlapping-generations model in which agents' mortality risks, and consequently their impatience, are endogenously determined by private and public investment in health care. Revenues allocated to public health care are determined by a voting process. We find that the degree of substitutability between public and private health expenditures matters for the macroeconomic outcomes of the model. Higher substitutability implies a "crowding-out" effect, which in turn adversely affects mortality risks and impatience, leading to lower public expenditures on health care in the political equilibrium. Consequently, higher substitutability is associated with greater polarization in wealth and with long-run distributions that are bimodal.

Relevance: 30.00%

Abstract:

The existence of any film genre depends on the effective operation of distribution networks. Contingencies of distribution play an important role in determining the content of individual texts and the characteristics of film genres; they enable new genres to emerge at the same time as they impose limits on generic change. This article sets out an alternative way of doing genre studies, based on an analysis of distributive circuits rather than film texts or generic categories. Our objective is to provide a conceptual framework that can account for the multiple ways in which distribution networks leave their traces on film texts and audience expectations, with specific reference to international horror networks, and to offer some preliminary suggestions as to how distribution analysis can be integrated into existing genre studies methodologies.

Relevance: 30.00%

Abstract:

In this paper, the optimal allocation and sizing of distributed generators (DGs) in a distribution system are studied. To achieve this goal, an optimization problem is solved whose main objectives are to minimise the DG cost and to maximise reliability simultaneously. The active power balance between loads and DGs during the isolation time is used as a constraint. Another point considered in this process is load shedding: if the total DG active power in a zone isolated by the sectionalizers because of a fault is less than the total active power of the loads located in that zone, the program sheds loads one by one, following a priority rule, until the active power balance is satisfied. This assumption decreases the reliability index SAIDI compared with the case in which all loads in a zone are shed whenever the total DG power is less than the total load power. To validate the proposed method, a 17-bus distribution system is employed and the results are analysed.
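The priority-based load-shedding rule described above can be sketched as follows; this is an illustrative reconstruction (names, units and the example zone are assumptions), not the authors' program.

def shed_loads(dg_power_kw, loads_kw, priority):
    """Shed loads one by one, least important first, until the DG active
    power available in the islanded zone covers the remaining demand.

    dg_power_kw : total DG active power in the zone (kW)
    loads_kw    : dict mapping load name -> active power demand (kW)
    priority    : load names ordered from least to most important
    """
    served = dict(loads_kw)
    shed = []
    for name in priority:
        if sum(served.values()) <= dg_power_kw:     # balance constraint met
            break
        shed.append(name)
        served.pop(name)
    return served, shed

# Example: 300 kW of DG cannot cover 380 kW of load, so the two least
# important loads are shed until the active power balance is satisfied.
print(shed_loads(300.0, {"L1": 150, "L2": 120, "L3": 70, "L4": 40},
                 ["L4", "L3", "L2", "L1"]))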

Relevance: 30.00%

Abstract:

Background: This study examined the quality of life (QOL), measured by the Functional Assessment of Cancer Therapy (FACT) questionnaire, among urban (n=277) and non-urban (n=323) breast cancer survivors and women from the general population (n=1140) in Queensland, Australia.

Methods: Population-based samples of breast cancer survivors aged <75 years who were 12 months post-diagnosis and similarly-aged women from the general population were recruited between 2002 and 2007.

Results: Age-adjusted QOL among urban and non-urban breast cancer survivors was similar, although QOL related to breast cancer concerns was the weakest domain and was lower among non-urban survivors than their urban counterparts (36.8 versus 40.4, P<0.01). Irrespective of residence, breast cancer survivors, on average, reported scores on most QOL scales comparable to their general population peers, although physical well-being was significantly lower among non-urban survivors (versus the general population, P<0.01). Overall, around 20%-33% of survivors experienced lower QOL than peers without the disease. The odds of reporting QOL below normative levels were increased more than two-fold for those who experienced complications following surgery, reported upper-body problems, had higher perceived stress levels and/or had a poor perception of handling stress (P<0.01 for all).

Conclusions: Results can be used to identify subgroups of women at risk of low QOL and to inform components of tailored recovery interventions to optimize QOL for these women following cancer treatment.

Relevance: 30.00%

Abstract:

At the Mater Children’s Hospital, approximately 80% of patients presenting with Adolescent Idiopathic Scoliosis and requiring corrective surgery receive a fulcrum bending radiograph. The fulcrum bending radiograph provides a measurement of spine flexibility and a better indication of achievable surgical correction than lateral-bending radiographs (Cheung and Luk, 1997; Hay et al., 2008). The magnitude and distribution of the corrective force exerted by the bolster on the patient’s body are unknown. The objective of this pilot study was to measure, for the first time, the forces transmitted to the patient’s ribs through the bolster during the fulcrum bending radiograph.