967 results for Impulse Facilities


Relevance: 10.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean-square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and on the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by deriving an averaged tracking model of these coefficients under the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we establish a new property of adaptive lattice filters: the polynomial-order-reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that this technique yields a better probability of detection for the reduced-order phase signals than the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficient approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss why the stochastic gradient algorithm, which gives desirable results for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it relies on the minimum mean-square error criterion). To deal with such problems, the minimum dispersion criterion, fractional lower-order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation for autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss how the impulsiveness of stable processes generates some misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
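As a rough illustration of the kind of update analyzed in the first part of this abstract, the following is a minimal sketch of a single-stage gradient-adaptive lattice predictor tracking a second-order polynomial-phase signal in complex white noise. It assumes the standard lattice error recursions; the step size, signal parameters, and variable names are illustrative choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
n = np.arange(N)
phase = 2 * np.pi * (0.05 * n + 1e-5 * n**2)          # order-2 polynomial phase
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
x = np.exp(1j * phase) + 0.1 * noise                   # FM signal in complex white noise

mu = 0.01                      # adaptation step size (illustrative)
k = 0j                         # stage-1 reflection coefficient
k_track = np.empty(N, dtype=complex)
k_track[0] = k
for t in range(1, N):
    f1 = x[t] + np.conj(k) * x[t - 1]    # forward prediction error
    b1 = x[t - 1] + k * x[t]             # backward prediction error
    # stochastic gradient step on |f1|^2 + |b1|^2 w.r.t. the reflection coefficient
    k = k - mu * (np.conj(f1) * x[t - 1] + b1 * np.conj(x[t]))
    k_track[t] = k
# k_track shows the coefficient tracking the time-varying signal statistics
```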

Relevance: 10.00%

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of the thesis. For FM signals, the approach of time-frequency analysis is considered; this is Part-II of the thesis. In Part-I we utilize sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last 10 years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The HT can only be realized approximately, using a finite impulse response (FIR) digital filter, and this realization introduces further complexity in the loop in addition to approximations and frequency limitations on the input signal. We avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages: a time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of sinusoidal DPLLs based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered. The idea of replacing the HT by a time-delay may be of interest in other signal processing systems, so we analyze and compare the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTLs is analyzed in additive Gaussian noise. Since DPLLs need time to lock, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e., signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques are required. For IF estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We choose the non-parametric approach, which is based on time-frequency analysis; it is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real life and synthetic signals are of multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-H. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named as the T -class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifacts reduction. The T-distributions has been used in the IF adaptive algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enables direct amplitude estimation for the components of a multicomponent
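To make the peak-of-TFD idea concrete, here is a minimal sketch that estimates the IF of a linear FM signal from the peak of a time-frequency representation, using a plain spectrogram as a stand-in for the quadratic TFDs the thesis studies. The sampling rate, chirp parameters, and window length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * (100 * t + 50 * t**2))   # linear FM: IF(t) = 100 + 100 t Hz

# Spectrogram as a simple TFD; the IF estimate is the peak frequency per time slice
f, tt, Z = stft(x, fs=fs, nperseg=128)
if_estimate = f[np.abs(Z).argmax(axis=0)]        # Hz, one value per time slice
```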

Relevance: 10.00%

Abstract:

Background/Rationale: Guided by the need-driven dementia-compromised behavior (NDB) model, this study examined influences of the physical environment on wandering behavior. Methods: Using a descriptive, cross-sectional design, 122 wanderers from 28 long-term care (LTC) facilities were videotaped 10 to 12 times; data on wandering, light, sound, temperature and humidity levels, location, ambiance, and crowding were obtained. Associations between environmental variables and wandering were evaluated with chi-square and t tests; the model was evaluated using logistic regression. Results: In all, 80% of wandering occurred in the resident's own room, dayrooms, hallways, or dining rooms. When observed in other residents' rooms, hallways, shower/baths, or off-unit locations, wanderers were likely (60%-92% of observations) to wander. The model fit the data well overall (logistic regression χ²(5) = 50.38, P < .0001) and by wandering type. Conclusions: Location, light, sound, proximity of others, and ambiance are associated with wandering and may serve to inform environmental designs and care practices.
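As an illustration of the kind of model evaluation reported above, below is a hedged sketch of a logistic regression of a binary wandering outcome on environmental predictors. The data are synthetic and the variable names and scales are invented for illustration; they are not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
light = rng.uniform(50, 500, n)        # illustrative lux values
sound = rng.uniform(40, 80, n)         # illustrative dB values
crowding = rng.integers(0, 10, n)      # people nearby (invented scale)

# Synthetic outcome: probability of wandering rises with each predictor
logit_p = -3 + 0.002 * light + 0.03 * sound + 0.1 * crowding
wandering = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([light, sound, crowding]))
fit = sm.Logit(wandering, X).fit(disp=False)
print(fit.llr, fit.llr_pvalue)          # likelihood-ratio chi-square and p-value
```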

Relevance: 10.00%

Abstract:

This study examined the psychometric properties of an expanded version of the Algase Wandering Scale (Version 2) (AWS-V2) in a cross-cultural sample. A cross-sectional survey design was used. Study subjects were 172 English-speaking persons with dementia (PWD) from long-term care facilities in the USA, Canada, and Australia. Two or more facility staff rated each subject on the AWS-V2. Demographic and cognitive data (MMSE) were also obtained, and staff provided information on their own knowledge of the subject and of dementia. Separate factor analyses on data from two samples of raters each explained more than 66% of the variance in AWS-V2 scores and validated four of the five factors in the original scale (persistent walking, navigational deficit, eloping behavior, and shadowing). Items added to create the AWS-V2 strengthened the shadowing subscale, failed to improve the routinized walking subscale, and, compared to the original AWS, added a factor: attention shifting. Evidence for validity was found in significant correlations and ANOVAs relating the AWS-V2 and most subscales to a single-item indicator of wandering and to the MMSE. Evidence of reliability was shown by the internal consistency of the AWS-V2 (0.87, 0.88) and its subscales (range 0.66 to 0.88), by Kappa values for individual items (17 of 27 greater than 0.4), and by ANOVAs comparing ratings across rater groups (nurses, nurse aides, and other staff). Analyses support the validity and reliability of the AWS-V2 overall and of the persistent walking, spatial disorientation, and eloping behavior subscales. The AWS-V2 and its subscales are an appropriate way to measure wandering as conceptualized within the Need-driven Dementia-compromised Behavior Model in studies of English-speaking subjects. Suggestions for further strengthening the scale and for extending its use to clinical applications are described.
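The internal-consistency figures quoted above are Cronbach's alpha values; a minimal sketch of that computation on an invented ratings matrix follows (the subject and item counts mirror the study, but the scores themselves are simulated).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Simulated ratings: 172 subjects x 27 items on a 1-4 scale (invented data)
rng = np.random.default_rng(2)
trait = rng.normal(size=(172, 1))                       # shared latent trait
ratings = np.clip(np.round(2.5 + trait + 0.8 * rng.normal(size=(172, 27))), 1, 4)
print(round(cronbach_alpha(ratings), 2))
```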

Relevance: 10.00%

Abstract:

To understand human behavior, it is important to know under what conditions people deviate from selfish rationality. This study explores the interaction of natural survival instincts and internalized social norms using data on the sinking of the Titanic and the Lusitania. We show that time pressure appears to be crucial in explaining behavior under extreme conditions of life and death. Even though the two vessels and the composition of their passengers were quite similar, the behavior of the individuals on board was dramatically different. On the Lusitania, selfish behavior dominated (which corresponds to the classical homo oeconomicus); on the Titanic, social norms and social status (class) dominated, which contradicts standard economics. This difference could be attributed to the fact that the Lusitania sank in 18 minutes, creating a situation in which the short-run flight impulse dominated behavior. On the slowly sinking Titanic (2 hours, 40 minutes), there was time for socially determined behavioral patterns to re-emerge. To our knowledge, this is the first time that these shipping disasters have been analyzed in a comparative manner with advanced statistical (econometric) techniques using individual data on the passengers and crew. Knowing human behavior under extreme conditions allows us to gain insight into how varied human behavior can be under differing external conditions.

Relevance: 10.00%

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions each carries, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate that process. We also present the theory behind dual-state count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in practice. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
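Below is a hedged sketch of the kind of simulation described: crash counts are generated as Poisson trials (Bernoulli trials with small, unequal probabilities) at sites with heterogeneous risk and low exposure, and the observed share of zeros is compared with what a single Poisson fit predicts. All parameter values are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sites, n_passages = 2000, 500   # few vehicle passages per site: low exposure

# Small crash probabilities that differ across sites (heterogeneous risk)
site_p = np.clip(rng.gamma(shape=0.5, scale=0.004, size=n_sites), 0, 1)
crashes = rng.binomial(n_passages, site_p)       # Poisson-trials counts per site

lam = crashes.mean()                             # single Poisson fit by moments
print(f"observed share of zeros:   {(crashes == 0).mean():.3f}")
print(f"Poisson-fit P(count = 0):  {stats.poisson.pmf(0, lam):.3f}")
# Observed zeros exceed the Poisson prediction without any dual-state mechanism
```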

Relevance: 10.00%

Abstract:

The concept of asset management is not a new idea but an evolving one that has been attracting the attention of many organisations operating and/or owning some kind of infrastructure asset. The term asset management has been used widely, with fundamental differences in interpretation and usage. Regardless of the context of its usage, asset management implies the process of optimising return by scrutinising performance and making key strategic decisions throughout all phases of an asset's lifecycle (Sarfi and Tao, 2004). Hence, asset management is a philosophy and discipline through which organisations are enabled to deploy their resources more effectively, providing higher levels of customer service and reliability while balancing financial objectives. In Australia, asset management made its way into public works in 1993, when the Australian Accounting Standards Board issued Australian Accounting Standard 27 (AAS27). AAS27 required government agencies to capitalise and depreciate assets rather than expense them against earnings. This development indirectly forced organisations managing infrastructure assets to consider the useful life and cost effectiveness of asset investments. The Australian State Treasuries and the Australian National Audit Office were the first organisations to formalise the concepts and principles of asset management in Australia, defining asset management as "a systematic, structured process covering the whole life of an asset" (Australian National Audit Office, 1996). This initiative led other government bodies and industry sectors to develop, refine and apply the concept of asset management in the management of their respective infrastructure assets. Hence, it can be argued that asset management emerged as a separate and recognised field of management during the late 1990s. In comparison to disciplines such as construction, facilities, maintenance, project management, economics and finance, to name a few, asset management is a relatively new discipline and is clearly a contemporary topic. The primary contributors to the literature in asset management are largely government organisations and industry practitioners, whose contributions take the form of guidelines and reports on the best practice of asset management. More recently, some of these best practices have been formalised into standards, such as PAS 55 in the UK (IAM, 2004; IAM, 2008b). As such, current literature in this field tends to lack well-grounded theories. To date, while the field has received relatively more interest and attention from empirical researchers, its advancement, particularly in terms of the volume of academic and theoretical development, is at best moderate. A plausible reason for this lack of advancement is that many researchers and practitioners are still unaware of, or unimpressed by, the contribution that asset management can make to the performance of infrastructure assets. This paper seeks to explore the practices of organisations that manage infrastructure assets in order to develop a framework of strategic infrastructure asset management processes. It begins by examining the development of asset management. This is followed by a discussion of the method adopted for this paper. Next is a discussion of the results from the case studies, which first describes the goals of infrastructure asset management and how they can support broader business goals.
Following this, a set of core processes that can support the achievement of business goals is provided. These core processes are synthesised from the practices of asset managers in the case study organisations.

Relevance: 10.00%

Abstract:

Background: In India, poor feeding practices in early childhood contribute to the burden of malnutrition and infant and child mortality. Objective: To estimate infant and young child feeding indicators and determinants of selected feeding practices in India. Methods: The sample consisted of 20,108 children aged 0 to 23 months from the National Family Health Survey India 2005–06. Selected indicators were examined against a set of variables using univariate and multivariate analyses. Results: Only 23.5% of mothers initiated breastfeeding within the first hour after birth, 99.2% had ever breastfed their infant, 89.8% were currently breastfeeding, and 14.8% were currently bottle-feeding. Among infants under 6 months of age, 46.4% were exclusively breastfed, and 56.7% of those aged 6 to 9 months received complementary foods. The risk factors for not exclusively breastfeeding were higher household wealth index quintiles (OR for richest = 2.03), delivery in a health facility (OR = 1.35), and living in the Northern region. Higher numbers of antenatal care visits were associated with increased rates of exclusive breastfeeding (OR for ≥ 7 antenatal visits = 0.58). The rates of timely initiation of breastfeeding were higher among women who were better educated (OR for secondary education or above = 0.79), were working (OR = 0.79), made more antenatal clinic visits (OR for ≥ 7 antenatal visits = 0.48), and were exposed to the radio (OR = 0.76); the rates were lower in women who delivered by cesarean section (OR = 2.52). The risk factors for bottle-feeding included cesarean delivery (OR = 1.44), higher household wealth index quintiles (OR = 3.06), maternal employment (OR = 1.29), higher maternal education level (OR = 1.32), urban residence (OR = 1.46), and absence of a postnatal examination (OR = 1.24). The rates of timely complementary feeding were higher for mothers who had more antenatal visits (OR = 0.57) and for those who watched television (OR = 0.75). Conclusions: Revitalization of the Baby Friendly Hospital Initiative in health facilities is recommended. Targeted interventions may be necessary to improve infant feeding practices in mothers who reside in urban areas, are more educated, and are from wealthier households.

Relevance: 10.00%

Abstract:

With an increasing level of collaboration amongst researchers, software developers and industry practitioners over the past three decades, building information modelling (BIM) is now recognized as an emerging technological and procedural shift within the architecture, engineering and construction (AEC) industry. BIM is not only considered a way to make a profound impact on the AEC professions, but is also regarded as an approach to assist the industry in developing new ways of thinking and practice. Despite the widespread development and recognition of BIM, succinct and systematic reviews of existing BIM research and achievements are scarce. It is also necessary to take stock of existing applications and have a fresh look at where BIM should be heading and how it can benefit from the advances being made. This paper first presents a review of BIM research and achievements in the AEC industry. A number of suggestions are then made for future research in BIM. This paper maintains that the value of BIM during the design and construction phases has been well documented over the last decade, and that new research needs to expand the level of development and analysis from the design/build stage to post-construction and facility asset management. New research in BIM could also move beyond the traditional building type to managing a broader range of facilities and built assets and to providing preventative maintenance schedules for sustainable and intelligent buildings.

Relevance: 10.00%

Abstract:

In general, the performance of construction projects, including their sustainability performance, does not meet optimal expectations. One aspect of this is the performance of the participants, who are independent and make a significant impact on overall project outcomes. Of these participants, the client is traditionally the owner of the project, the architect or engineer is engaged as the lead designer, and a contractor is selected to construct the facilities. Generally, the performance of the participants is gauged by considering three main factors, namely time, cost and quality. As the level of satisfaction is a subjective issue, it is rarely used in the performance evaluation of construction work. Recently, various approaches to the measurement of satisfaction have been developed in an attempt to determine the performance of construction project outcomes: for instance, client satisfaction, customer satisfaction, contractor satisfaction, occupant satisfaction and home buyer satisfaction. These not only identify the performance of the construction project but are also used to improve and maintain relationships. In addition, these assessments are necessary for the continuous improvement and enhanced cooperation of participants. The measurement of satisfaction levels primarily involves expectations and perceptions. An expectation can be regarded as a comparative standard of different needs, motives and beliefs, while a perception is a subjective interpretation that is influenced by moods, experiences and values. This suggests that the disparity between perceptions and expectations may be used to represent different levels of satisfaction. However, this concept is rather new and in need of further investigation. This chapter examines the methods commonly practised in measuring satisfaction levels today and the advantages of promoting these methods. The results provide a preliminary review of the advantages of satisfaction measurement in the construction industry, and recommendations are made concerning the most appropriate methods to use in identifying the performance of project outcomes.
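As a toy illustration of the perception-expectation disparity described above (in the spirit of SERVQUAL-style gap scoring), the sketch below computes a gap score per factor and an overall satisfaction index. The factors, scale, and ratings are invented for illustration, not taken from the chapter.

```python
# Toy perception-expectation gap scoring; all values are invented.
expectations = {"time": 4.5, "cost": 4.0, "quality": 4.8}   # 1-5 rating scale
perceptions = {"time": 3.8, "cost": 4.2, "quality": 4.1}

gaps = {factor: perceptions[factor] - expectations[factor] for factor in expectations}
satisfaction_index = sum(gaps.values()) / len(gaps)   # negative: expectations unmet
print(gaps)
print(round(satisfaction_index, 2))
```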

Relevance: 10.00%

Abstract:

Background: Patterns of diagnosis and management for men diagnosed with prostate cancer in Queensland, Australia, have not yet been systematically documented, and so assumptions of equity are untested. This longitudinal study investigates the association between prostate cancer diagnostic and treatment outcomes and key area-level characteristics and individual-level demographic, clinical and psychosocial factors. Methods/Design: A total of 1064 men diagnosed with prostate cancer between February 2005 and July 2007 were recruited through hospital-based urology outpatient clinics and private practices in the centres of Brisbane, Townsville and Mackay (82% of those referred). Additional clinical and diagnostic information for all 6609 men diagnosed with prostate cancer in Queensland during the study period was obtained via the population-based Queensland Cancer Registry. Respondent data are collected using telephone and self-administered questionnaires at pre-treatment and at 2, 6, 12, 24, 36, 48 and 60 months post-treatment. Assessments include demographics, medical history, patterns of care, disease and treatment characteristics together with outcomes associated with prostate cancer, as well as information about quality of life and psychological adjustment. Complementary detailed treatment information is abstracted from participants' medical records held in hospitals and private treatment facilities and collated with health service utilisation data obtained from Medicare Australia. Information about the characteristics of geographical areas is being obtained from data custodians such as the Australian Bureau of Statistics. Geo-coding and spatial technology will be used to calculate road travel distances from patients' residences to treatment centres. Analyses will be conducted using standard statistical methods along with multilevel regression models including individual and area-level components. Conclusions: Information about the diagnostic and treatment patterns of men diagnosed with prostate cancer is crucial for rational planning and development of health delivery and supportive care services to ensure equitable access to health services, regardless of geographical location and individual characteristics. This study is a secondary outcome of the randomised controlled trial registered with the Australian New Zealand Clinical Trials Registry (ACTRN12607000233426).