926 results for Econometric methods of discrete choice
Abstract:
Over the last couple of decades, many methods for synchronizing chaotic systems have been proposed with communications applications in view. Yet their performance has proved disappointing in the face of the nonideal character of usual channels linking transmitter and receiver, that is, due to both noise and signal propagation distortion. Here we consider a discrete-time master-slave system that synchronizes despite channel bandwidth limitations, and an allied communication system. Synchronization is achieved by introducing a digital filter that limits the spectral content of the feedback loop responsible for producing the transmitted signal. Copyright (C) 2009 Marcio Eisencraft et al.
Abstract:
This paper presents an agent-based approach to modelling individual driver behaviour under the influence of real-time traffic information. The driver behaviour models developed in this study are based on a behavioural survey of drivers which was conducted on a congested commuting corridor in Brisbane, Australia. Commuters' responses to travel information were analysed and a number of discrete choice models were developed to determine the factors influencing drivers' behaviour and their propensity to change route and adjust travel patterns. Based on the results obtained from the behavioural survey, the agent behaviour parameters which define driver characteristics, knowledge and preferences were identified and their values determined. A case study implementing a simple agent-based route choice decision model within a microscopic traffic simulation tool is also presented. Driver-vehicle units (DVUs) were modelled as autonomous software components that can each be assigned a set of goals to achieve and a database of knowledge comprising certain beliefs, intentions and preferences concerning the driving task. Each DVU provided route choice decision-making capabilities, based on perception of its environment, that were similar to the described intentions of the driver it represented. The case study clearly demonstrated the feasibility of the approach and the potential to develop more complex driver behavioural dynamics based on the belief-desire-intention agent architecture. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
The aim of this paper is twofold: firstly, to carry out a theoretical review of the most recent stated preference techniques used for eliciting consumers' preferences and, secondly, to compare the empirical results of two different stated preference discrete choice approaches. They differ in the measurement scale for the dependent variable and, therefore, in the estimation method, despite both using a multinomial logit. One of the approaches uses a complete ranking of full profiles (contingent ranking), that is, individuals must rank a set of alternatives from the most to the least preferred, and the other uses a first-choice rule in which individuals must select the most preferred option from a choice set (choice experiment). From the results we realize how important the measurement scale for the dependent variable becomes and to what extent procedure invariance is satisfied.
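Both elicitation formats in this abstract rest on the multinomial logit model, whose choice probabilities take the softmax form P_i = exp(V_i) / Σ_j exp(V_j). A minimal illustrative sketch (the function name and the systematic utilities are hypothetical, not taken from the paper):

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities P_i = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)                      # subtract the max for numerical stability
    exp_v = [math.exp(v - m) for v in utilities]
    total = sum(exp_v)
    return [e / total for e in exp_v]
```

Under a first-choice rule the likelihood multiplies these probabilities over respondents; under contingent ranking the same kernel is applied sequentially to the shrinking choice set (the "exploded" logit), which is where the measurement scale of the dependent variable enters the estimation.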
Abstract:
BACKGROUND: Protein-energy malnutrition is highly prevalent in aged populations, and the associated clinical, economic, and social burden is substantial. A valid screening method that would be robust and precise, but also easy, simple, and rapid to apply, is essential for adequate therapeutic management. OBJECTIVES: To compare the interobserver variability of 2 methods measuring food intake: semiquantitative visual estimations made by nurses versus calorie measurements performed by dieticians on the basis of standardized color digital photographs of servings before and after consumption. DESIGN: Observational monocentric pilot study. SETTING/PARTICIPANTS: A geriatric ward. The meals were randomly chosen from the meal tray. The choice was anonymous with respect to the patients who consumed them. MEASUREMENTS: The test method consisted of the estimation of calorie consumption by dieticians on the basis of standardized color digital photographs of servings before and after consumption. The reference method was based on direct visual estimations of the meals by nurses. Food intake was expressed as a percentage of the serving consumed, and calorie intake was then calculated by a dietician based on these percentages. The methods were applied with no previous training of the observers. Analysis of variance was performed to compare their interobserver variability. RESULTS: Of 15 meals consumed and initially examined, 6 were assessed with each method. Servings not consumed at all (0% consumption) or entirely consumed by the patient (100% consumption) were not included in the analysis so as to avoid systematic error. The digital photography method showed higher interobserver variability in calorie intake estimations. The difference between the compared methods was statistically significant (P < .03). CONCLUSIONS: Calorie intake measures for geriatric patients are more concordant when estimated in a semiquantitative way.
Digital photography for food intake estimation without previous specific training of dieticians should not be considered as a reference method in geriatric settings, as it shows no advantages in terms of interobserver variability.
Abstract:
Differential X-ray phase-contrast tomography (DPCT) refers to a class of promising methods for reconstructing the X-ray refractive index distribution of materials that present weak X-ray absorption contrast. The tomographic projection data in DPCT, from which an estimate of the refractive index distribution is reconstructed, correspond to one-dimensional (1D) derivatives of the two-dimensional (2D) Radon transform of the refractive index distribution. There is an important need for the development of iterative image reconstruction methods for DPCT that can yield useful images from few-view projection data, thereby mitigating the long data-acquisition times and large radiation doses associated with use of analytic reconstruction methods. In this work, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. We also investigate the use of one of the models with a modern image reconstruction algorithm for performing few-view image reconstruction of a tissue specimen.
Abstract:
OBJECTIVE: We examined the influence of clinical, radiologic, and echocardiographic characteristics on antithrombotic choice in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO), hypothesizing that features suggestive of paradoxical embolism might lead to greater use of anticoagulation. METHODS: The Risk of Paradoxical Embolism Study combined 12 databases to create the largest dataset of patients with CS and known PFO status. We used generalized linear mixed models with a random effect of component study to explore whether anticoagulation was preferentially selected based on the following: (1) younger age and absence of vascular risk factors, (2) "high-risk" echocardiographic features, and (3) neuroradiologic findings. RESULTS: A total of 1,132 patients with CS and PFO treated with anticoagulation or antiplatelets were included. Overall, 438 participants (39%) were treated with anticoagulation with a range (by database) of 22% to 54%. Treatment choice was not influenced by age or vascular risk factors. However, neuroradiologic findings (superficial or multiple infarcts) and high-risk echocardiographic features (large shunts, shunt at rest, and septal hypermobility) were predictors of anticoagulation use. CONCLUSION: Both antithrombotic regimens are widely used for secondary stroke prevention in patients with CS and PFO. Radiologic and echocardiographic features were strongly associated with treatment choice, whereas conventional vascular risk factors were not. Prior observational studies are likely to be biased by confounding by indication.
Abstract:
The purpose of this study was to determine novice teachers' perceptions of the extent to which the Brock University teacher education program focused on strategies for promoting responsibility in students. Individual interviews were conducted with ten randomly selected teachers who were graduates of this teacher education program between the years of 1989 and 1992, and a follow-up group discussion activity, with the same teachers, was also held. Findings revealed that the topic of personal responsibility was discussed within various components of the program, including counselling group sessions, but that these discussions were often brief, indirect and inconsistent. Some of the strategies which the teachers used in their own classrooms to promote responsibility in students were ones which they had acquired from those counselling group sessions or from associate teachers. Various strategies included: setting clear expectations of students with positive and negative consequences for behaviour (e.g., material rewards and detentions, respectively), communicating with other teachers and parents, and suspending students from school. A teacher's choice of any particular strategy seemed to be affected by his or her personality, teaching subject and region of employment, as well as certain aspects of the teacher education program. It was concluded that many of the teachers appeared to be controlling rude and violent behaviour, as opposed to promoting responsible behaviour. Recommendations were made for the pre-service program, as well as induction and inservice programs, to increase teacher preparedness for promoting responsible student behaviour. One of these recommendations addressed the need to help teachers learn how to effectively communicate with their students.
Abstract:
The study of variable stars is an important topic in modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to the analysis of variable star astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can be further classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables, such as novae and supernovae, occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star.
One way to identify the type of a variable star and to classify it is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, together with some other derived parameters. Of these, period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis is the application of mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomenon, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. For ground-based observations this is due to the day-night cycle and weather conditions, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can occur for several reasons, such as power leakage to other frequencies caused by the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton of the AAVSO states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It will be beneficial for the variable star astronomical community if basic parameters, such as period, amplitude and phase, are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theory of four popular period search methods is studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
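Phase Dispersion Minimisation, one of the non-parametric period search methods named above, can be sketched in a few lines: fold the light curve on a trial period, bin by phase, and prefer the period that minimises the within-bin scatter. This is an illustrative pure-Python sketch (function names and binning choices are mine, not the thesis's implementation):

```python
import math

def phase_dispersion(times, mags, period, n_bins=10):
    """Stellingwerf-style statistic: pooled variance of magnitudes within
    phase bins divided by the overall variance. A period that folds the
    light curve cleanly gives a small value."""
    mean = sum(mags) / len(mags)
    overall_var = sum((m - mean) ** 2 for m in mags) / (len(mags) - 1)
    bins = [[] for _ in range(n_bins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * n_bins), n_bins - 1)].append(m)
    num = den = 0.0
    for b in bins:
        if len(b) > 1:
            mu = sum(b) / len(b)
            num += sum((m - mu) ** 2 for m in b)
            den += len(b) - 1
    return (num / den) / overall_var

def pdm_best_period(times, mags, trial_periods):
    """Return the trial period with the smallest dispersion statistic."""
    return min(trial_periods, key=lambda p: phase_dispersion(times, mags, p))
```

Scanning a grid of trial periods and keeping the minimiser is exactly the kind of routine that must be hardened against aliases, gaps and harmonics before it can be trusted in large-database automation, which is the thesis's concern.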
Abstract:
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. These large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Abstract:
The linear quadratic Gaussian control of discrete-time Markov jump linear systems is addressed in this paper, first for state feedback, and also for dynamic output feedback using state estimation. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (τ_N), or the occurrence of a crucial failure event (τ_δ), after which the system is paralyzed. From the constructive method used here a separation principle holds, and the solutions are given in terms of a Kalman filter and a state feedback sequence of controls. The control gains are obtained by recursions from a set of algebraic Riccati equations in the former case, or by a coupled set of algebraic Riccati equations in the latter case. Copyright © 2005 IFAC.
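For intuition on the Riccati recursions mentioned above, here is a minimal scalar sketch (a hypothetical helper, with no Markov jumps or stopping time; it just iterates the standard discrete-time LQR fixed point that the coupled equations generalize):

```python
def riccati_iterate(a, b, q, r, tol=1e-12, max_iter=10000):
    """Solve the scalar discrete-time algebraic Riccati equation
        P = q + a*P*a - (a*P*b)**2 / (r + b*P*b)
    by fixed-point iteration; return (P, K) with gain K = b*P*a / (r + b*P*b),
    so the closed loop is x[k+1] = (a - b*K) x[k]."""
    P = q
    for _ in range(max_iter):
        P_new = q + a * P * a - (a * P * b) ** 2 / (r + b * P * b)
        if abs(P_new - P) < tol:
            P = P_new
            break
        P = P_new
    K = b * P * a / (r + b * P * b)
    return P, K
```

In the jump-linear setting of the paper, one such equation is solved per Markov mode and the equations are coupled through the transition probabilities, but the fixed-point structure is the same.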
Abstract:
In this paper, we show how to compute in O(n²) steps the Fourier coefficients associated with the Gelfand-Levitan approach for discrete Sobolev orthogonal polynomials on the unit circle when the support of the discrete component involving derivatives is located outside the closed unit disk. As a consequence, we deduce the outer relative asymptotics of these polynomials in terms of those associated with the original orthogonality measure. Moreover, we show how to recover the discrete part of our Sobolev inner product. © 2013 Elsevier Inc. All rights reserved.
Abstract:
The surface electrocardiogram (ECG) is an established diagnostic tool for the detection of abnormalities in the electrical activity of the heart. The interest of the ECG, however, extends beyond the diagnostic purpose. In recent years, studies in cognitive psychophysiology have related heart rate variability (HRV) to memory performance and mental workload. The aim of this thesis was to analyze the variability of surface ECG derived rhythms, at two different time scales: the discrete-event time scale, typical of beat-related features (Objective I), and the “continuous” time scale of separated sources in the ECG (Objective II), in selected scenarios relevant to psychophysiological and clinical research, respectively. Objective I) Joint time-frequency and non-linear analysis of HRV was carried out, with the goal of assessing psychophysiological workload (PPW) in response to working memory engaging tasks. Results from fourteen healthy young subjects suggest the potential use of the proposed indices in discriminating PPW levels in response to varying memory-search task difficulty. Objective II) A novel source-cancellation method based on morphology clustering was proposed for the estimation of the atrial wavefront in atrial fibrillation (AF) from body surface potential maps. Strong direct correlation between spectral concentration (SC) of atrial wavefront and temporal variability of the spectral distribution was shown in persistent AF patients, suggesting that with higher SC, shorter observation time is required to collect spectral distribution, from which the fibrillatory rate is estimated. This could be time and cost effective in clinical decision-making. The results held for reduced leads sets, suggesting that a simplified setup could also be considered, further reducing the costs. In designing the methods of this thesis, an online signal processing approach was kept, with the goal of contributing to real-world applicability. 
An algorithm for automatic assessment of ambulatory ECG quality, and an automatic ECG delineation algorithm were designed and validated.
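As a toy illustration of the HRV analysis underlying Objective I, two classic time-domain indices computed from RR intervals are sketched below (illustrative only; the thesis uses joint time-frequency and non-linear indices, which are considerably more involved):

```python
import math

def sdnn(rr_ms):
    """Standard deviation of normal-to-normal RR intervals (ms)."""
    m = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - m) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms),
    a short-term, vagally mediated variability index."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Indices like these are the starting point from which workload-sensitive spectral and non-linear descriptors are built.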
Abstract:
Clenshaw’s recurrence formula is used to derive recursive algorithms for the discrete cosine transform (DCT) and the inverse discrete cosine transform (IDCT). The recursive DCT algorithm presented here requires one fewer delay element per coefficient and one fewer multiply operation per coefficient compared with two recently proposed methods. Clenshaw’s recurrence formula provides a unified development for the recursive DCT and IDCT algorithms. The recursive algorithms apply to arbitrary transform lengths and are appropriate for VLSI implementation.
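A recursive DCT evaluation via Clenshaw's recurrence can be sketched as follows. Writing the DCT-II coefficient as X_k = Σ_n x_n cos((n + 1/2)θ_k) with θ_k = πk/N, the sums Σ x_n cos(nθ) and Σ x_n sin(nθ) are obtained from one backward recurrence b_n = x_n + 2cos(θ)·b_{n+1} − b_{n+2}. This is an illustrative per-coefficient sketch of the general idea, not the paper's optimized VLSI structure:

```python
import math

def dct2_clenshaw(x):
    """Unnormalized DCT-II computed one coefficient at a time with
    Clenshaw's recurrence: X_k = sum_n x[n] * cos(pi*(2n+1)*k / (2N))."""
    N = len(x)
    X = []
    for k in range(N):
        theta = math.pi * k / N
        c2 = 2.0 * math.cos(theta)
        b1 = b2 = 0.0
        for n in range(N - 1, -1, -1):      # backward Clenshaw recurrence
            b1, b2 = x[n] + c2 * b1 - b2, b1
        C = b1 - b2 * math.cos(theta)       # sum x[n] * cos(n*theta)
        S = b2 * math.sin(theta)            # sum x[n] * sin(n*theta)
        X.append(math.cos(theta / 2) * C - math.sin(theta / 2) * S)
    return X
```

Each coefficient needs only one running recurrence and a final rotation by θ/2, which is why the structure maps naturally onto simple delay-and-multiply hardware.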
Abstract:
Digital signal processing (DSP) techniques for biological sequence analysis continue to grow in popularity due to the inherent digital nature of these sequences. DSP methods have demonstrated early success for detection of coding regions in a gene. Recently, these methods are being used to establish DNA gene similarity. We present the inter-coefficient difference (ICD) transformation, a novel extension of the discrete Fourier transformation, which can be applied to any DNA sequence. The ICD method is a mathematical, alignment-free DNA comparison method that generates a genetic signature for any DNA sequence that is used to generate relative measures of similarity among DNA sequences. We demonstrate our method on a set of insulin genes obtained from an evolutionarily wide range of species, and on a set of avian influenza viral sequences, which represents a set of highly similar sequences. We compare phylogenetic trees generated using our technique against trees generated using traditional alignment techniques for similarity and demonstrate that the ICD method produces a highly accurate tree without requiring an alignment prior to establishing sequence similarity.
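The ICD transformation itself is specific to this paper, but the underlying DSP step it extends — taking the discrete Fourier transform of the binary indicator sequences of a DNA string — can be sketched. The function name is mine, and the power peak at index N/3 is the standard 3-periodicity cue used for coding-region detection, as mentioned above:

```python
import cmath

def indicator_dft_power(seq, k):
    """Total DFT power at index k, summed over the four binary indicator
    sequences of a DNA string (1 where the base occurs, 0 elsewhere).
    Coding regions show a peak at k = N/3 due to codon 3-periodicity."""
    N = len(seq)
    total = 0.0
    for base in "ACGT":
        X = sum((1.0 if s == base else 0.0) * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, s in enumerate(seq))
        total += abs(X) ** 2
    return total
```

Alignment-free comparison methods like the ICD build genetic signatures from spectra of this kind rather than from base-by-base alignments.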
Abstract:
PURPOSE: To determine how the ADC value of parotid glands is influenced by the choice of b-values. MATERIALS AND METHODS: In eight healthy volunteers, diffusion-weighted echo-planar imaging (DW-EPI) was performed on a 1.5 T system, with b-values (in s/mm²) of 0, 50, 100, 150, 200, 250, 300, 500, 750, and 1000. ADC values were calculated by two alternative methods (exponential vs. logarithmic fit) from five different sets of b-values: (A) all b-values; (B) b=0, 50, and 100; (C) b=0 and 750; (D) b=0, 500, and 1000; and (E) b=500, 750, and 1000. RESULTS: The mean ADC values for the different settings were (in 10⁻³ mm²/s, exponential fit): (A) 0.732+/-0.019, (B) 2.074+/-0.084, (C) 0.947+/-0.020, (D) 0.890+/-0.023, and (E) 0.581+/-0.021. ADC values were significantly (P < 0.001) different for all pairwise comparisons of settings (A-E) of b-values, except for A vs. D (P=0.172) and C vs. D (P=0.380). The ADC(B) was significantly higher than ADC(C) or ADC(D), which was significantly higher than ADC(E). ADC values from exponential vs. logarithmic fit (P=0.542), as well as left vs. right parotid gland (P=0.962), were indistinguishable. CONCLUSION: The ADC values calculated from low b-value settings were significantly higher than those calculated from high b-value settings. These results suggest that not only true diffusion but also perfusion and saliva flow may contribute to the ADC.
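The "logarithmic fit" ADC estimate is an ordinary linear least-squares fit of ln(S) against b, since the mono-exponential model S = S0·exp(−b·ADC) is linear in log space. A minimal sketch (hypothetical function name, mono-exponential model only):

```python
import math

def adc_log_fit(b_values, signals):
    """ADC from a linear least-squares fit of ln(S) against b:
    ln S = ln S0 - b * ADC, so ADC = -slope (in mm^2/s if b is in s/mm^2)."""
    n = len(b_values)
    logs = [math.log(s) for s in signals]
    mean_b = sum(b_values) / n
    mean_l = sum(logs) / n
    slope = (sum((b - mean_b) * (l - mean_l) for b, l in zip(b_values, logs))
             / sum((b - mean_b) ** 2 for b in b_values))
    return -slope
```

When the measured signal also contains a fast-decaying perfusion component, fitting only low b-values steepens the apparent initial slope and inflates the ADC, which is consistent with the pattern across settings (B) through (E) reported above.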