772 results for C21 - Cross-Sectional Models
Resumo:
Executive Summary Emergency health is a critical component of Australia’s health system and emergency departments (EDs) are increasingly congested from growing demand and blocked access to inpatient beds. The Emergency Health Services Queensland (EHSQ) study aims to identify the factors driving increased demand for emergency health and to evaluate strategies which may safely reduce the future demand growth. This monograph addresses the perspectives of users of both ambulance services and EDs. The research reported here aimed to identify the perspectives of users of emergency health services, both ambulance services and public hospital Emergency Departments and to identify the factors that they took into consideration when exercising their choice of location for acute health care. A cross-sectional survey design was used involving a survey of patients or their carers presenting to the EDs of a stratified sample of eight hospitals. A specific purpose questionnaire was developed based on a novel theoretical model which had been derived from analysis of the literature (Monograph 1). Two survey versions were developed: one for adult patients (self-complete); and one for children (to be completed by parents/guardians). The questionnaires measured perceptions of social support, health status, illness severity, self-efficacy; beliefs and attitudes towards ED and ambulance services; reasons for using these services, and actions taken prior to the service request. The survey was conducted at a stratified sample of eight hospitals representing major cities (four), inner regional (two) and outer regional and remote (two). Due to practical limitations, data were collected for ambulance and ED users within hospital EDs, while patients were waiting for or under treatment. A sample size quota was determined for each ED based on their 2009/10 presentation volumes. 
The data collection was conducted by four members of the research team and a group of eight interviewers between March and May 2011 (corresponding to the autumn season). Of the 1608 patients in the eight emergency departments, the interviewers were able to approach 1361 (85%) and seek their consent to participate in the study. In total, 911 valid surveys were available for analysis (response rate = 67%). These studies demonstrate that patients elected to attend hospital EDs in a considered fashion after weighing up alternatives, and there is no evidence of deliberate or ill-informed misuse.
• Patients attending EDs have high levels of social support and self-efficacy that speak to the considered and purposeful nature of their exercise of choice.
• About one third of patients had new conditions, while two thirds had chronic illnesses.
• More than half the attendees (53.1%) had consulted a healthcare professional prior to making the decision.
• The decision to seek urgent care at an ED was mostly constructed around the patient's perception of the urgency and severity of their illness, reinforced by a strong perception that the hospital ED was the correct location for them (better specialised staff, better care for my condition, other options not as suitable).
• 33% of the respondents held private hospital insurance but nevertheless attended a public hospital ED.
Similarly, patients exercised considered and rational judgements in their choice to seek help from the ambulance service.
• The decision to call for ambulance assistance was based on a strong perception about the severity of the illness (too severe to use other means of transport) and that other options were not considered appropriate.
• The decision also appeared influenced by a perception that the ambulance provided appropriate access to the ED considered most appropriate for the patient's particular condition (too severe to go elsewhere, all facilities in one spot, better specialised and better care).
• In 43.8% of cases a health care professional advised use of the ambulance.
• Only a small number of people perceived that ambulance services should be freely available regardless of severity or appropriateness.
These findings confirm a growing understanding that the choice of professional emergency health care services is not made lightly, but rather made by reasonable people exercising a judgement which is influenced by public awareness of the risks of acute illness and which is most often informed by health professionals. It is also made on the basis of a rational weighing up of alternatives and a deliberate and considered choice to seek assistance from the service which the patient perceived was most appropriate to their needs at that time. These findings add weight to dispensing with the public perception that ED and ambulance congestion is a result of inappropriate choices by patients. The challenge for health services is to better understand patients' needs and to design and validate services that meet those needs. The failure of our health system to do so should not be grounds for blaming the patient or claiming that patients made inappropriate choices.
Resumo:
The cross-sectional indentation method is extended to evaluate the interfacial adhesion between brittle coating and ductile substrate. The experimental results on electroplated chromium coating/steel substrate show that the interfacial separation occurs due to the edge chipping of brittle coating. The corresponding models are established to elucidate interfacial separation processes. This work further highlights the advantages and potential of this novel indentation method.
Resumo:
Nasal congestion is one of the most troublesome symptoms of many upper airways diseases. We characterized the effect of selective α2c-adrenergic agonists in animal models of nasal congestion. In porcine mucosa tissue, compound A and compound B contracted nasal veins with only modest effects on arteries. In in vivo experiments, we examined the nasal decongestant dose-response characteristics, pharmacokinetic/pharmacodynamic relationship, duration of action, potential development of tolerance, and topical efficacy of α2c-adrenergic agonists. Acoustic rhinometry was used to determine nasal cavity dimensions following intranasal compound 48/80 (1%, 75 µl). In feline experiments, compound 48/80 decreased nasal cavity volume and minimum cross-sectional areas by 77% and 40%, respectively. Oral administration of compound A (0.1-3.0 mg/kg), compound B (0.3-5.0 mg/kg), and d-pseudoephedrine (0.3 and 1.0 mg/kg) produced dose-dependent decongestion. Unlike d-pseudoephedrine, compounds A and B did not alter systolic blood pressure. The plasma exposure of compound A to produce a robust decongestion (EC80) was 500 nM, which related well to the duration of action of approximately 4.0 hours. No tolerance to the decongestant effect of compound A (1.0 mg/kg p.o.) was observed. To study the topical efficacies of compounds A and B, the drugs were given topically 30 minutes after compound 48/80 (a therapeutic paradigm) where both agents reversed nasal congestion. Finally, nasal-decongestive activity was confirmed in the dog. We demonstrate that α2c-adrenergic agonists behave as nasal decongestants without cardiovascular actions in animal models of upper airway congestion.
Resumo:
BACKGROUND: Even though a large proportion of physiotherapists worldwide work in the private sector, very little is known about the organizations within which they practice. Such knowledge is important to help understand contexts of practice and how they influence the quality of services and patient outcomes. The purpose of this study was to: 1) describe characteristics of organizations where physiotherapists practice in the private sector, and 2) explore the existence of a taxonomy of organizational models. METHODS: This was a cross-sectional quantitative survey of 236 randomly-selected physiotherapists. Participants completed a purpose-designed questionnaire online or by telephone, covering organizational vision, resources, structures and practices. Organizational characteristics were analyzed descriptively, while organizational models were identified by multiple correspondence analyses. RESULTS: Most organizations were for-profit (93.2%), located in urban areas (91.5%), and within buildings containing multiple businesses/organizations (76.7%). The majority included multiple providers (89.8%) from diverse professions, mainly physiotherapy assistants (68.7%), massage therapists (67.3%) and osteopaths (50.2%). Four organizational models were identified: 1) solo practice, 2) middle-scale multiprovider, 3) large-scale multiprovider and 4) mixed. CONCLUSIONS: The results of this study provide a detailed description of the organizations where physiotherapists practice, and highlight the importance of human resources in differentiating organizational models. Further research examining the influences of these organizational characteristics and models on outcomes such as physiotherapists' professional practices and patient outcomes is needed.
Resumo:
Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful to reduce the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of Stochastic Discount Factor (SDF) or pricing kernel as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factorial risk is compensated since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals.
We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
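The SDF unification of the two latent-variable concepts described above can be written schematically as follows (the notation here is chosen for illustration, not taken from the paper):

```latex
% Pricing with a stochastic discount factor m_{t+1}:
E_t\!\left[ m_{t+1}\, R_{i,t+1} \right] = 1, \qquad i = 1, \dots, n
% The SDF is spanned by K factors, with coefficients driven by state variables Z_t:
m_{t+1} = \sum_{k=1}^{K} \lambda_k(Z_t)\, f_{k,t+1}
% which delivers a conditional beta pricing relation:
E_t\!\left[ R_{i,t+1} \right] - r_{f,t} = \sum_{k=1}^{K} \beta_{ik}(Z_t)\, \nu_{k,t}
```

Here the ν's denote factor risk premia; the conditional cross-sectional factor structure corresponds to conditional independence of contemporaneous returns given the factors f.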
Resumo:
My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first chapter, we develop a computationally efficient state-smoothing procedure for linear Gaussian state-space models. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyze the computational efficiency of methods based on the Kalman filter, the Cholesky factor algorithm, and our new method, using operation counts and computational experiments. We show that for many important cases our method is more efficient. The gains are particularly large when the dimension of the observed variables is large, or when repeated draws of the states are needed for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, used to analyze transaction count data in financial markets. In the second chapter, we propose a new technique for analyzing multivariate stochastic volatility models. The proposed method is based on efficient draws of the volatility from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginals with asset-specific degrees of freedom to capture the heterogeneity of returns.
We draw the volatility as a block in the time dimension and one series at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of one return's volatility given the volatilities of the other returns, the parameters, and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and two multivariate models. In the third chapter, we assess the information contributed by realized volatility measures to the estimation and forecasting of volatility when prices are measured with and without error. We use stochastic volatility models. We take the viewpoint of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that carries information about it. We employ Bayesian Markov chain Monte Carlo methods to estimate the models, which allow the computation not only of posterior densities of volatility but also of predictive densities of future volatility. We compare volatility forecasts, and the hit rates of forecasts, that do and do not use the information contained in realized volatility. This approach differs from those in the existing empirical literature, which mostly restrict themselves to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns on stock indices and exchange rates. The competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
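The linear Gaussian state-space form underlying the first chapter can be written as follows (notation chosen here for illustration, not taken from the thesis):

```latex
% Observation equation: y_t observed, \alpha_t latent state
y_t = Z\,\alpha_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, H)
% State transition equation
\alpha_{t+1} = T\,\alpha_t + \eta_t, \qquad \eta_t \sim N(0, Q)
```

State smoothing means drawing from the joint conditional distribution of the states given all the observations; the Kalman-filter-based methods, the Cholesky factor algorithm, and the thesis's new method are alternative ways of performing this draw.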
Resumo:
The aim of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their double dimension, individual and temporal, makes it possible to control for unobservable heterogeneity across individuals and across time periods, and thus to carry out richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it yields inference that is more accurate than that based on classical asymptotic theory, or inference that would otherwise be impossible in the presence of nuisance parameters. The method consists of drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data, but without rigorous theoretical justification or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. The three chapters analyze its validity and application. The first chapter posits a simple model with a single parameter and tackles the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual dimension and the time dimension, is valid in these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity; resampling only in the time dimension is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model.
Three types of regressors are considered: individual characteristics, time characteristics, and regressors that vary both over time and across individuals. Using a two-way error components model, the ordinary least squares estimator, and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the time dimension alone is valid only for the subvector of parameters associated with regressors that vary only over time. Double resampling is valid for inference on the full parameter vector. The third chapter revisits the difference-in-differences exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that ignoring heterogeneity and temporal dependence causes substantial test size distortions when evaluating the impact of public policies with panel data. One of the recommended solutions is the bootstrap. The double resampling method developed in this thesis corrects the test size problem and thus allows the impact of public policies to be evaluated correctly.
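The double resampling scheme described above can be sketched in a few lines. This is an illustration of the idea (resampling individuals and time periods independently, each with replacement), not the thesis's exact algorithm, and all data are simulated:

```python
import numpy as np

def double_resample_mean(y, B=999, seed=0):
    """Bootstrap distribution of the overall mean of a panel y (N x T),
    resampling individuals and time periods independently with
    replacement -- a sketch of double resampling, not the thesis's
    exact algorithm."""
    rng = np.random.default_rng(seed)
    N, T = y.shape
    stats = np.empty(B)
    for b in range(B):
        rows = rng.integers(0, N, size=N)   # resample individuals
        cols = rng.integers(0, T, size=T)   # resample time periods
        stats[b] = y[np.ix_(rows, cols)].mean()
    return stats

# Toy panel with individual and time effects plus noise
rng = np.random.default_rng(1)
N, T = 50, 20
alpha = rng.normal(0, 1, size=(N, 1))       # individual heterogeneity
gamma = rng.normal(0, 1, size=(1, T))       # time heterogeneity
y = 2.0 + alpha + gamma + rng.normal(0, 0.5, size=(N, T))

draws = double_resample_mean(y)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% bootstrap interval for the mean: [{lo:.2f}, {hi:.2f}]")
```

Resampling only `rows` or only `cols` would reproduce the single-dimension schemes that the first chapter shows to be invalid under heterogeneity in the other dimension.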
Resumo:
Three influential theoretical models of OCD focus upon the cognitive factors of inflated responsibility (Salkovskis, 1985), thought-action fusion (Rachman, 1993) and meta-cognitive beliefs (Wells and Matthews, 1994). Little is known about the relevance of these models in adolescents or about the nature of any direct or mediating relationships between these variables and OCD symptoms. This was a cross-sectional correlational design with 223 non-clinical adolescents aged 13 to 16 years. All participants completed questionnaires measuring inflated responsibility, thought-action fusion, meta-cognitive beliefs and obsessive-compulsive symptoms. Inflated responsibility, thought-action fusion and meta-cognitive beliefs were significantly associated with higher levels of obsessive-compulsive symptoms. These variables accounted for 35% of the variance in obsessive-compulsive symptoms, with inflated responsibility and meta-cognitive beliefs both emerging as significant independent predictors. Inflated responsibility completely mediated the effect of thought-action fusion and partially mediated the effect of meta-cognitive beliefs. Support for the downward extension of cognitive models to understanding OCD in a younger population was shown. Findings suggest that inflated responsibility and meta-cognitive beliefs may be particularly important cognitive concepts in OCD. Methodological limitations must be borne in mind and future research is needed to replicate and extend findings in clinical samples. Keywords: Obsessive compulsive disorder, adolescents, cognitive models.
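The complete-mediation pattern reported above can be illustrated numerically with the classic two-regression comparison (total effect vs. direct effect after adjusting for the mediator). All data, variable names, and effect sizes below are simulated assumptions, not the study's results:

```python
import numpy as np

def ols_coef(X, y):
    """OLS coefficients with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# Simulate a complete-mediation structure akin to the one reported:
# thought-action fusion (TAF) -> inflated responsibility (IR) -> symptoms.
rng = np.random.default_rng(0)
n = 223                                    # sample size from the study
taf = rng.normal(size=n)
ir = 0.6 * taf + rng.normal(size=n)        # mediator driven by TAF
sym = 0.8 * ir + rng.normal(size=n)        # symptoms driven only by IR

total = ols_coef(taf, sym)[1]                          # TAF -> symptoms, unadjusted
direct = ols_coef(np.column_stack([taf, ir]), sym)[1]  # TAF effect adjusted for IR
print(f"total effect {total:.2f} vs direct effect {direct:.2f}")
```

Complete mediation corresponds to the direct effect shrinking toward zero once the mediator is included, which is what this simulated design produces.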
Resumo:
Cognitive models of obsessive compulsive disorder (OCD) have been influential in understanding and treating the disorder in adults. Cognitive models may also be applicable to children and adolescents, which would have important implications for treatment. The aim of this systematic review was to evaluate research that examined the applicability of the cognitive model of OCD to children and adolescents. Inclusion criteria were set broadly, but most studies identified included data regarding responsibility appraisals, thought-action fusion or meta-cognitive models of OCD in children or adolescents. Eleven studies were identified in a systematic literature search. Seven studies were with non-clinical samples, and 10 studies were cross-sectional. Only one study did not support cognitive models of OCD in children and adolescents; it used a clinical sample and was the only experimental study. Overall, the results strongly supported the applicability of cognitive models of OCD to children and young people. There were, however, clear gaps in the literature. Future research should include experimental studies and clinical groups, and should test which of the different models provides more explanatory power.
Resumo:
This paper confronts the Capital Asset Pricing Model - CAPM - and the 3-Factor Fama-French - FF - model using both Brazilian and US stock market data for the same sample period (1999-2007). The US data serve only as a benchmark for comparative purposes. We use two competing econometric methods, the Generalized Method of Moments (GMM) of Hansen (1982) and the Iterative Nonlinear Seemingly Unrelated Regression Estimation (ITNLSUR) of Burmeister and McElroy (1988). Both methods nest other options based on the procedure of Fama-MacBeth (1973). The estimations show that the FF model fits the Brazilian data better than the CAPM, but it is imprecise compared with its US analog. We argue that this is a consequence of an absence of clear-cut anomalies in Brazilian data, especially those related to firm size. The tests on the efficiency of the models - nullity of intercepts and fit of the cross-sectional regressions - presented mixed conclusions. The tests on intercepts failed to reject the CAPM when Brazilian value-premium-wise portfolios were used, contrasting with US data, a very well documented conclusion. The ITNLSUR estimated an economically reasonable and statistically significant market risk premium for Brazil of around 6.5% per year without resorting to any particular data set aggregation. However, we could not find the same for the US data during the identical period, even using a larger data set. [Translated from Portuguese:] This study seeks to contribute to the Brazilian empirical literature on asset pricing models. Two of the main pricing models are confronted: the Capital Asset Pricing Model (CAPM) and the Fama-French 3-factor model. Econometric tools little explored in the national literature are applied to the estimation of pricing equations: the GMM and ITNLSUR methods. The estimates are compared with those obtained from US data for the same period, and it is concluded that in Brazil the success of the Fama and French model is limited.
As a by-product of the analysis, (i) we test for the presence of so-called anomalies in returns, and (ii) we compute the risk premium implicit in stock returns. The data reveal the presence of a value premium, but not a size premium. Using the ITNLSUR method, the market risk premium is positive and significant, at around 6.5% per year.
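Since both GMM and ITNLSUR are said to nest the Fama-MacBeth (1973) procedure, a minimal sketch of that two-pass estimator may help fix ideas. All data below are simulated with a single synthetic factor; this is an illustration of the procedure, not the paper's estimation:

```python
import numpy as np

def fama_macbeth(excess_returns, factors):
    """Two-pass Fama-MacBeth (1973) estimation.
    excess_returns: T x N asset excess returns; factors: T x K."""
    T, N = excess_returns.shape
    X = np.column_stack([np.ones(T), factors])
    # Pass 1: time-series regressions give each asset's factor betas
    betas = np.linalg.lstsq(X, excess_returns, rcond=None)[0][1:]   # K x N
    # Pass 2: period-by-period cross-sectional regressions on the betas
    Z = np.column_stack([np.ones(N), betas.T])                      # N x (K+1)
    lambdas = np.array([np.linalg.lstsq(Z, excess_returns[t], rcond=None)[0]
                        for t in range(T)])                         # T x (K+1)
    est = lambdas.mean(axis=0)               # time-averaged premia
    se = lambdas.std(axis=0, ddof=1) / np.sqrt(T)
    return est, se

# Simulated one-factor market model with a true premium of 0.5
rng = np.random.default_rng(0)
T, N = 600, 25
f = rng.normal(0.5, 2.0, size=(T, 1))            # factor with mean 0.5
true_beta = rng.uniform(0.5, 1.5, size=N)
r = f @ true_beta[None, :] + rng.normal(0, 1.0, size=(T, N))

est, se = fama_macbeth(r, f)
print("intercept and market premium estimates:", est.round(2))
```

The efficiency tests mentioned in the abstract (nullity of intercepts) correspond to testing whether the first element of `est` is statistically zero.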
Resumo:
Multi-factor models constitute a useful tool to explain cross-sectional covariance in equity returns. We propose in this paper the use of irregularly spaced returns in the multi-factor model estimation and provide an empirical example with the 389 most liquid equities in the Brazilian market. The market index is significant in explaining equity returns, while the US$/Brazilian Real exchange rate and the Brazilian standard interest rate are not. This example shows the usefulness of the estimation method in further using the model to fill in missing values and to provide interval forecasts.
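The idea of estimating a factor model from irregularly observed returns and then using it to fill in missing values can be sketched with a simple per-asset OLS on the observed periods only. This is a simplified illustration of the idea, not the paper's estimator, and all data are simulated:

```python
import numpy as np

def factor_fit_with_gaps(returns, factors):
    """Per-asset OLS of returns on factors using only the periods in
    which the asset actually traded (non-NaN entries), then fitted
    values to fill the gaps. returns: T x N with NaN for untraded
    days; factors: T x K."""
    T, N = returns.shape
    X = np.column_stack([np.ones(T), factors])
    filled = returns.copy()
    betas = np.empty((X.shape[1], N))
    for i in range(N):
        obs = ~np.isnan(returns[:, i])
        betas[:, i] = np.linalg.lstsq(X[obs], returns[obs, i], rcond=None)[0]
        filled[~obs, i] = X[~obs] @ betas[:, i]   # fill in missing values
    return betas, filled

# Simulated market-model returns with 30% of days untraded
rng = np.random.default_rng(2)
T, N = 250, 5
mkt = rng.normal(0.0, 1.0, size=(T, 1))
true_b = rng.uniform(0.8, 1.2, size=N)
r = mkt @ true_b[None, :] + rng.normal(0, 0.3, size=(T, N))
r[rng.random((T, N)) < 0.3] = np.nan

betas, filled = factor_fit_with_gaps(r, mkt)
print("estimated market betas:", betas[1].round(2))
```

The same fitted factor loadings, combined with a model for the factors, are what make interval forecasts possible for assets with sparse trading histories.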
Resumo:
The aim of analogue model experiments in geology is to simulate structures in nature under specific imposed boundary conditions using materials whose rheological properties are similar to those of rocks in nature. In the late 1980s, X-ray computed tomography (CT) was first applied to the analysis of such models. In early studies only a limited number of cross-sectional slices could be recorded because of the time involved in CT data acquisition, the long cooling periods for the X-ray source and computational capacity. Technological improvements presently allow an almost unlimited number of closely spaced serial cross-sections to be acquired and calculated. Computer visualization software allows a full 3D analysis of every recorded stage. Such analyses are especially valuable when trying to understand complex geological structures, commonly with lateral changes in 3D geometry. Periodic acquisition of volumetric data sets in the course of the experiment makes it possible to carry out a 4D analysis of the model, i.e. 3D analysis through time. Examples are shown of 4D analysis of analogue models that tested the influence of lateral rheological changes on the structures obtained in contractional and extensional settings.
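The 4D (3D plus time) bookkeeping described above is straightforward to express in code: each CT acquisition yields a stack of cross-sectional slices, which are stacked into a 3D volume, and repeated acquisitions form a 4D array. The dimensions and the density cutoff below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Each acquisition: a stack of closely spaced cross-sectional slices.
# Repeated acquisitions over the experiment: a 4D (time, z, y, x) array.
n_times, n_slices, ny, nx = 4, 32, 64, 64
rng = np.random.default_rng(0)

volumes = []
for t in range(n_times):
    slices = [rng.random((ny, nx)) for _ in range(n_slices)]  # one scan
    volumes.append(np.stack(slices))          # 3D volume: (slices, y, x)
model_4d = np.stack(volumes)                  # 4D: (time, slices, y, x)

# Example 4D analysis: track the fraction of voxels above a density
# cutoff through the stages of the experiment.
dense_fraction = (model_4d > 0.5).mean(axis=(1, 2, 3))
print("dense fraction per stage:", dense_fraction.round(3))
```

Visualization software then renders each 3D volume, and stepping through the first axis gives the "3D analysis through time" the abstract refers to.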
Resumo:
Liquidity is an important attribute of an asset that investors would like to take into consideration when making investment decisions. However, the previous empirical evidence on whether liquidity is a determinant of stock returns is not unanimous. This dissertation provides a very comprehensive study of the role of liquidity in asset pricing, using the Fama-French (1993) three-factor model and the Kraus and Litzenberger (1976) three-moment CAPM as models for risk adjustment. The relationships between liquidity and well-known determinants of stock returns, such as size and book-to-market, are also investigated. This study examines the liquidity and asset pricing issues for both intertemporal and cross-sectional data. The results indicate the existence of a liquidity premium, i.e., less liquid stocks demand a higher rate of return than more liquid stocks. More specifically, a drop of 1 percent in liquidity is associated with a higher rate of return of about 2 to 3 basis points per month. Further investigation reveals that neither the Fama-French three-factor model nor the three-moment CAPM captures the liquidity premium. Finally, the results show that well-known determinants of stock returns such as size and book-to-market do not serve as proxies for liquidity. Overall, this dissertation shows that a liquidity premium exists in the stock market and that liquidity is a distinct effect, not explained by the presence of market factors, non-market factors, or other stock characteristics.
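The abstract does not specify which liquidity measure the dissertation employs, but one widely used proxy in this literature is the Amihud (2002) illiquidity ratio, which rises as a stock becomes less liquid. A minimal sketch on simulated data:

```python
import numpy as np

def amihud_illiquidity(returns, dollar_volume):
    """Amihud (2002) illiquidity ratio: the average of |return| divided
    by dollar trading volume. Offered as one common liquidity proxy;
    not necessarily the measure used in the dissertation."""
    return np.mean(np.abs(returns) / dollar_volume)

# Two hypothetical stocks with identical returns but very different
# trading volumes (all numbers simulated for illustration).
rng = np.random.default_rng(3)
ret = rng.normal(0, 0.02, size=250)              # daily returns
thin_volume = rng.uniform(1e5, 2e5, size=250)    # thinly traded stock
thick_volume = thin_volume * 100                 # heavily traded stock

illiq_thin = amihud_illiquidity(ret, thin_volume)
illiq_thick = amihud_illiquidity(ret, thick_volume)
print(illiq_thin > illiq_thick)   # less liquid -> higher measure
```

Under the liquidity-premium result above, the stock with the higher measure would be expected to earn the higher average return, all else equal.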