996 results for objective variables
Abstract:
The objective of this paper is to estimate the impact of residential job accessibility on female employment probability in the metropolitan areas of Barcelona and Madrid. Following a “spatial mismatch” framework, we estimate a female employment probability equation that includes variables controlling for personal characteristics, residential segregation and employment potential on the public transport network. The data come from the 2001 Microcensus of INE (National Institute of Statistics). The research focuses on the treatment of endogeneity problems and the measurement of the accessibility variables. Our results show that low job accessibility by public transport negatively affects employment probability, and the intensity of this effect tends to decrease with the individual's educational attainment. A higher degree of residential segregation also significantly reduces employment probability.
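As a rough illustration of the kind of employment-probability equation the abstract describes, the sketch below fits a plain probit with hypothetical variable and file names (employed, accessibility, segregation, education, microcensus_sample.csv); it does not reproduce the paper's instrumental treatment of endogeneity.

```python
# Minimal probit sketch: employment probability on job accessibility and
# residential segregation. All column names and the data file are hypothetical,
# and the paper's endogeneity correction is omitted.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("microcensus_sample.csv")  # stand-in for a Microcensus 2001 extract

model = smf.probit(
    "employed ~ accessibility + segregation + education + age + I(age**2)",
    data=df,
).fit()
print(model.summary())

# Average marginal effects indicate how low public-transport accessibility
# shifts the probability of being employed.
print(model.get_margeff().summary())
```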
Abstract:
OBJECTIVE: Study of the uptake of new medical technologies provides useful information on the transfer of published evidence into usual practice. We conducted an audit of selected hospitals in three countries (Canada, France, and Switzerland) to identify clinical predictors of low-molecular-weight (LMW) heparin use and outpatient treatment, and to compare the pace of uptake of these new therapeutic approaches across hospitals. DESIGN: Historical review of medical records. SETTING AND PARTICIPANTS: We reviewed the medical records of 3043 patients diagnosed with deep vein thrombosis (DVT) in five Canadian, two French, and two Swiss teaching hospitals from 1994 to 1998. MEASURES: We explored independent clinical variables associated with LMW heparin use and outpatient treatment, and determined crude and adjusted rates of LMW heparin use and outpatient treatment across hospitals. RESULTS: For the years studied, the overall rates of LMW heparin use and outpatient treatment in the study sample were 34.1% and 15.8%, respectively, with higher rates of use in later years. Many comorbidities were negatively associated with outpatient treatment, and risk-adjusted rates of use of these new approaches varied significantly across hospitals. CONCLUSION: There has been a relatively rapid uptake of LMW heparins and outpatient treatment for DVT in their early years of availability, but the pace of uptake has varied considerably across hospitals and countries.
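A hedged sketch of the kind of predictor analysis the audit describes: a logistic model of LMW heparin use on clinical covariates with hospital and year effects. Column and file names (lmwh, cancer, renal_failure, dvt_audit.csv) are hypothetical, and the study's actual risk-adjustment model is not specified in the abstract.

```python
# Illustrative logistic regression of LMW heparin use on clinical predictors,
# with hospital and calendar-year fixed effects; all names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

dvt = pd.read_csv("dvt_audit.csv")
fit = smf.logit(
    "lmwh ~ age + cancer + renal_failure + C(hospital) + C(year)",
    data=dvt,
).fit()
print(fit.summary())
# The C(hospital) coefficients give one simple view of adjusted between-hospital
# variation in uptake.
```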
Abstract:
The objective of this paper is to analyse to what extent the use of cross-section data will distort the estimated elasticities for car ownership demand when the observed variables do not correspond to an equilibrium state for some individuals in the sample. Our proposal consists of approximating the equilibrium values of the observed variables by constructing a pseudo-panel data set, which entails averaging individuals observed at different points in time into cohorts. The results show that individual and aggregate data lead to almost the same value for the income elasticity, whereas for the working-adult elasticity the similarity is less pronounced.
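A minimal sketch of the pseudo-panel construction described above: individuals from repeated cross-sections are averaged into cohorts, here defined by five-year birth bands (an assumption), with hypothetical file and column names.

```python
# Build a pseudo-panel by averaging individuals into birth-year cohorts per
# survey year; the data file and column names are hypothetical.
import pandas as pd

cs = pd.read_csv("repeated_cross_sections.csv")   # one row per individual per survey
cs["cohort"] = (cs["birth_year"] // 5) * 5        # five-year birth cohorts (assumed width)

pseudo_panel = (
    cs.groupby(["cohort", "survey_year"])[["cars", "income", "working_adults"]]
      .mean()
      .reset_index()
)
# Each row now approximates the equilibrium values for a cohort in a given year,
# and elasticities can be estimated on this aggregated data set.
print(pseudo_panel.head())
```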
Abstract:
Introduction: Coordination is a strategy chosen by the central nervous system to control movement and maintain stability during gait. Coordinated multi-joint movements require a complex interaction between nervous outputs, biomechanical constraints, and proprioception. Quantitatively understanding and modeling gait coordination remains a challenge. Surgeons lack a way to model and appreciate the coordination of patients before and after surgery of the lower limbs. Patients alter their gait patterns and their kinematic synergies when they walk faster or slower than normal speed in order to maintain their stability and minimize the energy cost of locomotion. The goal of this study was to provide a dynamical-system approach to quantitatively describe human gait coordination and to apply it to patients before and after total knee arthroplasty. Methods: A new method of quantitative analysis of inter-joint coordination during gait was designed, providing a general model that captures the whole dynamics and reveals the kinematic synergies at various walking speeds. The proposed model imposes a relationship among the lower-limb joint angles (hips and knees) to parameterize the dynamics of locomotion of each individual. An integration of different analysis tools, such as harmonic analysis, principal component analysis, and artificial neural networks, helped overcome the high dimensionality, temporal dependence, and non-linear relationships of the gait patterns. Ten patients were studied using an ambulatory gait device (Physilog®). Each participant was asked to perform two 30 m walking trials at 3 different speeds and to complete an EQ-5D questionnaire, a WOMAC and a Knee Society Score. Lower-limb rotations were measured by four miniature angular rate sensors mounted, respectively, on each shank and thigh. The outcomes of the eight patients undergoing total knee arthroplasty, recorded pre-operatively and post-operatively at 6 weeks, 3 months, 6 months and 1 year, were compared to those of 2 age-matched healthy subjects. Results: The new method provided coordination scores at various walking speeds, ranging between 0 and 10. It determined the overall coordination of the lower limbs as well as the contribution of each joint to the total coordination. The differences between the pre-operative and post-operative coordination values were correlated with the improvements in the subjective outcome scores. Although the study group was small, the results showed a new way to objectively quantify the gait coordination of patients undergoing total knee arthroplasty using only portable body-fixed sensors. Conclusion: A new method for objective gait coordination analysis has been developed, with very encouraging results regarding the objective outcome of lower limb surgery.
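The full pipeline (harmonic analysis, PCA and a neural network yielding a 0-10 score) is not reproduced here; the sketch below only illustrates one named ingredient, a principal component analysis of lower-limb joint angles, using synthetic trajectories rather than Physilog® recordings.

```python
# PCA on synthetic hip/knee angle trajectories as a toy illustration of
# extracting kinematic synergies; this is not the coordination score of the paper.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)                    # one normalised gait cycle
angles = np.column_stack([
    30 * np.sin(t),                                   # left hip flexion/extension
    30 * np.sin(t + np.pi),                           # right hip (half a cycle out of phase)
    60 * np.clip(np.sin(t - 0.5), 0, None),           # left knee flexion
    60 * np.clip(np.sin(t + np.pi - 0.5), 0, None),   # right knee flexion
]) + rng.normal(0, 2, (200, 4))                       # sensor-like noise

pca = PCA(n_components=2).fit(angles)
print("variance captured by the first two synergies:", pca.explained_variance_ratio_)
```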
Abstract:
OBJECTIVE: To examine the accuracy of brain multimodal monitoring, consisting of intracranial pressure, brain tissue PO2, and cerebral microdialysis, in detecting cerebral hypoperfusion in patients with severe traumatic brain injury. DESIGN: Prospective single-center study. PATIENTS: Patients with severe traumatic brain injury. SETTING: Medico-surgical ICU, university hospital. INTERVENTION: Intracranial pressure, brain tissue PO2, and cerebral microdialysis monitoring (right frontal lobe, apparently normal tissue) combined with cerebral blood flow measurements using perfusion CT. MEASUREMENTS AND MAIN RESULTS: Cerebral blood flow was measured using perfusion CT in the tissue area around the intracranial monitoring (regional cerebral blood flow) and in bilateral supraventricular brain areas (global cerebral blood flow) and was matched to cerebral physiologic variables. The accuracy of intracranial monitoring in predicting cerebral hypoperfusion (defined as an oligemic regional cerebral blood flow < 35 mL/100 g/min) was examined using areas under the receiver-operating characteristic curves. Thirty perfusion CT scans (median, 27 hr [interquartile range, 20-45] after traumatic brain injury) were performed on 27 patients (age, 39 yr [24-54 yr]; Glasgow Coma Scale, 7 [6-8]; 24/27 [89%] with diffuse injury). Regional cerebral blood flow correlated significantly with global cerebral blood flow (Pearson r = 0.70, p < 0.01). Compared with normal regional cerebral blood flow (n = 16), low regional cerebral blood flow (n = 14) measurements had a higher proportion of samples with intracranial pressure more than 20 mm Hg (13% vs 30%), brain tissue PO2 less than 20 mm Hg (9% vs 20%), cerebral microdialysis glucose less than 1 mmol/L (22% vs 57%), and lactate/pyruvate ratio more than 40 (4% vs 14%; all p < 0.05). Compared with intracranial pressure monitoring alone (area under the receiver-operating characteristic curve, 0.74 [95% CI, 0.61-0.87]), monitoring intracranial pressure + brain tissue PO2 (area under the receiver-operating characteristic curve, 0.84 [0.74-0.93]) or intracranial pressure + brain tissue PO2 + cerebral microdialysis (area under the receiver-operating characteristic curve, 0.88 [0.79-0.96]) was significantly more accurate in predicting low regional cerebral blood flow (both p < 0.05). CONCLUSION: Brain multimodal monitoring, including intracranial pressure, brain tissue PO2, and cerebral microdialysis, is more accurate than intracranial pressure monitoring alone in detecting cerebral hypoperfusion at the bedside in patients with severe traumatic brain injury and predominantly diffuse injury.
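As a rough, non-authoritative sketch of the AUC comparison reported above, the code below scores three monitoring combinations against a low-regional-CBF label with logistic models and ROC AUC; the data file and column names are hypothetical, and the study's statistical approach (including how correlated AUCs were compared) is not reproduced.

```python
# Compare monitoring combinations for predicting low regional CBF via ROC AUC.
# File and column names are hypothetical; AUCs here are in-sample only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

data = pd.read_csv("multimodal_monitoring.csv")
y = data["low_rCBF"]                      # 1 if regional CBF < 35 mL/100 g/min

combos = {
    "ICP alone": ["icp"],
    "ICP + PbtO2": ["icp", "pbto2"],
    "ICP + PbtO2 + CMD": ["icp", "pbto2", "cmd_glucose", "lactate_pyruvate_ratio"],
}
for label, cols in combos.items():
    clf = LogisticRegression().fit(data[cols], y)
    auc = roc_auc_score(y, clf.predict_proba(data[cols])[:, 1])
    print(f"{label}: AUC = {auc:.2f}")
```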
Abstract:
This paper assesses the impact of official central bank interventions (CBIs) on exchange rate returns, their volatility and bilateral correlations. By exploiting the recent publication of intervention data by the Bank of England, this study is able to investigate official interventions by a total of four central banks, whereas previous studies have been limited to three (the Federal Reserve, Bundesbank and Bank of Japan). The results of the existing literature are reappraised and refined. In particular, unilateral CBI is found to be more successful than coordinated CBI. The likely implications of these findings are then discussed.
Abstract:
The unconditional expectation of social welfare is often used to assess alternative macroeconomic policy rules in applied quantitative research. It is shown that it is generally possible to derive a linear-quadratic problem that approximates the exact non-linear problem in which the unconditional expectation of the objective is maximised and the steady state is distorted. Thus, the measure of policy performance is a linear combination of second moments of economic variables, which is relatively easy to compute numerically and can be used to rank alternative policy rules. The approach is applied to a simple Calvo-type model under various monetary policy rules.
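One illustrative way to write the resulting criterion, assuming the approximation yields weights λ_i on a set of target variables x_{i,t} (the exact variables and weights depend on the model):

```latex
% Sketch: up to terms independent of policy, unconditional expected welfare is a
% weighted sum of unconditional second moments of the target variables.
\[
  \mathbb{E}[W] \;\approx\; \bar{W} \;-\; \tfrac{1}{2}\sum_{i} \lambda_i \,\mathbb{E}\!\left[x_{i,t}^{2}\right],
\]
% where the x_{i,t} are deviations of variables such as inflation and the output
% gap from their (distorted) steady-state values, and the weights \lambda_i come
% from the second-order approximation of the welfare objective.
```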
Abstract:
Macroeconomists working with multivariate models typically face uncertainty over which (if any) of their variables have long run steady states which are subject to breaks. Furthermore, the nature of the break process is often unknown. In this paper, we draw on methods from the Bayesian clustering literature to develop an econometric methodology which: i) finds groups of variables which have the same number of breaks; and ii) determines the nature of the break process within each group. We present an application involving a five-variate steady-state VAR.
Abstract:
The project aims to achieve two objectives. First, we are analysing the labour market implications of the assumption that firms cannot pay similarly qualified employees differently according to when they joined the firm. For example, if the general situation for workers improves, a firm that seeks to hire new workers may feel it has to pay more to new hires. However, if the firm must pay the same wage to new hires and incumbents due to equal treatment, it would either have to raise the wage of the incumbents or offer new workers a lower wage than it would otherwise. This is very different from the standard assumption in economic analysis that firms are free to treat newly hired workers independently of existing hires. Second, we will use detailed data on individual wages to try to gauge whether (and to what extent) equity is a feature of actual labour markets. To investigate this, we are using two matched employer-employee panel datasets, one from Portugal and the other from Brazil. These unique datasets provide objective records on millions of workers and their firms over a long period of time, so that we can identify which firms employ which workers at each point in time. The datasets also include a large number of firm and worker variables.
Abstract:
The objective of this study is the empirical identification of the monetary policy rules pursued in individual EU countries before and after the launch of the European Monetary Union. In particular, we estimate an augmented version of the Taylor rule (TR) for 25 EU countries in two periods (1992-1998, 1999-2006). While single-equation estimation methods are used to identify the policy rules of individual central banks, a dynamic panel setting is employed for the rule of the European Central Bank. We find that most central banks did follow some interest rate rule, but its form usually differed from the original TR (which proposes that the domestic interest rate responds only to the domestic inflation rate and the output gap). Crucial features of the policy rules in many countries are interest rate smoothing and a response to the foreign interest rate. Responses to domestic macroeconomic variables are missing from the rules of countries with inflexible exchange rate regimes, whose rules consist of mimicking foreign interest rates. While we find a response to long-term interest rates and the exchange rate in the rules of some countries, the importance of monetary growth and asset prices is generally negligible. The Taylor principle (the response of interest rates to the domestic inflation rate must exceed unity as a necessary condition for achieving price stability) is confirmed only in large economies and in economies troubled by unsustainable inflation rates. Finally, the deviations of the actual interest rate from the rule-implied target rate can be interpreted as policy shocks; these deviations often coincided with turbulent periods.
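For concreteness, an illustrative augmented Taylor rule of the general form estimated here, with interest-rate smoothing and a foreign-rate term (the exact regressors vary across the country-specific rules):

```latex
% Sketch of an augmented Taylor rule with partial adjustment (smoothing) and a
% response to the foreign interest rate.
\[
  i_t \;=\; \rho\, i_{t-1} \;+\; (1-\rho)\left(\alpha + \beta\,\pi_t + \gamma\,\tilde{y}_t + \delta\, i^{*}_{t}\right) \;+\; \varepsilon_t ,
\]
% where \pi_t is domestic inflation, \tilde{y}_t the output gap, i^{*}_t the
% foreign interest rate, and the Taylor principle corresponds to \beta > 1.
```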
Abstract:
This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise because macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models that have a large number of parameters to estimate relative to the number of observations. A wide range of approaches aimed at overcoming the resulting problems is surveyed. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail: the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.
Abstract:
The implicit projection algorithm of isotropic plasticity is extended to an objective anisotropic elastic, perfectly plastic model. The recursion formula developed to project the trial stress onto the yield surface is applicable to any nonlinear elastic law and any plastic yield function. A curvilinear transverse isotropic model, based on a quadratic elastic potential and on Hill's quadratic yield criterion, is then developed and implemented in a computer program for bone mechanics applications. The paper concludes with a numerical study of a schematic bone-prosthesis system to illustrate the potential of the model.
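The paper's projection handles a nonlinear elastic law and Hill's anisotropic criterion; as a much simpler stand-in for the same idea of projecting the trial stress back onto the yield surface, here is the classical radial-return update for an isotropic, linear-elastic, perfectly plastic J2 material (material constants are illustrative, not taken from the paper).

```python
# Radial return for isotropic J2 perfect plasticity: compute an elastic trial
# stress and, if it lies outside the yield surface, scale its deviator back onto it.
import numpy as np

def radial_return(strain_inc, stress_old, E=17e3, nu=0.3, sigma_y=120.0):
    """One strain-driven stress update (small strains, 3x3 tensors, MPa)."""
    mu = E / (2 * (1 + nu))
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    I = np.eye(3)

    stress_trial = stress_old + lam * np.trace(strain_inc) * I + 2 * mu * strain_inc
    dev = stress_trial - np.trace(stress_trial) / 3 * I
    q_trial = np.sqrt(1.5 * np.sum(dev * dev))        # von Mises equivalent stress

    if q_trial <= sigma_y:                            # elastic step
        return stress_trial
    # Plastic step: return the stress whose deviator sits exactly on the yield surface.
    return stress_trial - (1 - sigma_y / q_trial) * dev

# Example: a strain increment large enough to trigger yielding
d_eps = np.diag([2e-2, -0.6e-2, -0.6e-2])
print(radial_return(d_eps, np.zeros((3, 3))))
```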
Abstract:
BACKGROUND/OBJECTIVES: There is little objective information regarding the nutrition transition in African countries. We assessed trends in nutrition patterns in the Seychelles between 1989 and 2011. SUBJECTS/METHODS: Population-based samples were obtained in 1989, 1994 and 2011, and participants aged 25-44 were considered in this study (n=493, 599 and 471, respectively). Similar, although not identical, food frequency questionnaires (FFQs) were used in each survey and the variables were collapsed into homogeneous categories for the purpose of this study. RESULTS: Between 1989 and 2011, the consumption frequency of fish (5+/week) decreased from 93 to 74%, whereas the following increased: meat (5+/week) 25 to 51%, fruits (1+/week) 48 to 94%, salty snacks (1+/week) 22 to 64% and sweet snacks (1+/week) 38 to 67% (P<0.001 for all). Consumption frequency decreased for home-brewed alcoholic drinks (1+/week) 16 to 1%, but increased for wine (1+/week) 5 to 33% (both P<0.001). Between 2004 and 2011, consumption frequency decreased for rice (2/day) 62 to 57% and tea (1+/day) 72 to 68%, increased for poultry (1+/week) 86 to 96% (all P<0.01), and did not change for vegetables (70.3 to 69.8%, P=0.65). CONCLUSIONS: The Seychelles is experiencing a nutrition transition characterized by a decreased consumption frequency of traditional staple foods (fish, polished rice), beverages (tea) and inexpensive home brews, and an increased consumption frequency of meat, poultry and snacks. Food patterns also became more varied along with a broader availability of products over the 22-year interval. The health impact of these changes should be further studied.
Abstract:
The aim of this work is to evaluate the capabilities and limitations of chemometric methods and other mathematical treatments applied to spectroscopic data, and more specifically to paint samples. The spectroscopic data are distinctive in that they are multivariate (a few thousand variables) and highly correlated. Statistical methods are used to study and discriminate samples. A collection of 34 red paint samples was measured by infrared and Raman spectroscopy. Data pretreatment and variable selection demonstrated that the use of Standard Normal Variate (SNV), together with removal of the noisy variables by selecting the wavenumber ranges 650-1830 cm−1 and 2730-3600 cm−1, provided the optimal results for the infrared analysis. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) were then used as exploratory techniques to provide evidence of structure in the data, to find clusters, or to detect outliers. With the FTIR spectra, the principal components (PCs) correspond to binder types and the presence/absence of calcium carbonate; 83% of the total variance is explained by the first four PCs. As for the Raman spectra, we observe six different clusters corresponding to the different pigment compositions when plotting the first two PCs, which account for 37% and 20% of the total variance, respectively. In conclusion, the use of chemometrics for the forensic analysis of paints provides a valuable tool for objective decision-making, a reduction of possible classification errors, and better efficiency, yielding robust results with time-saving data treatments.
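A small sketch of the pretreatment-plus-exploration pipeline named above (SNV followed by PCA); the spectra file is hypothetical, and the 650-1830 / 2730-3600 cm−1 window selection is left out.

```python
# SNV pretreatment (centre and scale each spectrum individually) followed by PCA.
# The input file is hypothetical: rows = samples, columns = wavenumber variables.
import numpy as np
from sklearn.decomposition import PCA

spectra = np.loadtxt("ftir_spectra.csv", delimiter=",")

snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

pca = PCA(n_components=4).fit(snv)
scores = pca.transform(snv)
print("variance explained by the first four PCs:", pca.explained_variance_ratio_)
# Plotting the first two score columns is where binder-type or pigment clusters
# would be inspected.
```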
Abstract:
STUDY OBJECTIVE: To determine the efficacy of melatonin on sleep problems in children with autistic spectrum disorder (ASD) and fragile X syndrome (FXS). METHODS: A 4-week randomized, double-blind, placebo-controlled crossover trial was conducted following a 1-week baseline period. Either melatonin, 3 mg, or placebo was given to participants for 2 weeks and then alternated for another 2 weeks. Sleep variables, including sleep duration, sleep-onset time, sleep-onset latency, and the number of night awakenings, were recorded using an Actiwatch and from sleep diaries completed by parents. All participants had been thoroughly assessed for ASD and also had DNA testing for the diagnosis of FXS. RESULTS: Data were successfully obtained from the 12 of 18 subjects who completed the study (11 males; age range 2 to 15.25 years; mean 5.47, SD 3.6). Five participants met diagnostic criteria for ASD, 3 for FXS alone, 3 for FXS and ASD, and 1 for the fragile X premutation. Eight out of 12 had melatonin first. The conclusions from a nonparametric repeated-measures technique indicate that mean night sleep duration was longer on melatonin than placebo by 21 minutes (p = .02), mean sleep-onset latency was shorter by 28 minutes (p = .0001), and mean sleep-onset time was earlier by 42 minutes (p = .02). CONCLUSION: The results of this study support the efficacy and tolerability of melatonin treatment for sleep problems in children with ASD and FXS.
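The abstract names only "a nonparametric repeated-measures technique"; as a simpler, hedged stand-in, a within-subject Wilcoxon signed-rank comparison of sleep duration on melatonin versus placebo would look like the sketch below (the numbers are synthetic, not the study data).

```python
# Paired nonparametric comparison of sleep duration (minutes) on melatonin vs
# placebo for the same 12 children; values are synthetic and purely illustrative.
from scipy.stats import wilcoxon

melatonin = [540, 565, 530, 555, 548, 572, 560, 545, 538, 566, 552, 549]
placebo   = [519, 540, 512, 536, 525, 550, 542, 520, 515, 545, 530, 528]

stat, p = wilcoxon(melatonin, placebo)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```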