796 results for Empirical Algorithm Analysis
Abstract:
This dissertation analyses public opinion towards the welfare state across 29 European countries. Based on an interdisciplinary approach combining social psychological, sociological, and public opinion perspectives on political opinion formation, it investigates how social position and shared beliefs shape the perceived legitimacy of welfare institutions, and how social contexts affect the processes of opinion formation. Drawing on social representations theory, as well as socialization and self-interest approaches, the dissertation analyses the role of social position in lay support for institutional solidarity. Normative beliefs, defined as preferred views regarding the organisation of social relations, mediate the effect of social position on welfare support. In addition, drawing on the public opinion literature, the dissertation analyses opinion formation as a function of country-level structural factors (e.g., level of social spending, unemployment) and ideological factors (e.g., level of meritocracy). The dissertation comprises two theoretical and four empirical chapters. Three of the empirical chapters use data from the European Social Survey 2008.
Using multilevel and typological approaches, the dissertation contributes to the welfare attitude literature by showing that normative beliefs, such as distrust or egalitarianism, function as underlying mechanisms that link social position to policy attitudes (Chapter 3), and that characteristics of national contexts influence the processes of political opinion formation (Chapters 3 and 4). Chapter 5 proposes and predicts a typology of the relationship between attitudes towards solidarity and attitudes towards control, reflecting the two central domains of government intervention. Finally, Chapter 6 examines welfare support in the realm of action and social protest, using data from a survey of Spanish Indignados activists. The findings of this dissertation inform contemporary debates about welfare state legitimacy and retrenchment.
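As an illustration of the multilevel design described above (individuals nested in 29 countries, with individual-level social position, a normative belief, and a country-level covariate), here is a minimal sketch in Python using statsmodels. All variable names and the simulated data are hypothetical placeholders, not the dissertation's actual ESS 2008 measures.

```python
# Minimal sketch of a two-level model: individuals nested in countries,
# a random country intercept, and fixed effects for individual- and
# country-level predictors. All names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_per = 29, 100
country = np.repeat(np.arange(n_countries), n_per)
income = rng.normal(size=n_countries * n_per)          # social position
egalitarianism = rng.normal(size=n_countries * n_per)  # normative belief
spending = rng.normal(size=n_countries)[country]       # country-level factor
u = rng.normal(scale=0.5, size=n_countries)[country]   # country intercepts
support = (0.3 * egalitarianism - 0.2 * income + 0.4 * spending + u
           + rng.normal(size=n_countries * n_per))

df = pd.DataFrame(dict(support=support, income=income,
                       egalitarianism=egalitarianism,
                       spending=spending, country=country))
model = smf.mixedlm("support ~ income + egalitarianism + spending",
                    df, groups=df["country"])
print(model.fit().summary())
```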
Abstract:
The class of Schoenberg transformations, which embed Euclidean distances into higher-dimensional Euclidean spaces, is presented and derived from theorems on positive definite and conditionally negative definite matrices. Original results on the arc lengths, angles and curvature of the transformations are proposed and visualized on artificial data sets by classical multidimensional scaling. A distance-based discriminant algorithm and a robust multidimensional centroid estimate illustrate the theory, which is closely connected to the Gaussian kernels of machine learning.
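A minimal sketch of the pipeline described above, assuming one standard member of the Schoenberg class: squared Euclidean distances are passed through phi(d) = (1 - exp(-lambda*d))/lambda (the member connected to the Gaussian kernel), and the transformed configuration is visualized by classical multidimensional scaling. The parameter lambda and the artificial data are illustrative.

```python
# Minimal sketch: apply a Schoenberg transformation to squared Euclidean
# distances, then embed the result by classical MDS (double centering +
# eigendecomposition). The transformation parameter lam is illustrative.
import numpy as np

def classical_mds(D2, k=2):
    """Classical MDS on a matrix of squared distances D2."""
    n = D2.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * H @ D2 @ H                      # doubly centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]           # top-k eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                               # artificial data
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)        # squared distances

lam = 0.5
sq_transformed = (1 - np.exp(-lam * sq)) / lam             # Schoenberg transform

Y = classical_mds(sq_transformed)                          # embedded 2-D configuration
```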
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis uses methods including generalized least squares, spatial filters, wavelet-revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some that violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but it had poorly controlled Type I error rates and so did not show the improvements in performance under model selection seen with the above methods. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
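A minimal sketch of the simulation logic, assuming an exponential spatial covariance for the errors: a response is generated on a grid with spatially autocorrelated noise, then ordinary least squares is compared with a generalized least squares fit that is supplied the true error covariance. The covariance range parameter and grid size are illustrative choices.

```python
# Minimal sketch: simulate spatially autocorrelated errors on a grid and
# compare OLS with GLS given the true error covariance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
coords = np.array([(i, j) for i in range(15) for j in range(15)], float)
n = len(coords)

# Exponential spatial covariance for the errors
dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
Sigma = np.exp(-dists / 3.0)

x = rng.normal(size=n)
beta = 0.5
eps = np.linalg.cholesky(Sigma) @ rng.normal(size=n)  # correlated noise
y = beta * x + eps

X = sm.add_constant(x)
print("OLS slope:", sm.OLS(y, X).fit().params[1])
print("GLS slope:", sm.GLS(y, X, sigma=Sigma).fit().params[1])
```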
Abstract:
Due to the existence of free software and pedagogical guides, the use of data envelopment analysis (DEA) has been further democratized in recent years. Nowadays, it is quite usual for practitioners and decision makers with little or no knowledge of operational research to run their own efficiency analyses themselves. Within DEA, several alternative models allow for an environment adjustment. Five alternative models, each of them easily accessible to and achievable by practitioners and decision makers, are applied to the empirical case of the 90 primary schools of the State of Geneva, Switzerland. As the State of Geneva practices an upstream positive discrimination policy towards disadvantaged schools, this empirical case is particularly appropriate for an environment adjustment. The majority of the alternative DEA models deliver divergent results. This is a matter of concern for applied researchers and a matter of confusion for practitioners and decision makers. From a political standpoint, these diverging results could lead to potentially opposite decisions.
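For concreteness, here is a minimal sketch of one of the simplest DEA variants, an input-oriented, constant-returns-to-scale envelopment model, solved as a linear program per decision-making unit with scipy. The data are random placeholders, not the Geneva school inputs and outputs, and the environment adjustment itself is not shown.

```python
# Minimal sketch of input-oriented, constant-returns-to-scale DEA:
# for each DMU, minimize theta subject to a convex combination of the
# other units dominating it on inputs (scaled by theta) and outputs.
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, unit):
    """Efficiency of one DMU. X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimize theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                 # sum_j lam_j * x_ij <= theta * x_i0
        A_ub.append(np.concatenate(([-X[unit, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(Y.shape[1]):                 # sum_j lam_j * y_rj >= y_r0
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[unit, r])
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
    return res.x[0]

X = np.random.default_rng(2).uniform(1, 10, (90, 2))   # 90 units, 2 inputs
Y = np.random.default_rng(3).uniform(1, 10, (90, 1))   # 1 output
print("efficiency of unit 0:", dea_ccr_input(X, Y, 0))
```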
Abstract:
Traffic safety engineers are among the early adopters of Bayesian statistical tools for analyzing crash data. As in many other areas of application, empirical Bayes methods were their first choice, perhaps because they represent an intuitively appealing, yet relatively easy to implement, alternative to purely classical approaches. With the enormous progress in numerical methods made in recent years and with the availability of free, easy-to-use software that permits implementing a fully Bayesian approach, however, there is now ample justification to progress towards fully Bayesian analyses of crash data. The fully Bayesian approach, in particular as implemented via multi-level hierarchical models, has many advantages over the empirical Bayes approach. In a full Bayesian analysis, prior information and all available data are seamlessly integrated into posterior distributions on which practitioners can base their inferences. All uncertainties are thus accounted for in the analyses, and there is no need to pre-process data to obtain Safety Performance Functions and other such prior estimates of the effect of covariates on the outcome of interest. In this light, fully Bayesian methods may well be less costly to implement and may result in safety estimates with more realistic standard errors. In this manuscript, we present the full Bayesian approach to analyzing traffic safety data and focus on highlighting the differences between the empirical Bayes and the full Bayes approaches. We use an illustrative example to discuss a step-by-step Bayesian analysis of the data and to show some of the types of inferences that are possible within the full Bayesian framework.
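A minimal sketch of a fully Bayesian, multi-level hierarchical model of the kind contrasted above with empirical Bayes, written with PyMC. The Poisson likelihood, the priors, and the single exposure covariate are illustrative assumptions, not the manuscript's worked example.

```python
# Minimal sketch of a hierarchical Poisson model for crash counts at
# several sites: fixed intercept and exposure effect, plus site-level
# random effects whose spread is itself estimated from the data.
import numpy as np
import pymc as pm

rng = np.random.default_rng(4)
n_sites = 30
log_aadt = rng.normal(9, 0.5, n_sites)          # hypothetical exposure covariate
crashes = rng.poisson(np.exp(-6 + 0.8 * log_aadt))

with pm.Model() as model:
    alpha = pm.Normal("alpha", 0, 10)           # intercept
    beta = pm.Normal("beta", 0, 2)              # effect of exposure
    sigma = pm.Exponential("sigma", 1)          # between-site heterogeneity
    site = pm.Normal("site", 0, sigma, shape=n_sites)  # site random effects
    lam = pm.math.exp(alpha + beta * log_aadt + site)
    pm.Poisson("y", mu=lam, observed=crashes)
    idata = pm.sample(1000, tune=1000)          # full posterior, all uncertainty

print(idata.posterior["beta"].mean())
```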
Abstract:
OBJECTIVE: To evaluate an automated seizure detection (ASD) algorithm in EEGs with periodic and other challenging patterns. METHODS: Selected EEGs recorded in patients over 1 year old were classified into four groups: A. Periodic lateralized epileptiform discharges (PLEDs) with intermixed electrical seizures. B. PLEDs without seizures. C. Electrical seizures and no PLEDs. D. No PLEDs or seizures. Recordings were analyzed by the Persyst P12 software and compared to the raw EEG as interpreted by two experienced neurophysiologists; positive percent agreement (PPA) and false-positive rates per hour (FPR) were calculated. RESULTS: We assessed 98 recordings (Group A = 21 patients; B = 29; C = 17; D = 31). Total duration was 82.7 h (median: 1 h), containing 268 seizures. The software detected 204 (76.1%) seizures; all ictal events were captured in 29/38 (76.3%) patients, and in only 3 (7.7%) were no seizures detected. Median PPA was 100% (range 0-100; interquartile range 50-100), and the median FPR was 0/h (range 0-75.8; interquartile range 0-4.5); however, lower performance was seen in the groups containing periodic discharges. CONCLUSION: This analysis provides data regarding the yield of the ASD algorithm in a particularly difficult subset of EEG recordings, showing that periodic discharges may bias the results. SIGNIFICANCE: Ongoing refinements in this technique might enhance its utility and lead to more extensive application.
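A minimal sketch of the two agreement measures, assuming a simple interval-overlap matching rule between expert-marked seizures and software detections (the study's exact matching criterion is not specified here).

```python
# Minimal sketch: positive percent agreement (fraction of expert-marked
# seizures overlapping at least one detection) and false-positive rate
# per hour (detections overlapping no expert-marked seizure, divided by
# record duration). Events are (start, end) tuples in seconds.
def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def ppa_and_fpr(reference, detections, duration_hours):
    detected = sum(any(overlaps(r, d) for d in detections) for r in reference)
    false_pos = sum(not any(overlaps(d, r) for r in reference) for d in detections)
    ppa = 100.0 * detected / len(reference) if reference else float("nan")
    return ppa, false_pos / duration_hours

ref = [(120, 180), (600, 660)]       # expert-marked seizures
det = [(118, 175), (900, 905)]       # software detections
print(ppa_and_fpr(ref, det, duration_hours=1.0))  # (50.0, 1.0)
```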
Abstract:
Evidence exists that many natural phenomena are better described as fractals. Although fractals are very useful for describing nature, it is also appropriate to review the concept of the random fractal in finance. Given the extraordinary importance of Brownian motion in physics, chemistry and biology, we consider its generalization, fractional Brownian motion, introduced by Mandelbrot. The main goal of this work is to analyse the existence of long-range dependence in instantaneous forward rates in different financial markets. Concretely, we perform an empirical analysis of the Spanish, Mexican and U.S. interbank interest rates. We work with three time series of daily data corresponding to one-day operations from 28 March 1996 to 21 May 2002. Among the existing tests for this purpose, we apply the methodology proposed by Taqqu, Teverovsky and Willinger (1995).
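A minimal sketch of one estimator from the cited methodology, the aggregated-variance method: for a long-range dependent series, the variance of block means scales as m^(2H-2) in the block size m, so the slope of a log-log fit yields the Hurst exponent H. The synthetic series and block sizes are illustrative.

```python
# Minimal sketch of the aggregated-variance Hurst estimator: aggregate
# the series into blocks of size m, compute the variance of block means,
# and read H off the log-log slope (slope = 2H - 2).
import numpy as np

def hurst_aggvar(x, block_sizes):
    log_m, log_v = [], []
    for m in block_sizes:
        k = len(x) // m
        means = x[: k * m].reshape(k, m).mean(axis=1)  # block means
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1 + slope / 2

x = np.random.default_rng(5).normal(size=4096)  # i.i.d. noise: expect H ~ 0.5
print(hurst_aggvar(x, block_sizes=[4, 8, 16, 32, 64, 128]))
```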
Abstract:
The primary goal of this project is to demonstrate the accuracy and utility of a freezing drizzle algorithm that can be implemented on roadway environmental sensing systems (ESSs). The problems related to the occurrence of freezing precipitation range from simple traffic delays to major accidents involving fatalities. Freezing drizzle can also have economic impacts on communities through lost work hours, vehicular damage, and downed power lines. Transportation agencies have means to perform preventive and reactive treatments of roadways, but freezing drizzle can be difficult to forecast accurately or even detect, because weather radar and surface observation networks observe these conditions poorly. The detection of freezing precipitation is problematic and requires special instrumentation and analysis. The Federal Aviation Administration's (FAA's) development of aircraft anti-icing and deicing technologies has led to a freezing drizzle algorithm that utilizes air temperature data and a specialized sensor capable of detecting ice accretion. At present, however, roadway ESSs are not capable of reporting freezing drizzle. This study investigates the use of the methods developed for the FAA and the National Weather Service (NWS) in a roadway environment to detect the occurrence of freezing drizzle using a combination of icing detection equipment and available ESS sensors. The study applied the algorithm initially developed, and further modified, for the FAA for aircraft icing to data from standard roadway ESSs. This work lays the foundation for addressing the central question of interest to winter maintenance professionals: whether roadside freezing precipitation detection (e.g., icing detection) sensors can be used to determine the occurrence of pavement icing during freezing precipitation events and the rates at which it occurs.
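A minimal sketch of the kind of decision logic involved, assuming illustrative thresholds: freezing drizzle is flagged when the icing sensor reports accretion at subfreezing air temperature while conventional precipitation sensing reports nothing. These rules are placeholders, not the FAA/NWS algorithm's actual parameters.

```python
# Minimal sketch of a freezing drizzle flag from ESS observations.
# Thresholds and the gauge check are illustrative assumptions, not the
# FAA/NWS algorithm's actual decision rules.
def freezing_drizzle(air_temp_c, ice_accretion_rate, precip_detected_by_gauge):
    """Return True if conditions are consistent with freezing drizzle."""
    subfreezing = air_temp_c <= 0.0
    accreting = ice_accretion_rate > 0.0          # specialized icing sensor
    # Drizzle-sized drops are poorly seen by gauges and radar, so icing
    # without measurable precipitation is the telltale signature.
    return subfreezing and accreting and not precip_detected_by_gauge

print(freezing_drizzle(-1.5, 0.2, False))  # True
```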
Abstract:
Performance-related pay within public organizations is continuing to spread. Although it can help to strengthen an entrepreneurial spirit in civil servants, its implementation is marred by technical, financial, managerial and cultural problems. This article identifies an added problem, namely the contradiction that exists between a managerial discourse that emphasizes the team and collective performance, on the one hand, and the use of appraisal and reward tools that are above all individual, on the other. Based on an empirical survey carried out within Swiss public organizations, the analysis shows that the team is currently rarely taken into account and singles out the principal routes towards an integrated system for the management and rewarding of civil servants.
Abstract:
One major methodological problem in the analysis of sequence data is the determination of the costs from which distances between sequences are derived. Although this problem is currently not optimally dealt with in the social sciences, it has some similarity with problems that have been solved in bioinformatics over the past three decades. In this article, the authors propose an optimization of substitution and deletion/insertion costs based on computational methods. The authors provide an empirical way of determining costs for cases, frequent in the social sciences, in which theory does not clearly favour one cost scheme over another. Using three distinct data sets, the authors tested the distances and cluster solutions produced by the new cost scheme against solutions based on cost schemes associated with other research strategies. The proposed method performs well compared with other cost-setting strategies, while alleviating the problem of justifying cost schemes.
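A minimal sketch of the general approach, assuming the common empirical strategy of deriving substitution costs from observed transition rates (the article's own cost optimization is more elaborate); distances then come from the standard edit-distance dynamic program.

```python
# Minimal sketch of optimal matching with data-driven substitution costs:
# cost(i, j) = 2 - p(j|i) - p(i|j) from observed transition rates, then
# the usual edit-distance dynamic program with an indel cost.
import numpy as np

def transition_costs(sequences, states):
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[idx[a], idx[b]] += 1
    p = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    cost = 2 - p - p.T
    np.fill_diagonal(cost, 0)
    return cost, idx

def om_distance(s, t, cost, idx, indel=1.0):
    m, n = len(s), len(t)
    D = np.zeros((m + 1, n + 1))
    D[:, 0] = np.arange(m + 1) * indel
    D[0, :] = np.arange(n + 1) * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = min(D[i - 1, j] + indel,
                          D[i, j - 1] + indel,
                          D[i - 1, j - 1] + cost[idx[s[i - 1]], idx[t[j - 1]]])
    return D[m, n]

seqs = ["EEEUUEE", "EEEEEEE", "UUEEEUU"]   # toy employment (E) / unemployment (U) spells
cost, idx = transition_costs(seqs, states="EU")
print(om_distance(seqs[0], seqs[1], cost, idx))
```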
Abstract:
The resilient modulus (MR) input parameters in the Mechanistic-Empirical Pavement Design Guide (MEPDG) program have a significant effect on projected pavement performance. The MEPDG program uses three different levels of inputs depending on the desired level of accuracy. The primary objective of this research was to develop a laboratory testing program utilizing the Iowa DOT servo-hydraulic testing system for evaluating typical Iowa unbound materials and to establish a database of input values for MEPDG analysis. This was achieved by carrying out a detailed laboratory testing program, designed in accordance with the AASHTO T307 resilient modulus test protocol, using common Iowa unbound materials. The program included laboratory tests to characterize the basic physical properties of the unbound materials, specimen preparation, and repeated load triaxial tests to determine the resilient modulus. The MEPDG resilient modulus input parameter library for typical Iowa unbound pavement materials was established from the repeated load triaxial MR test results. This library includes the non-linear, stress-dependent resilient modulus model coefficient values for level 1 analysis, the unbound material property values correlated to resilient modulus for level 2 analysis, and typical resilient modulus values for level 3 analysis. The resilient modulus input parameter library can be used when designing low-volume roads in the absence of any basic soil testing. Based on the results of this study, the use of level 2 analysis for the MEPDG resilient modulus input is recommended, since the repeated load triaxial test required for level 1 analysis is complicated, time-consuming, expensive, and requires sophisticated equipment and skilled operators.
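For reference, the non-linear, stress-dependent level 1 model mentioned above is commonly written MR = k1 * pa * (theta/pa)^k2 * (tau_oct/pa + 1)^k3, with bulk stress theta, octahedral shear stress tau_oct, and atmospheric pressure pa. Here is a minimal sketch of fitting its coefficients to repeated load triaxial results; the data points are placeholders, not the Iowa test values.

```python
# Minimal sketch: fit the stress-dependent resilient modulus model
# MR = k1*pa*(theta/pa)^k2*(tau_oct/pa + 1)^k3 to placeholder triaxial data.
import numpy as np
from scipy.optimize import curve_fit

PA = 101.325  # atmospheric pressure, kPa

def mr_model(stress, k1, k2, k3):
    theta, tau_oct = stress                  # bulk and octahedral shear stress
    return k1 * PA * (theta / PA) ** k2 * (tau_oct / PA + 1) ** k3

theta = np.array([100.0, 200.0, 300.0, 400.0])   # kPa, placeholder
tau = np.array([20.0, 40.0, 60.0, 80.0])         # kPa, placeholder
mr = np.array([60e3, 85e3, 100e3, 110e3])        # kPa, placeholder

(k1, k2, k3), _ = curve_fit(mr_model, (theta, tau), mr, p0=(500, 0.5, -0.1))
print(k1, k2, k3)
```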
Abstract:
The objective of this study is to systematically evaluate the Iowa Department of Transportation's (DOT's) existing Pavement Management Information System (PMIS) with respect to the input information required for Mechanistic-Empirical Pavement Design Guide (MEPDG) rehabilitation analysis and design. To accomplish this objective, all available PMIS data for interstate and primary roads in Iowa were retrieved from the Iowa DOT PMIS. The retrieved data were evaluated with respect to the input requirements and outputs of the latest version of the MEPDG software (version 1.0). The input parameters that are required for MEPDG HMA rehabilitation design but currently unavailable in the Iowa DOT PMIS were identified. The differences between the Iowa DOT PMIS and the MEPDG in the specific measurement metrics, and their units, used for some of the pavement performance measures were identified and discussed. Based on the results of this study, it is recommended that the Iowa DOT PMIS be updated, if possible, to include the identified parameters that are currently unavailable but required for MEPDG rehabilitation design. Similarly, the measurement units of distress survey results in the Iowa DOT PMIS should be revised to correspond to those of the MEPDG performance predictions.
Abstract:
The objective of this research is to determine whether the nationally calibrated performance models used in the Mechanistic-Empirical Pavement Design Guide (MEPDG) provide a reasonable prediction of actual field performance, and whether the desired accuracy or correspondence exists between predicted and monitored performance under Iowa conditions. A comprehensive literature review was conducted to identify the MEPDG input parameters and the MEPDG verification/calibration process. The sensitivities of the MEPDG input parameters to predictions were studied using different versions of the MEPDG software. Based on the literature review and sensitivity analysis, a detailed verification procedure was developed. A total of sixteen different types of pavement sections across Iowa, not used for national calibration in NCHRP 1-47A, were selected. A database of MEPDG inputs and the actual pavement performance measures for the selected pavement sites was prepared for verification. The accuracy of the MEPDG performance models under Iowa conditions was statistically evaluated. The verification testing showed promising results in terms of the MEPDG's prediction accuracy for Iowa conditions. Recalibrating the MEPDG performance models to Iowa conditions is recommended to improve the accuracy of predictions.
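A minimal sketch of the kind of statistical evaluation described, assuming a simple predicted-versus-measured comparison with bias, RMSE, and a paired t-test; the arrays are placeholders for the Iowa section data.

```python
# Minimal sketch: compare MEPDG-predicted and field-measured distress
# with bias, RMSE and a paired t-test on matched sections.
import numpy as np
from scipy import stats

predicted = np.array([0.12, 0.30, 0.25, 0.40, 0.18])  # e.g., rut depth, placeholder
measured = np.array([0.10, 0.35, 0.22, 0.38, 0.25])

bias = np.mean(predicted - measured)
rmse = np.sqrt(np.mean((predicted - measured) ** 2))
t_stat, p_value = stats.ttest_rel(predicted, measured)  # H0: no systematic difference
print(f"bias={bias:.3f}, RMSE={rmse:.3f}, p={p_value:.2f}")
```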
Abstract:
Introduction: Survival of children born prematurely or with very low birth weight has increased dramatically, but the long-term developmental outcome remains unknown. Many children have deficits in cognitive capacities, in particular involving executive domains, and those disabilities are likely to involve a central nervous system deficit. To understand their neurostructural origin, we use DTI. Structurally segregated and functionally specialized regions of the cerebral cortex are interconnected by a dense network of axonal pathways. We noninvasively map these pathways across cortical hemispheres and construct normalized structural connection matrices derived from DTI MR tractography. Group comparisons of brain connectivity reveal significant changes in fiber density in children with poor intrauterine growth and extremely premature children (gestational age < 28 weeks at birth) compared to control subjects. These changes suggest a link between cortico-axonal pathways and the central nervous system deficit. Methods: Sixty prematurely born children (5-6 years old) were scanned on a clinical 3T scanner (Magnetom Trio, Siemens Medical Solutions, Erlangen, Germany) at two hospitals (HUG, Geneva and CHUV, Lausanne). For each subject, T1-weighted MPRAGE images (TR/TE=2500/2.91, TI=1100, resolution=1x1x1mm, matrix=256x154) and DTI images (30 directions, TR/TE=10200/107, in-plane resolution=1.8x1.8x2mm, 64 axial slices, matrix=112x112) were acquired. Parents provided written consent, with prior ethics board approval. The extraction of the whole-brain structural connectivity matrix was performed following Cammoun (2009) and Hagmann (2008). The MPRAGE images were registered to the non-weighted DTI images using an affine registration, and WM-GM segmentation was performed on them. In order to have equal anatomical localization among subjects, 66 cortical regions with anatomical landmarks were created using curvature information, i.e. sulci and gyri (Cammoun et al., 2007; Fischl et al., 2004; Desikan et al., 2006), with the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/). Tractography was performed in WM using an algorithm especially designed for DTI/DSI data (Hagmann et al., 2007), and both sets of information were then combined into a matrix. Each row and column of the matrix corresponds to a particular ROI, and each cell of index (i,j) represents the fiber density of the bundle connecting ROIs i and j. Subdividing each cortical region, we obtained 4 connectivity matrices of different resolutions (33, 66, 125 and 250 ROIs/hemisphere) for each subject. Subjects were sorted into 3 groups, namely (1) control, (2) intrauterine growth restriction (IUGR), and (3) extreme prematurity (EP), depending on their gestational age, weight and percentile-weight score at birth. Group-to-group comparisons were performed between groups (1)-(2) and (1)-(3). The mean age at examination was similar across the three groups. Results: Quantitative analyses were performed between groups to determine fiber density differences. For each group, a mean connectivity matrix at the 33 ROIs/hemisphere resolution was computed. In addition, for all matrix resolutions (33, 66, 125, 250 ROIs/hemisphere), the number of bundles was computed and averaged. As seen in figure 1, EP and IUGR subjects present an overall reduction of fiber density in both interhemispheric and intrahemispheric connections. This is given quantitatively in table 1. IUGR subjects present a higher percentage of missing fiber bundles than EP subjects when compared to control subjects (~16% against 11%).
When comparing both groups to control subjects, for the EP subjects the occipito-parietal regions seem less interhemispherically connected, whilst the intrahemispheric networks present a lack of fiber density in the limbic system. Children born with IUGR have reductions in interhemispheric connections similar to those of the EP group. However, the connections of the cuneus and precuneus with the precentral and paracentral lobules are even lower than in the EP case. For the intrahemispheric connections, the IUGR group presents a loss of fiber density between the deep gray matter structures (striatum) and the frontal and middle frontal poles, connections typically involved in the control of executive functions. For the qualitative analysis, a t-test comparing the number of bundles (p-value < 0.05) gave some preliminary significant results (figure 2). Again, even if both IUGR and EP subjects appear to have significantly fewer connections than the control subjects, the IUGR cohort seems to present a greater lack of fiber density, especially in connections linking the cuneus, precuneus and parietal areas. In terms of fiber density, preliminary Wilcoxon tests seem to validate the hypothesis set by the previous analyses. Conclusions: The goal of this study was to determine the effect of extreme prematurity and poor intrauterine growth on neurostructural development at the age of 6 years. These data indicate that differences in connectivity may well be the basis for the neurostructural and neuropsychological deficits described in these populations in the absence of overt brain lesions (Inder TE, 2005; Borradori-Tolsa, 2004; Dubois, 2008). Indeed, we suggest that IUGR and prematurity lead to an alteration of connectivity between brain structures, especially in the occipito-parietal and frontal lobes for EP and the frontal and middle temporal poles for IUGR. Overall, IUGR children show a greater loss of connectivity in the overall connectivity matrix than EP children. In both cases, the localized alteration of connectivity suggests a direct link between cortico-axonal pathways and the central nervous system deficit. Our next step is to link these connectivity alterations to performance in executive function tests.
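A minimal sketch of the edge-wise group comparison described above, assuming a Wilcoxon rank-sum (Mann-Whitney) test per connectivity matrix cell; the matrices are random placeholders for the fiber density data, not the study's 33 ROIs/hemisphere measurements.

```python
# Minimal sketch: compare fiber densities between a patient group and
# controls, edge by edge, with a non-parametric rank-sum test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_roi = 66
controls = rng.gamma(2.0, 1.0, size=(20, n_roi, n_roi))   # placeholder densities
iugr = rng.gamma(1.7, 1.0, size=(15, n_roi, n_roi))

p_values = np.ones((n_roi, n_roi))
for i in range(n_roi):
    for j in range(i + 1, n_roi):
        _, p = stats.mannwhitneyu(controls[:, i, j], iugr[:, i, j])
        p_values[i, j] = p_values[j, i] = p

print("edges with p < 0.05:",
      int((p_values[np.triu_indices(n_roi, 1)] < 0.05).sum()))
```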