862 results for Context-based


Relevance: 30.00%

Abstract:

Over thirty years ago, Leamer (1983) - among many others - expressed doubts about the quality and usefulness of empirical analyses for the economic profession by stating that "hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else's data analyses seriously" (p. 37). Improvements in data quality, more robust estimation methods and the evolution of better research designs seem to make that assertion no longer justifiable (see Angrist and Pischke (2010) for a recent response to Leamer's essay). The economic profession and policy makers alike often rely on empirical evidence as a means to investigate policy-relevant questions. The approach of using scientifically rigorous and systematic evidence to identify policies and programs that are capable of improving policy-relevant outcomes is known under the increasingly popular notion of evidence-based policy. Evidence-based economic policy often relies on randomized or quasi-natural experiments in order to identify causal effects of policies. These can require relatively strong assumptions or raise concerns about external validity. In the context of this thesis, potential concerns are, for example, endogeneity of policy reforms with respect to the business cycle in the first chapter, the trade-off between precision and bias in the regression-discontinuity setting in chapter 2, or non-representativeness of the sample due to self-selection in chapter 3. While the identification strategies are very useful for gaining insights into the causal effects of specific policy questions, transforming the evidence into concrete policy conclusions can be challenging. Policy development should therefore rely on the systematic evidence of a whole body of research on a specific policy question rather than on a single analysis. In this sense, this thesis cannot and should not be viewed as a comprehensive analysis of specific policy issues but rather as a first step towards a better understanding of certain aspects of a policy question. The thesis applies new and innovative identification strategies to policy-relevant and topical questions in the fields of labor economics and behavioral environmental economics. Each chapter relies on a different identification strategy. In the first chapter, we employ a difference-in-differences approach to exploit a quasi-experimental change in the entitlement to the maximum unemployment benefit duration in order to identify the medium-run effects of reduced benefit durations on post-unemployment outcomes. Shortening benefit duration carries a double dividend: it generates fiscal benefits without deteriorating the quality of job matches. On the contrary, shortened benefit durations improve medium-run earnings and employment, possibly by containing the negative effects of skill depreciation or stigmatization. While the first chapter provides only indirect evidence on the underlying behavioral channels, in the second chapter I develop a novel approach that makes it possible to learn about the relative importance of the two key margins of job search - reservation wage choice and search effort. In the framework of a standard non-stationary job search model, I show how the exit rate from unemployment can be decomposed in a way that is informative about reservation wage movements over the unemployment spell. The empirical analysis relies on a sharp discontinuity in unemployment benefit entitlement, which can be exploited in a regression-discontinuity approach to identify the effects of extended benefit durations on unemployment and on survivor functions. I find evidence that points to an important role of reservation wage choices in job search behavior. This can have direct implications for the optimal design of unemployment insurance policies. The third chapter - while thematically detached from the other chapters - addresses one of the major policy challenges of the 21st century: climate change and resource consumption. Many governments have recently put energy efficiency at the top of their agendas. While pricing instruments aimed at regulating energy demand have often been found to be short-lived and difficult to enforce politically, the focus of energy conservation programs has shifted towards behavioral approaches - such as the provision of information or social norm feedback. The third chapter describes a randomized controlled field experiment in which we assess the effectiveness of different types of feedback on residential electricity consumption. We find that detailed and real-time feedback caused persistent electricity reductions on the order of 3 to 5% of daily electricity consumption. Social norm information can also generate substantial electricity savings when designed appropriately. The findings suggest that behavioral approaches constitute an effective and relatively cheap way of improving residential energy efficiency.
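The difference-in-differences logic used in the first chapter can be summarized by a generic two-group, two-period specification; the notation below is illustrative and is not taken from the thesis itself:

$$
y_{it} = \alpha + \beta\, T_i + \gamma\, \mathrm{Post}_t + \delta\,(T_i \times \mathrm{Post}_t) + \varepsilon_{it},
$$

where $T_i$ indicates individuals affected by the benefit-duration reform, $\mathrm{Post}_t$ indicates the post-reform period, and $\delta$ is the treatment effect identified under the common-trends assumption.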

Relevance: 30.00%

Abstract:

Nanomotors are nanoscale devices capable of converting energy into movement and forces. Among them, self-propelled nanomotors offer considerable promise for developing new bioanalytical and biosensing strategies based on the direct isolation of target biomolecules or on changes in their movement in the presence of target analytes. The main achievement of this project is the development of receptor-functionalized nanomotors that offer direct and rapid target detection, isolation and transport from raw biological samples without preparatory and washing steps. For example, microtube engines functionalized with aptamer, antibody, lectin and enzyme receptors were used for the direct isolation of analytes of biomedical interest, including proteins and whole cells, among others. A target protein was also isolated from a complex sample by using an antigen-functionalized microengine navigating in the reservoirs of a lab-on-a-chip device. The new nanomotor-based biomarker detection strategy not only offers a highly sensitive, rapid, simple and low-cost alternative for the isolation and transport of target molecules, but also represents a new dimension of analytical information based on motion. The recognition events can be easily visualized with an optical microscope (without any sophisticated analytical instrumentation) to reveal the presence and concentration of the target. The use of artificial nanomachines has proven useful not only for (bio)recognition and (bio)transport but also for the detection and remediation of environmental contamination. In this context, micromotors modified with a superhydrophobic layer effectively interacted with, captured, transported and removed oil droplets from oil-contaminated samples. Finally, a unique micromotor-based strategy for water-quality testing, which mimics live-fish water-quality testing and is based on changes in the propulsion behavior of artificial biocatalytic microswimmers in the presence of aquatic pollutants, was also developed. The attractive features of the new micromachine-based target isolation and signal transduction protocols developed in this project offer numerous potential applications in biomedical diagnostics, environmental monitoring, and forensic analysis.

Relevance: 30.00%

Abstract:

In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Due to the high accuracy and precision necessary in radiation treatments, national and international organisations such as the ICRU and the AAPM recommend the use of in vivo dosimetry; it is also mandatory in some countries, such as France. Various in vivo dosimetry methods have been developed over the past years. These methods are point-, line-, plane- or 3D dose controls. A 3D in vivo dosimetry provides the most information about the dose delivered to the patient, compared with 1D and 2D methods. However, to our knowledge, it is generally not routinely applied to patient treatments yet. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose using transmitted beam measurements in the context of narrow beams. An iterative dose reconstruction method has been described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (at least 3%/3 mm locally), which is within the 5% recommended by the ICRU. Moreover, it was demonstrated on phantom measurements that the proposed method allows us to detect some set-up errors and interfraction geometry modifications. We also discussed the limitations of the 3D dose reconstruction for dose delivery error detection. Afterwards, stability tests of the tomotherapy built-in onboard MVCT detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to that of other imaging devices, such as EPIDs, which are also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2, the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article, entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article, entitled 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'.
A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and the perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1, and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2.
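To illustrate the iterative reconstruction loop described above, here is a minimal sketch assuming a generic multiplicative update and a placeholder forward model; the function names and numbers are hypothetical, and the actual thesis method uses a convolution/superposition dose engine with ion chamber array measurements.

```python
import numpy as np

def reconstruct_dose(measured, forward_model, fluence0, n_iter=20, tol=1e-3):
    """Generic multiplicative-update reconstruction loop (a sketch, not the
    thesis implementation). forward_model(fluence) is assumed to return
    (dose_3d, predicted_transmission), one predicted value per beamlet."""
    fluence = fluence0.copy()
    dose_3d = None
    for _ in range(n_iter):
        dose_3d, predicted = forward_model(fluence)
        ratio = measured / np.maximum(predicted, 1e-12)
        if np.max(np.abs(ratio - 1.0)) < tol:  # predictions agree with measurement
            break
        fluence *= ratio                       # correct beamlets that disagree
    return dose_3d

# Toy forward model: a trivial "depth profile" per beamlet, with the detector
# seeing 60% of the entrance fluence (purely illustrative numbers).
def toy_model(fluence):
    dose_3d = np.outer(fluence, np.linspace(1.0, 0.4, 5))
    return dose_3d, 0.6 * fluence

measured = 0.6 * np.array([1.0, 1.2, 0.9])  # pretend transmitted-signal measurement
print(reconstruct_dose(measured, toy_model, np.ones(3)))
```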

Relevance: 30.00%

Abstract:

One of the most relevant difficulties faced by first-year undergraduate students is settling into the educational environment of the university. This paper presents a case study that proposes a computer-assisted collaborative experience designed to help students in their transition from high school to university. This is done by facilitating their first contact with the campus and its services, the university community, and its methodologies and activities. The experience combines individual and collaborative activities, conducted in and out of the classroom, structured following the Jigsaw Collaborative Learning Flow Pattern. A specific environment including portable technologies with network and computer applications has been developed to support and facilitate the orchestration of a flow of learning activities into a single integrated learning setting. The result is a Computer-Supported Collaborative Blended Learning scenario, which has been evaluated with first-year university students of the degrees in Software and Audiovisual Engineering within the subject Introduction to Information and Communications Technologies. The findings reveal that the scenario significantly improves students' interest in their studies and their understanding of the campus and the services provided. The environment is also an innovative approach to successfully supporting the heterogeneous activities conducted by both teachers and students during the scenario. This paper introduces the goals and context of the case study, describes how the technology was employed to conduct the learning scenario, and presents the evaluation methods and the main results of the experience.

Relevance: 30.00%

Abstract:

This paper describes a Computer-Supported Collaborative Learning (CSCL) case study in engineering education carried out within the context of a network management course. The case study shows that the use of two computing tools developed by the authors and based on Free and Open-Source Software (FOSS) provides significant educational benefits over traditional engineering pedagogical approaches in terms of the acquisition of both concepts and engineering competencies. First, the Collage authoring tool guides and supports the course teacher in the process of authoring computer-interpretable representations (using the IMS Learning Design standard notation) of effective collaborative pedagogical designs. In addition, the Gridcole system supports the enactment of that design by guiding the students through the prescribed sequence of learning activities. The paper introduces the goals and context of the case study, elaborates on how Collage and Gridcole were employed, describes the applied evaluation methodology, and discusses the most significant findings derived from the case study.

Relevance: 30.00%

Abstract:

When applying a Collaborative Learning Flow Pattern (CLFP) to structure sequences of activities in real contexts, one of the tasks is to organize groups of students according to the constraints imposed by the pattern. Sometimes, unexpected events occurring at runtime force this pre-defined distribution to be changed. In such situations, the group structures need to be adjusted to the new context. If the collaborative pattern is complex, this group redefinition might be difficult and time-consuming to carry out in real time. In this context, technology can help by notifying the teacher of incompatibilities between the actual context and the constraints imposed by the pattern. This chapter presents a flexible solution for supporting teachers in group organization by profiting from the intrinsic constraints defined by CLFPs codified in IMS Learning Design. A prototype of a web-based tool for the TAPPS and Jigsaw CLFPs and the preliminary results of a controlled user study are also presented as a first step towards flexible technological systems to support grouping tasks in this context.
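To make the constraint-checking idea concrete, the sketch below tests the core Jigsaw requirement that every jigsaw group contain at least one member from each expert topic; the data model, function name and example values are hypothetical and are not taken from the chapter's prototype.

```python
from typing import Dict, List, Set

def jigsaw_violations(jigsaw_groups: Dict[str, List[str]],
                      expert_of: Dict[str, str],
                      expert_topics: Set[str]) -> Dict[str, Set[str]]:
    """Return, for each jigsaw group, the expert topics it is missing."""
    violations = {}
    for group, members in jigsaw_groups.items():
        covered = {expert_of[s] for s in members if s in expert_of}
        missing = expert_topics - covered
        if missing:
            violations[group] = missing  # these gaps would be reported to the teacher
    return violations

# Example: student s3 dropped out at runtime, leaving group J1 without a "routing" expert.
groups = {"J1": ["s1", "s2"], "J2": ["s4", "s5", "s6"]}
experts = {"s1": "security", "s2": "security", "s4": "security",
           "s5": "routing", "s6": "addressing"}
print(jigsaw_violations(groups, experts, {"security", "routing", "addressing"}))
```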

Relevance: 30.00%

Abstract:

BACKGROUND: Migration is considered a depression risk factor when associated with psychosocial adversity, but its impact on the clinical characteristics of depression has not been specifically studied. We compared 85 migrants to 34 controls, examining the severity, symptomatology, comorbidity profile and clinical course of depression. METHOD: A MINI interview modified to assess course characteristics was used to assign DSM-IV axis I diagnoses; medical files were used for somatoform disorders. Severity was assessed with the Montgomery-Asberg scale. Wherever possible, we adjusted comparisons for age and gender using logistic and linear regressions. RESULTS: Depression in migrants was characterized by higher comorbidity (mostly somatoform and anxiety disorders), higher severity, and a non-recurrent, chronic course. LIMITATIONS: Our sample comes from a single center, and the study should be replicated in other health care facilities and other countries. Somatoform disorder diagnoses were based solely on file content. CONCLUSION: Depression in migrants presented as a complex, chronic clinical picture. Most of our migrant patients experienced significant psychosocial adversity before and after migration: beyond cultural issues, our results suggest that psychosocial adversity affects the clinical expression of depression. Our study also suggests that migration associated with psychosocial adversity might play a specific etiological role, resulting in a distinct clinical picture and calling into question the DSM-IV unitary model of depression. The chronic course might indicate resistance to standard therapeutic regimens and hints at the necessity of developing specific treatment strategies adapted to the individual patients and their specific context.
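As a minimal sketch of the kind of age- and gender-adjusted comparison described in the methods, assuming synthetic data and illustrative column names that are not from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the clinical data set (85 migrants, 34 controls);
# the outcome "comorbid" and its prevalence are purely illustrative.
rng = np.random.default_rng(0)
n = 119
df = pd.DataFrame({
    "migrant": np.r_[np.ones(85, int), np.zeros(34, int)],
    "age": rng.normal(42, 10, n).round(),
    "gender": rng.choice(["F", "M"], n),
})
df["comorbid"] = rng.binomial(1, 0.3 + 0.3 * df["migrant"])

# Logistic regression comparing a binary outcome between migrants and controls
# while adjusting for age and gender, analogous to the adjusted comparisons
# reported in the abstract.
model = smf.logit("comorbid ~ migrant + age + C(gender)", data=df).fit()
print(model.summary())
```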

Relevance: 30.00%

Abstract:

PURPOSE: We evaluated the feasibility of biomarker development in the context of multicenter clinical trials. EXPERIMENTAL DESIGN: Formalin-fixed, paraffin-embedded (FFPE) tissue samples were collected from a prospective adjuvant colon cancer trial (PETACC3). DNA was isolated from tumor as well as normal tissue and used for analysis of microsatellite instability, KRAS and BRAF genotyping, UGT1A1 genotyping, and loss of heterozygosity of 18q loci. Immunohistochemistry was used to test the expression of TERT, SMAD4, p53, and TYMS. Messenger RNA was retrieved and tested for use in expression profiling experiments. RESULTS: Of the 3,278 patients entered in the study, FFPE blocks were obtained from 1,564 patients coming from 368 different centers in 31 countries. In over 95% of the samples, genomic DNA tests yielded a reliable result. Of the immunohistochemical tests, p53 and SMAD4 staining did best, with reliable results in over 85% of the cases. TERT was the most problematic test, with a 46% failure rate, mostly due to insufficient tissue processing quality. Good-quality mRNA was obtained and was usable in expression profiling experiments. CONCLUSIONS: Prospective clinical trials can be used as a framework for biomarker development using routinely processed FFPE tissues. Our results support the notion that, as a rule, translational studies based on FFPE tissue should be included in prospective clinical trials.

Relevance: 30.00%

Abstract:

"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye) Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected from the linear addition of the parts, is a key feature shared by all these systems. Most complex systems can be modeled as networks that represent the interactions among the system's components. In addition to the actual nature of those interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by the systems. Moreover, the topology is a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, relying on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions among cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable, as the resulting topologies share properties of both regular and random networks, and display similarities to the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms, and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchrony of the events taking place in Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of the previous GRN model, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
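To show the baseline model the thesis builds on, here is a toy sketch of a Kauffman random Boolean network with the fully synchronous update that the work replaces with a cascading scheme; sizes, seeds and function names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_boolean_network(n_genes=10, k=2):
    """Toy Kauffman network: each gene reads k random inputs and applies a
    random Boolean function (lookup table) to them."""
    inputs = [rng.choice(n_genes, size=k, replace=False) for _ in range(n_genes)]
    tables = [rng.integers(0, 2, size=2 ** k) for _ in range(n_genes)]
    return inputs, tables

def step(state, inputs, tables):
    """Fully synchronous update: every gene switches at the same time step,
    which is exactly the assumption relaxed by a cascading update scheme."""
    new_state = np.empty_like(state)
    for g, (inp, table) in enumerate(zip(inputs, tables)):
        index = int("".join(str(state[i]) for i in inp), 2)  # encode the input pattern
        new_state[g] = table[index]
    return new_state

inputs, tables = random_boolean_network()
state = rng.integers(0, 2, size=10)
for _ in range(5):
    state = step(state, inputs, tables)
print(state)
```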

Relevance: 30.00%

Abstract:

Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization, and this pervasive use has resulted in the rapid development of the Internet. Nowadays, the Internet is the largest container of resources. Information databases such as Wikipedia, Dmoz and the open data available on the net represent a great informational potential for mankind. Easy and free web access is one of the major features characterizing Internet culture. Ten years ago, the web was completely dominated by English; today, the web community is no longer only English-speaking but is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We thus present the theoretical foundations as well as the ontology itself, named Linguistic Meta-Model. The most important feature of the Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, either formal or informal, can be put into the context of the so-called semiotic triangle. In order to show the main characteristics of the Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aimed at exploiting the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval. VIKI is a supporting system for the Linguistic Meta-Model, and its main task is to give some empirical evidence regarding the use of the Linguistic Meta-Model, without claiming to be exhaustive.

Relevance: 30.00%

Abstract:

The first comprehensive AO pediatric long-bone fracture classification system has been proposed following a structured path of development and validation with experienced pediatric surgeons. A Web-based multicenter agreement study involving 70 surgeons in 15 clinics and 5 countries was conducted to assess the reliability and accuracy of this classification when used by a wide range of surgeons with various levels of experience. Training was provided at each clinic before the session. Using the Internet, participants could log in at any time and classify 275 supracondylar, radius, and tibia fractures at their own pace. The fracture diagnosis was made following the hierarchy of the classification system, using both clinical terminology and codes. Kappa coefficients for the single-surgeon diagnosis of epiphyseal, metaphyseal, or diaphyseal fracture type were 0.66, 0.80, and 0.91, respectively. Median accuracy estimates for each bone and type were all greater than 80%. Depending on their experience and specialization, surgeons varied greatly in their ability to classify fractures. Pediatric training and at least 2 years of experience were associated with significant improvements in reliability and accuracy. Kappa coefficients for the diagnosis of specific child patterns were 0.51, 0.63, and 0.48 for epiphyseal, metaphyseal, and diaphyseal fractures, respectively. Identified reasons for coding discrepancies were related to different understandings of terminology and definitions, as well as to poor-quality radiographic images. The results supported some minor adjustments in the coding of fracture type and child patterns. This classification system received wide acceptance and support among the surgeons involved. As long as appropriate training is provided, the classification system is reliable, especially among surgeons with a minimum of 2 years of clinical experience. We encourage broad-based consultation between surgeons' international societies and the use of this classification system in the context of clinical practice as well as prospectively in clinical studies.
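For readers unfamiliar with the agreement statistic reported above, the sketch below computes a pairwise Cohen's kappa on made-up ratings; a multi-rater study like this one would typically report a generalized (e.g. Fleiss-type) kappa, but the pairwise version conveys the idea of chance-corrected agreement.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical fracture-type codes assigned by two raters to the same six
# radiographs (E = epiphyseal, M = metaphyseal, D = diaphyseal).
rater_a = ["E", "M", "D", "D", "M", "E"]
rater_b = ["E", "M", "D", "M", "M", "E"]

# Cohen's kappa corrects the raw agreement rate for agreement expected by chance.
print(cohen_kappa_score(rater_a, rater_b))
```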

Relevance: 30.00%

Abstract:

The delivery kinetics of growth factors has been suggested to play an important role in the regeneration of peripheral nerves following axotomy. In this context, we designed a nerve conduit (NC) with adjustable release kinetics of nerve growth factor (NGF). A multi-ply system was designed in which an NC consisting of a polyelectrolyte alginate/chitosan complex was coated with layers of poly(lactide-co-glycolide) (PLGA) to control the release of the embedded NGF. Prior to assessing the in vitro NGF release from the NC, various release test media, with and without stabilizers for NGF, were evaluated to ensure adequate quantification of NGF by ELISA. Citrate (pH 5.0) and acetate (pH 5.5) buffered saline solutions containing 0.05% Tween 20 yielded the most reliable results for ELISA-active NGF. The in vitro release experiments revealed that the best results in terms of reproducibility and release control were achieved when the NGF was embedded between two PLGA layers and the ends of the NC were tightly sealed by the PLGA coatings. The release kinetics could be efficiently adjusted by accommodating NGF at different radial locations within the NC. A sustained release of bioactive NGF in the low nanogram-per-day range was obtained for at least 15 days. In conclusion, the developed multi-ply NGF-loaded NC is considered a suitable candidate for future implantation studies to gain insight into the relationship between local growth factor availability and nerve regeneration.

Relevance: 30.00%

Abstract:

BACKGROUND CONTEXT: Studies involving factor analysis (FA) of the items in the North American Spine Society (NASS) outcome assessment instrument have revealed inconsistent factor structures for the individual items. PURPOSE: This study examined whether the factor structure of the NASS varied in relation to the severity of the back/neck problem and differed from that originally recommended by the developers of the questionnaire, by analyzing data before and after surgery in a large series of patients undergoing lumbar or cervical disc arthroplasty. STUDY DESIGN/SETTING: Prospective multicenter observational case series. PATIENT SAMPLE: Three hundred ninety-one patients with low back pain and 553 patients with neck pain completed questionnaires preoperatively and again at 3- to 6- and 12-month follow-ups (FUs), in connection with the SWISSspine disc arthroplasty registry. OUTCOME MEASURES: North American Spine Society outcome assessment instrument. METHODS: First an exploratory FA without a priori assumptions and subsequently a confirmatory FA were performed on the 17 items of the NASS-lumbar and the 19 items of the NASS-cervical collected at each assessment time point. Item-loading invariance was tested in the German version of the questionnaire for baseline and FU. RESULTS: Both the NASS-lumbar and NASS-cervical factor structures differed between the baseline and postoperative data sets. The confirmatory analysis and item-loading invariance showed a better fit for a three-factor (3F) structure for the NASS-lumbar, containing items on "disability," "back pain," and "radiating pain, numbness, and weakness (leg/foot)," and for a five-factor (5F) structure for the NASS-cervical, including "disability," "neck pain," "radiating pain and numbness (arm/hand)," "weakness (arm/hand)," and "motor deficit (legs)." CONCLUSIONS: The best-fitting factor structure at both baseline and FU was selected for both the lumbar and cervical NASS questionnaires. It differed from that proposed by the originators of the NASS instruments. Although the NASS questionnaire represents a valid outcome measure for degenerative spine diseases and is able to distinguish among all major symptom domains (factors) in patients undergoing lumbar and cervical disc arthroplasty, the item structure could be improved overall. Any potential revision of the NASS should consider its factorial structure; factorial invariance over time should be aimed for, to allow for more precise interpretations of treatment success.
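As an illustration of the exploratory step described in the methods, the sketch below extracts three latent factors from a random stand-in for the 17 NASS-lumbar items; the data are synthetic, and the confirmatory step and invariance testing require structural equation modeling software not shown here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Random stand-in for an (n_patients x 17) matrix of NASS-lumbar item scores;
# real data would come from the registry questionnaires.
rng = np.random.default_rng(0)
X = rng.normal(size=(391, 17))

# Exploratory FA mirroring the 3F structure retained for NASS-lumbar.
fa = FactorAnalysis(n_components=3).fit(X)
loadings = fa.components_.T  # item-by-factor loading matrix
print(np.round(loadings, 2))
```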

Relevance: 30.00%

Abstract:

In moment structure analysis with nonnormal data, asymptotically valid inferences require the computation of a consistent (under general distributional assumptions) estimate of the matrix $\Gamma$ of asymptotic variances of the sample second-order moments. Such a consistent estimate involves the fourth-order sample moments of the data. In practice, the use of fourth-order moments leads to computational burden and a lack of robustness in small samples. In this paper we show that, under certain assumptions, correct asymptotic inferences can be attained when $\Gamma$ is replaced by a matrix $\Omega$ that involves only the second-order moments of the data. The present paper extends, to the context of multi-sample analysis of second-order moment structures, results derived in the context of (single-sample) covariance structure analysis (Satorra and Bentler, 1990). The results apply to a variety of estimation methods and a general type of statistics. An example involving a test of equality of means under covariance restrictions illustrates theoretical aspects of the paper.
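For concreteness, standard single-sample forms of the two matrices contrasted above (the notation is ours, and the paper's multi-sample versions differ in detail) are the distribution-free estimate of $\Gamma$ built from fourth-order sample moments,

$$
\hat\Gamma = \frac{1}{n}\sum_{i=1}^{n} (d_i - \bar d)(d_i - \bar d)^{\prime}, \qquad d_i = \operatorname{vech}\!\left(z_i z_i^{\prime}\right),
$$

and the normal-theory matrix built from second-order moments only,

$$
\Omega = 2\, D^{+} (\Sigma \otimes \Sigma)\, D^{+\prime},
$$

where the $z_i$ are centered data vectors, $\operatorname{vech}$ stacks the non-duplicated elements of a symmetric matrix, $D^{+}$ is the Moore-Penrose inverse of the duplication matrix, and $\Sigma$ is the population covariance matrix.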