941 results for Construct
Abstract:
Research on Public Service Motivation (PSM) has increased enormously in the last 20 years. Besides the analysis of the antecedents of PSM and its impact on organizations and individuals, many open questions about the nature of PSM itself remain. This article argues that the theoretical construct of PSM should be contextualized by integrating the political and administrative contexts of public servants when investigating their specific attitudes towards working in a public environment. It also challenges the efficacy of the classic four-dimensional structure of PSM when it is applied to a specific context. The findings of a confirmatory factor analysis of a dataset of 3754 employees of 279 Swiss municipalities support the appropriateness of contextualizing parts of the PSM construct. They also support the addition of an extra dimension called, in line with previous research, Swiss democratic governance. In light of our results, further PSM research is needed to settle on a definitive measure of PSM, particularly with regard to the international diffusion of empirical research on PSM.
Points for practitioners: This study shows that public service motivation is a relevant construct for practitioners and may be used to better assess whether or not public agents are motivated by values. Nevertheless, it also stresses that the measurement of PSM must be adapted to the institutional context. Public managers interested in better understanding the degree to which their employees are motivated by public values must be aware that the measurement of the PSM construct has to be contextualized. In other words, PSM is also a function of the institutional environment in which organizations operate.
Abstract:
Introduction: Functional subjective evaluation through questionnaires is fundamental, but not often carried out in patients with back complaints, for lack of validated tools. The Spinal Function Sort (SFS) was only validated in English. We aimed to translate, adapt and validate the French (SFS-F) and German (SFS-G) versions of the SFS.
Methods: Three hundred and forty-four patients, experiencing various back complaints, were recruited in a French-speaking (n = 87) and a German-speaking (n = 257) center. Construct validity was estimated via correlations with the SF-36 physical and mental scales, pain intensity and the Hospital Anxiety and Depression Scale (HADS). Scale homogeneity was assessed by Cronbach's α. Test-retest reliability was assessed in 65 additional patients using the intraclass correlation coefficient (ICC).
Results: For the French and German translations, respectively, α was 0.98 and 0.98; the ICC was 0.98 (95% CI: 0.97; 1.00) and 0.94 (0.90; 0.98). Correlations with physical functioning were 0.63 (0.48; 0.74) and 0.67 (0.59; 0.73); with the physical summary 0.60 (0.44; 0.72) and 0.52 (0.43; 0.61); with pain -0.33 (-0.51; -0.13) and -0.51 (-0.60; -0.42); with mental health -0.08 (-0.29; 0.14) and 0.25 (0.13; 0.36); with the mental summary 0.01 (-0.21; 0.23) and 0.28 (0.16; 0.39); with depression -0.26 (-0.45; -0.05) and -0.42 (-0.52; -0.32); with anxiety -0.17 (-0.37; -0.04) and -0.45 (-0.54; -0.35).
Conclusions: Reliability was excellent for both languages. Convergent validity was good with the SF-36 physical scales and moderate with VAS pain. Divergent validity was low with the SF-36 mental scales in both translated versions and with the HADS for the SFS-F (moderate for the SFS-G). Both versions seem to be valid and reliable for evaluating perceived functional capacity in patients with back complaints.
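The internal-consistency statistic reported above, Cronbach's α, can be computed directly from item scores. A minimal sketch in Python with toy data; the function name and the sample values are illustrative, not taken from the study's dataset:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    `items` is a list of columns, one per questionnaire item, each holding
    the scores of all respondents. alpha = k/(k-1) * (1 - sum of item
    variances / variance of the total score), using population variances.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_variance = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))

# Toy data: 4 respondents answering a 3-item scale.
items = [[3, 4, 5, 2], [2, 4, 5, 1], [3, 5, 4, 2]]
print(round(cronbach_alpha(items), 2))
```

Values close to 1, like the 0.98 reported for both translations, indicate that the items measure a single underlying construct very consistently.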
Abstract:
Estrogen receptors regulate transcription of genes essential for sexual development and reproductive function. Since the retinoid X receptor (RXR) is able to modulate estrogen responsive genes and both 9-cis RA and fatty acids influenced development of estrogen responsive tumors, we hypothesized that estrogen responsive genes might be modulated by RXR and the fatty acid receptor (peroxisome proliferator-activated receptor, PPAR). To test this hypothesis, transfection assays in CV-1 cells were performed with an estrogen response element (ERE) coupled to a luciferase reporter construct. Addition of expression vectors for RXR and PPAR resulted in an 11-fold increase in luciferase activity in the presence of 9-cis RA. Furthermore, mobility shift assays demonstrated binding of RXR and PPAR to the vitellogenin A2-ERE and an ERE in the oxytocin promoter. Methylation interference assays demonstrated that specific guanine residues required for RXR/PPAR binding to the ERE were similar to residues required for ER binding. Moreover, RXR domain-deleted constructs in transfection assays showed that activation required RXR since an RXR delta AF-2 mutant completely abrogated reporter activity. Oligoprecipitation binding studies with biotinylated ERE and (35)S-labeled in vitro translated RXR constructs confirmed binding of delta AF-2 RXR mutant to the ERE in the presence of baculovirus-expressed PPAR. Finally, in situ hybridization confirmed RXR and PPAR mRNA expression in estrogen responsive tissues. Collectively, these data suggest that RXR and PPAR are present in reproductive tissues, are capable of activating estrogen responsive genes and suggest that the mechanism of activation may involve direct binding of the receptors to estrogen response elements.
Abstract:
Following recent technological advances, digital image archives have grown qualitatively and quantitatively to an unprecedented extent. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the masses of acquired data. This question is at the core of this Thesis: the problems of processing digital information at very high spatial and/or spectral resolution are addressed using statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is placed on the efficiency of the algorithms, as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to stay close to the concrete problems of satellite-image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, this work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables important for solving the problem are used by the classifier.
The lack of labeled information, and the uncertainty about its relevance to the problem, are at the root of the next two models, based respectively on active learning and semi-supervised methods: the first improves the quality of a training set through direct interaction between the user and the machine, while the second uses the unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model proposed considers the more theoretical question of structure among the outputs: integrating this source of information, never before considered in remote sensing, opens new research challenges.
Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.
Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and treatment. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The accent is put on algorithmic efficiency and on the simplicity of the approaches proposed, to avoid overly complex models that would not be adopted by practitioners.
The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features has been proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions, based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs has been considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
Abstract:
Rationale: Clinical and electrophysiological prognostic markers of brain anoxia have mostly been evaluated in comatose survivors of out-of-hospital cardiac arrest (OHCA) after standard resuscitation, but their predictive value in patients treated with mild induced hypothermia (IH) is unknown. The objective of this study was to identify a predictive score based on independent clinical and electrophysiological variables in comatose OHCA survivors treated with IH, aiming at a maximal positive predictive value (PPV) and a high negative predictive value (NPV) for mortality. Methods: We prospectively studied consecutive adult comatose OHCA survivors, from April 2006 to May 2009, treated with mild IH at 33-34°C for 24 h at the intensive care unit of the Lausanne University Hospital, Switzerland. IH was applied using an external cooling method. As soon as subjects had passively rewarmed (body temperature >35°C), they underwent EEG and SSEP recordings (off sedation) and were examined by experienced neurologists at least twice. Patients with status epilepticus were treated with antiepileptic drugs (AEDs) for at least 24 h. A multivariable logistic regression was performed to identify independent predictors of mortality at hospital discharge. These were used to formulate a predictive score. Results: 100 patients were studied; 61 died. Age, gender and OHCA etiology (cardiac vs. non-cardiac) did not differ between survivors and nonsurvivors. Cardiac arrest type (non-ventricular fibrillation vs. ventricular fibrillation), time to return of spontaneous circulation (ROSC) >25 min, failure to recover all brainstem reflexes, extensor or no motor response to pain, myoclonus, presence of epileptiform discharges on EEG, EEG background unreactive to pain, and bilaterally absent N20 on SSEP were all significantly associated with mortality. Absent N20 was the only variable showing no false positive results. Multivariable logistic regression identified four independent predictors (Table).
These were used to construct the score, and its predictive values were calculated using a cut-off of 0-1 vs. 2-4 predictors. We found a PPV of 1.00 (95% CI: 0.93-1.00), an NPV of 0.81 (95% CI: 0.67-0.91) and an accuracy of 0.93 for mortality. Among the 9 patients whom the score predicted to survive but who eventually died, only 1 had absent N20. Conclusions: Pending validation in a larger cohort, this simple score represents a promising tool to identify patients who will survive, and most subjects who will not, after OHCA and IH. Furthermore, while SSEPs are 100% predictive of poor outcome but not available in most hospitals, this study identifies EEG background reactivity as an important predictor after OHCA. The score appears robust even without SSEP, suggesting that SSEP and other investigations (e.g., mismatch negativity, serum NSE) might principally be needed to enhance prognostication in the small subgroup of patients failing to improve despite a favorable score.
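The cut-off-based predictive values above follow from a standard 2x2 confusion table. A minimal sketch; the counts below are reconstructed as an assumption from the reported figures (100 patients, 61 deaths, 9 score-negative deaths, no false positives), since the study's exact table is not reproduced here:

```python
def predictive_values(tp, fp, fn, tn):
    """PPV and NPV from a 2x2 confusion table.

    tp: score >= 2 (death predicted) and died; fp: death predicted, survived;
    fn: score 0-1 (survival predicted) but died; tn: survival predicted, survived.
    """
    ppv = tp / (tp + fp)  # proportion of predicted deaths that occurred
    npv = tn / (tn + fn)  # proportion of predicted survivors who survived
    return ppv, npv

# Assumed counts: 52 true positives, 0 false positives,
# 9 false negatives (the 9 score-negative deaths), 39 true negatives.
ppv, npv = predictive_values(52, 0, 9, 39)
print(round(ppv, 2), round(npv, 2))
```

With these assumed counts the PPV is exactly 1.00 (no patient predicted to die survived) and the NPV is about 0.81, matching the reported values.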
Abstract:
Superparamagnetic iron oxide nanoparticles (SPIONs) are in clinical use for disease detection by MRI. A major advancement would be to link therapeutic drugs to SPIONs in order to achieve targeted drug delivery combined with detection. In the present work, we studied the possibility of developing a versatile synthesis protocol to hierarchically construct drug-functionalized-SPIONs as potential anti-cancer agents. Our model biocompatible SPIONs consisted of an iron oxide core (9-10 nm diameter) coated with polyvinylalcohols (PVA/aminoPVA), which can be internalized by cancer cells, depending on the positive charges at their surface. To develop drug-functionalized-aminoPVA-SPIONs as vectors for drug delivery, we first designed and synthesized bifunctional linkers of varied length and chemical composition to which the anti-cancer drugs 5-fluorouridine or doxorubicin were attached as biologically labile esters or peptides, respectively. These functionalized linkers were in turn coupled to aminoPVA by amide linkages before preparing the drug-functionalized-SPIONs that were characterized and evaluated as anti-cancer agents using human melanoma cells in culture. The 5-fluorouridine-SPIONs with an optimized ester linker were taken up by cells and proved to be efficient anti-tumor agents. While the doxorubicin-SPIONs linked with a Gly-Phe-Leu-Gly tetrapeptide were cleaved by lysosomal enzymes, they exhibited poor uptake by human melanoma cells in culture.
Abstract:
Tissue-engineered grafts for the urinary tract are being investigated for the potential treatment of several urologic diseases. These grafts, predominantly tubular-shaped, usually require in vitro culture prior to implantation to allow cell engraftment on initially cell-free scaffolds. We have developed a method to produce tubular-shaped collagen scaffolds based on plastic compression. Our approach produces a ready cell-seeded graft that does not need further in vitro culture prior to implantation. The tubular collagen scaffolds were investigated in particular for their structural, mechanical and biological properties. The resulting construct showed an especially high collagen density and was characterized by favorable mechanical properties, assessed by axial extension and radial dilation. The Young's modulus, in particular, was greater than that of non-compressed collagen tubes. Seeding densities affected the proliferation rate of primary human bladder smooth muscle cells. An optimal seeding density of 10^6 cells per construct resulted in a 25-fold increase in Alamar blue-based fluorescence after 2 wk in culture. These high-density collagen gel tubes, already seeded with smooth muscle cells, could be further seeded with urothelial cells, drastically shortening the production time of grafts for urinary tract regeneration.
Abstract:
In cells, DNA is routinely subjected to significant levels of bending and twisting. In some cases, such as under physiological levels of supercoiling, DNA can be so highly strained that it transitions into non-canonical structural conformations that are capable of relieving mechanical stress within the template. DNA minicircles offer a robust model system for studying stress-induced DNA structures. Using DNA minicircles on the order of 100 bp in size, we have been able to control the bending and torsional stresses within a looped DNA construct. Through a combination of cryo-EM image reconstructions, Bal31 sensitivity assays and Brownian dynamics simulations, we have analyzed the effects of biologically relevant underwinding-induced kinks on the overall shape of DNA minicircles. Our results indicate that strongly underwound DNA minicircles, which mimic the physical behavior of small regulatory DNA loops, minimize their free energy by undergoing sequential, cooperative kinking at two sites located about 180° apart along the periphery of the minicircle. This novel form of structural cooperativity in DNA demonstrates that bending strain can localize hyperflexible kinks within the DNA template, which in turn reduces the energetic cost of tightly looping DNA.
Abstract:
Accurate modeling of flow instabilities requires computational tools able to deal with several interacting scales, from the scale at which fingers are triggered up to the scale at which their effects need to be described. The Multiscale Finite Volume (MsFV) method offers a framework to couple fine- and coarse-scale features by solving a set of localized problems which are used both to define a coarse-scale problem and to reconstruct the fine-scale details of the flow. The MsFV method can be seen as an upscaling-downscaling technique, which is computationally more efficient than standard discretization schemes and more accurate than traditional upscaling techniques. We show that, although the method has proven accurate in modeling density-driven flow under stable conditions, the accuracy of the MsFV method deteriorates in the case of unstable flow, and an iterative scheme is required to control the localization error. To avoid the large computational overhead of the iterative scheme, we suggest several adaptive strategies for both flow and transport. In particular, the concentration gradient is used to identify a front region where instabilities are triggered and an accurate (iteratively improved) solution is required. Outside the front region the problem is upscaled, and both flow and transport are solved only at the coarse scale. This adaptive strategy leads to very accurate solutions at roughly the same computational cost as the non-iterative MsFV method. In many circumstances, however, an accurate description of flow instabilities requires a refinement of the computational grid rather than a coarsening. For these problems, we propose a modified iterative MsFV, which can be used as a downscaling method (DMsFV). Compared to other grid refinement techniques, the DMsFV clearly separates the computational domain into refined and non-refined regions, which can be treated separately and matched later.
This gives great flexibility to employ different physical descriptions in different regions, where different equations could be solved, offering an excellent framework to construct hybrid methods.
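The adaptive strategy described above, using the concentration gradient to flag a front region for accurate, iteratively improved solution while the rest is solved only at the coarse scale, can be sketched in one dimension. This is an illustrative simplification, not the authors' implementation; the function name and the threshold and halo parameters are assumptions:

```python
def front_region(concentration, threshold, halo=1):
    """Flag cells belonging to the 'front region' of a 1-D concentration
    field: cells adjacent to a gradient larger than `threshold`, widened
    by `halo` neighbour cells on each side. Flagged cells would get the
    fine-scale, iteratively improved solution; the rest stay coarse-only.
    """
    n = len(concentration)
    # Discrete gradient between consecutive cells (unit spacing assumed).
    grad = [abs(concentration[i + 1] - concentration[i]) for i in range(n - 1)]
    fine = [False] * n
    for i, g in enumerate(grad):
        if g > threshold:
            # Mark both cells sharing this face, plus the halo around them.
            for j in range(max(0, i - halo), min(n, i + 2 + halo)):
                fine[j] = True
    return fine

# A sharp front between cells 3 and 4 is detected and widened by the halo.
print(front_region([1, 1, 1, 0.9, 0.1, 0, 0], threshold=0.5))
```

In the actual method the same idea applies on the coarse grid of the MsFV solver, so only coarse blocks crossed by the front trigger the iterative correction.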
Abstract:
This paper aims at reconsidering some analytical measures to best encapsulate the written interlanguage of young beginner learners of English as a foreign language, in the light of previous and work-in-progress research conducted within the BAF project; in particular, whether clause and sentence length are best viewed as fluency measures, as syntactic complexity measures, or as part of a different construct. In the light of a factor analysis (Navés, forthcoming) and multivariate and correlation studies (Navés et al. 2003; Navés, 2006; Torres et al. 2006), it becomes clear that the relationship between different analytical measures also depends on learners' cognitive maturity (age) and proficiency (amount of instruction). Finally, clause and sentence length should be viewed neither as a fluency measure nor as a syntactic complexity measure, but as part of a different construct. It is concluded that further research using regression analysis and cluster analysis is needed in order to identify and validate the constructs of the writing components and their measurements.
Abstract:
The aim of this paper is to analyse how economic integration in Europe has affected the geographical concentration of industry in Spain, and to explain the driving forces behind industry location. Firstly, we construct regional specialisation and geographical concentration indices for 50 Spanish provinces and 30 industrial sectors in 1979, 1986 and 1992. Secondly, we carry out an econometric analysis of the determinants of the geographical concentration of industries. Our main conclusion is that there is no evidence of increasing specialisation in Spain between 1979 and 1992, and that the most important determinant of Spain's economic geography is scale economies. Furthermore, traditional trade theory plays no role in explaining the pattern of industrial concentration.
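A common regional specialisation measure of the kind constructed above is the Krugman index: the sum, over industries, of the absolute gap between a region's industry shares and those of a benchmark (e.g. the rest of the country). The paper does not state which exact index it uses, so the sketch below is illustrative:

```python
def krugman_index(region_shares, benchmark_shares):
    """Krugman specialisation index.

    Both arguments are industry shares (summing to 1) of employment or
    output, one entry per industrial sector. The index ranges from 0
    (identical industrial structure) to 2 (completely disjoint structures).
    """
    return sum(abs(r - b) for r, b in zip(region_shares, benchmark_shares))

# Toy example with three sectors: a region heavy in sector 1,
# compared against a benchmark heavy in sector 3.
print(round(krugman_index([0.5, 0.3, 0.2], [0.2, 0.3, 0.5]), 2))
```

Computing such an index per province and year (here, 1979, 1986 and 1992) makes "no evidence of increasing specialisation" a testable statement: the index values simply do not trend upward.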
Abstract:
New economic geography models show that there may be a strong relationship between economic integration and the geographical concentration of industries. Nevertheless, this relationship is neither unique nor stable, and may follow a bell-shaped pattern in the long term. The aim of the present paper is to analyze the evolution of the geographical concentration of manufacturing across Spanish regions during the period 1856-1995. We construct several geographical concentration indices for different points in time over these 140 years. The analysis is carried out at two levels of aggregation, for regions corresponding to the NUTS-II and NUTS-III classifications. We confirm that the process of economic integration stimulated the geographical concentration of industrial activity. Nevertheless, the localization coefficients only started to fall after the beginning of the integration of the Spanish economy into international markets in the mid-1970s, and this new path was not interrupted by Spain's entry into the European Union some years later.
Abstract:
Given a compact pseudo-metric space, we associate to it upper and lower dimensions, depending only on the pseudo-metric. Then we construct a doubling measure for which the measure of a dilated ball is closely related to these dimensions.
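The upper and lower dimensions attached to a pseudo-metric space are commonly defined through covering numbers; the following is a sketch of the standard (Assouad-type) formulations, which may differ in detail from the definitions used in the paper:

```latex
% N(B(x,R), r) denotes the minimal number of balls of radius r needed
% to cover the ball B(x,R). Assumed standard definitions:
\overline{\dim}(X) = \inf\Bigl\{\, s > 0 \;:\; \exists\, C > 0,\;
    N\bigl(B(x,R),\, r\bigr) \le C \bigl(\tfrac{R}{r}\bigr)^{s}
    \ \text{for all } x \in X,\ 0 < r < R \,\Bigr\}

\underline{\dim}(X) = \sup\Bigl\{\, t > 0 \;:\; \exists\, c > 0,\;
    N\bigl(B(x,R),\, r\bigr) \ge c \bigl(\tfrac{R}{r}\bigr)^{t}
    \ \text{for all } x \in X,\ 0 < r < R \le \operatorname{diam} X \,\Bigr\}

% A measure \mu is doubling when, for some constant C_\mu,
\mu\bigl(B(x, 2r)\bigr) \le C_\mu\, \mu\bigl(B(x, r)\bigr)
\quad \text{for all } x \in X,\ r > 0.
```

Under such definitions, the growth of the constructed doubling measure on dilated balls is bounded above and below by powers of the dilation factor with exponents given by these two dimensions.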
Abstract:
The need to construct bridges that last longer, are less expensive, and take less time to build has increased. The importance of accelerated bridge construction (ABC) technologies has been recognized by the Federal Highway Administration (FHWA) and the Iowa Department of Transportation (DOT) Office of Bridges and Structures. This project is another in a series of ABC bridge projects undertaken by the Iowa DOT. Buena Vista County, Iowa, with the assistance of the Iowa DOT and the Bridge Engineering Center (BEC) at Iowa State University, constructed a two-lane single-span precast box girder bridge using rapid construction techniques. The design involved the use of precast, pretensioned components for the bridge superstructure, substructure, and backwalls. This application and demonstration represent an important step in the development and advancement of these techniques in Iowa as well as nationwide. Prior funding for the design and construction of this bridge (including materials) was obtained through the FHWA Innovative Bridge Research and Deployment (IBRD) Program. The Iowa Highway Research Board (IHRB) provided additional funding to test and evaluate the bridge. This project directly addresses the IBRD goal of demonstrating (and documenting) the effectiveness of innovative materials and construction techniques for the construction of new bridge structures. Performance was evaluated through comparisons with design assumptions and with recognized codes and standards, including American Association of State Highway and Transportation Officials (AASHTO) specifications.
Abstract:
It is very well known that the first successful valuation of a stock option was done by solving a deterministic partial differential equation (PDE) of the parabolic type with some complementary conditions specific to the option. In this approach, the randomness in the option value process is eliminated through a no-arbitrage argument. An alternative approach is to construct a replicating portfolio for the option. From this viewpoint, the payoff function for the option is a random process which, under a new probability measure, turns out to be of a special type: a martingale. Accordingly, the value of the replicating portfolio (equivalently, of the option) is calculated as an expectation, with respect to this new measure, of the discounted value of the payoff function. Since the expectation is, by definition, an integral, its calculation can be made simpler by resorting to powerful methods already available in the theory of analytic functions. In this paper we use precisely two of those techniques to find the well-known value of a European call.
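The "well-known value of a European call" referred to above is the Black-Scholes formula, i.e. the discounted risk-neutral expectation E[e^(-rT) max(S_T - K, 0)] evaluated in closed form. A minimal numerical check of that value (the paper's analytic-function derivation is not reproduced here; the function names are illustrative):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.

    S: spot price, K: strike, T: time to maturity (years),
    r: risk-free rate, sigma: volatility. Closed form of the
    discounted risk-neutral expectation of the payoff.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call: S = K = 100, one year, 5% rate, 20% volatility.
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 2))  # -> 10.45
```

Whatever route is taken, PDE, martingale expectation, or contour-integral techniques from analytic function theory, the result must reduce to this same closed form.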