976 results for One-stage


Relevance: 60.00%

Abstract:

PURPOSE: To evaluate the safety and the efficacy of imatinib in recurrent malignant gliomas. PATIENTS AND METHODS: This was a single-arm, phase II study. Eligible patients had recurrent glioma after prior radiotherapy with an enhancing lesion on magnetic resonance imaging. Three different histologic groups were studied: glioblastomas (GBM), pure/mixed (anaplastic) oligodendrogliomas (OD), and low-grade or anaplastic astrocytomas (A). Imatinib was started at a dose of 600 mg/d with dose escalation to 800 mg in case of no toxicity; during the trial this dose was increased to 800 mg/d with escalation to 1,000 mg/d. The trial design was one-stage Fleming; both an objective response and 6 months of progression-free survival (PFS) were considered a successful outcome to treatment. RESULTS: A total of 112 patients (51 patients with GBM, 25 patients with A, and 36 patients with OD) were enrolled. Imatinib was in general well tolerated. The median number of cycles was 2.0 (range, 1 to 43 cycles). Five patients had an objective partial response, including three patients with GBM; all had 6 months of PFS. The 6-month PFS rate was 16% (95% CI, 8.0% to 34.0%) in GBM, 4.0% (95% CI, 0.3% to 15.0%) in OD, and 9% (95% CI, 2.0% to 25.0%) in A. The exposure to imatinib was significantly lower in patients using enzyme-inducing antiepileptic drugs. The presence of ABCG2 point mutations was not correlated with pharmacokinetic findings. No somatic activating mutations of KIT or platelet-derived growth factor receptor-A or -B were found. CONCLUSION: In the dose range of 600 to 1,000 mg/d, single-agent imatinib is well tolerated but has limited antitumor activity in patients with recurrent gliomas.
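The Fleming one-stage design judges success against exact binomial thresholds. As an illustration, an exact (Clopper-Pearson) interval for the GBM arm's 6-month PFS proportion can be computed as sketched below; the count of roughly 8 successes out of 51 is an assumption for illustration, and the paper's intervals likely come from Kaplan-Meier estimates, so this exact binomial interval need not reproduce them.

```python
from math import comb

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion x/n."""
    def pmf(k, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    def solve(tail, target, increasing):
        # Bisect for the p in [0, 1] where tail(p) == target.
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if (tail(mid) < target) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    tail_ge = lambda p: sum(pmf(k, p) for k in range(x, n + 1))  # P(X >= x)
    tail_le = lambda p: sum(pmf(k, p) for k in range(0, x + 1))  # P(X <= x)
    lower = 0.0 if x == 0 else solve(tail_ge, alpha / 2, True)
    upper = 1.0 if x == n else solve(tail_le, alpha / 2, False)
    return lower, upper
```

For 8 of 51 this gives roughly (0.07, 0.29), close to but not identical with the reported 8.0%-34.0%, as expected for a time-to-event endpoint.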

Relevance: 60.00%

Abstract:

This paper aims to estimate a translog stochastic frontier production function in the analysis of a panel of 150 mixed Catalan farms in the period 1989-1993, in order to attempt to measure and explain variation in technical inefficiency scores with a one-stage approach. The model uses gross value added as the aggregate output measure. Total employment, fixed capital, current assets, specific costs and overhead costs are introduced into the model as inputs. Stochastic frontier estimates are compared with those obtained using a linear programming method with a two-stage approach. The specification of the translog stochastic frontier model appears to be an appropriate representation of the data; technical change was rejected and the technical inefficiency effects were statistically significant. The mean technical efficiency in the period analyzed was estimated to be 64.0%. Farm inefficiency levels were found to be significantly (at the 5% level) and positively correlated with the number of economic size units.
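For reference, a generic one-stage translog stochastic frontier (in the spirit of Battese and Coelli) can be written as follows; the paper's exact specification may differ:

```latex
\ln y_{it} = \beta_0 + \sum_{j} \beta_j \ln x_{jit}
           + \tfrac{1}{2}\sum_{j}\sum_{k} \beta_{jk} \ln x_{jit}\,\ln x_{kit}
           + v_{it} - u_{it},
\qquad v_{it} \sim N(0,\sigma_v^2),\quad u_{it} \ge 0
```

with technical efficiency $TE_{it} = \exp(-u_{it})$. In the one-stage approach the inefficiency term $u_{it}$ is modelled directly as a function of explanatory variables (here, economic size units) within the same maximum-likelihood estimation, rather than regressing efficiency scores on covariates in a second step.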

Relevance: 60.00%

Abstract:

Background: Prosthetic joint infections (PJI) lead to significant long-term morbidity with high healthcare costs. We evaluated the characteristics of infections and the infection and functional outcome of knee PJI over a 10-year period. Methods: All patients hospitalized at our institution from 1/2000 through 12/2009 with knee PJI (defined as growth of the same microorganism in ≥2 tissue or synovial fluid cultures, visible purulence, sinus tract or acute inflammation on tissue histopathology) were included. Patients, their relatives and/or treating physicians were contacted to determine the outcome. Results: During the study period, 61 patients with knee PJI were identified. The median age at the time of diagnosis of infection was 73 y (range, 53-94 y); 52% were men. Median hospital stay was 37 d (range, 1-145 d). The most common reasons for primary arthroplasty were osteoarthritis (n = 48), trauma (n = 9) and rheumatoid arthritis (n = 4). 23 primary surgeries (40%) were performed at CHUV, 34 (60%) elsewhere. After surgery, 8 PJI were early (<3 months), 16 delayed (3-24 months) and 33 late (>24 months). PJI were treated with (i) open or arthroscopic debridement with prosthesis retention in 26 (46%), (ii) one-stage exchange in 1, (iii) two-stage exchange in 22 (39%) and (iv) prosthesis removal in 8 (14%). Isolated pathogens were S. aureus (13), coagulase-negative staphylococci (10), streptococci (5), enterococci (3), gram-negative rods (3) and anaerobes (3). Patients were followed for a median of 3.1 years; 2 patients died (unrelated to PJI). The outcome of infection was favorable in 50 patients (88%), whereas the functional outcome was favorable in 33 patients (58%). Conclusions: With the current treatment concept, the high cure rate of infection (88%) is associated with a less favorable functional outcome of 58%. Earlier surgical intervention and more rapid and improved diagnosis of infection may improve the functional outcome of PJI.

Relevance: 60.00%

Abstract:

Infection of total hip arthroplasties (THA) leads to significant long-term morbidity and high healthcare costs. We evaluated the different reasons for treatment failure with different surgical modalities in a 12-year prosthetic joint infection cohort study. Method: All patients hospitalized at our institution with infected THA were included either retrospectively (1999-2007) or prospectively (2008-2010). THA infection was defined as growth of the same microorganism in ≥2 tissue or synovial fluid cultures, visible purulence, sinus tract or acute inflammation on tissue histopathology. Outcome analysis was performed at outpatient visits, followed by contacting patients, their relatives and/or treating physicians afterwards. Results: During the study period, 117 patients with infected THA were identified. We excluded 2 patients due to missing data. The median age was 69 years (range, 33-102 years); 42% were women. THA was mainly performed for osteoarthritis (n = 84), followed by trauma (n = 22), necrosis (n = 4), dysplasia (n = 2), rheumatoid arthritis (n = 1), osteosarcoma (n = 1) and tuberculosis (n = 1). 28 infections occurred early (≤3 months), 25 delayed (3-24 months) and 63 late (≥24 months after surgery). Infected THA were treated with (i) two-stage exchange in 59 patients (51%, cure rate: 93%), (ii) one-stage exchange in 5 (4.3%, cure rate: 100%), (iii) debridement with change of mobile parts in 18 (17%, cure rate: 83%), (iv) debridement without change of mobile parts in 17 (14%, cure rate: 53%), (v) Girdlestone in 13 (11%, cure rate: 100%), and (vi) two-stage exchange followed by removal in 3 (2.6%). Patients were followed for a mean of 3.9 years (range, 0.1 to 9 years); 7 patients died of causes unrelated to the infected THA. 15 patients (13%) needed additional operations, 1 for mechanical reasons (dislocation of spacer) and 14 for persistent infection: 11 treated with debridement and retention (8 without and 3 with change of mobile parts) and 3 with two-stage exchange. The mean number of surgeries was 2.2 (range, 1 to 5). The infection was finally eradicated in all patients, but the functional outcome remained unsatisfactory in 20% (persistent pain or impaired mobility due to a spacer or Girdlestone situation). Conclusions: Non-adherence to the current treatment concept leads to treatment failure and subsequent operations. Precise analysis of each treatment failure can be used to improve the treatment algorithm, leading to better results.

Relevance: 60.00%

Abstract:

PURPOSE: Controversy still exists as to the best surgical treatment for periprosthetic shoulder infections. The aim of this multi-institutional study was to review a continuous retrospective series of patients treated in four European centres and to assess the respective eradication rates of the various treatment approaches. METHODS: Forty-four patients were available for this retrospective follow-up evaluation. Functional and clinical evaluation of treatment for infection was performed using the Constant-Murley score, a visual analogue scale and the Neer patient-satisfaction score. Erythrocyte sedimentation rate, serum leucocyte count and C-reactive protein were measured and shoulder X-ray examination performed prior to surgery and at the latest follow-up. RESULTS: At a mean follow-up of 41 months (range 24-98), 42 of 44 patients (95.5%) showed no signs of infection recurrence/persistence. Comparable eradication rates were observed after resection arthroplasty (100%; 6/6), two-stage revision (100%; 17/17) or permanent antibiotic-loaded spacer implant (93.3%; 14/15). No patient was treated by one-stage revision. On average, both functional and pain scores improved significantly; the worst joint function was observed after resection arthroplasty. CONCLUSIONS: This retrospective analysis, conducted on the largest published series of patients to date, shows comparable infection eradication rates after two-stage revision, resection arthroplasty or permanent spacer implant for the treatment of septic shoulder prostheses.

Relevance: 60.00%

Abstract:

This Master's thesis presents the design of the supersonic stator and subsonic rotor of a single-stage turbine, together with the inlet section and the diffuser. The thesis first reviews the applications and theory of axial turbines, after which the methods and principles underlying the design are presented. The basic design is carried out with Traupel's method using the WinAxtu 1.1 design program, and the efficiency is additionally estimated with an Excel-based calculation. The supersonic stator is designed on the basis of the basic design results, applying the method of characteristics to the diverging part of the nozzle and area ratios to the converging part. The rotor camber line is drawn with Sahlberg's method, and the blade shape is determined by combining the A3K7 thickness distribution with design principles for dense blade cascades. The inlet section is designed to be as smooth as possible, following the geometry data and examples from the literature, and is finally modelled with CFD. The diffuser is designed using, where applicable, data presented in the literature, the inlet-section geometry, and CFD calculations. Finally, the design results are compared with results reported in the literature, and the success of the design and possible problem areas are assessed.
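The area-ratio sizing of the converging-diverging nozzle mentioned above rests on the isentropic area-Mach relation; a minimal sketch, assuming a calorically perfect gas with ratio of specific heats γ:

```python
def area_ratio(M, gamma=1.4):
    """Isentropic area ratio A/A* for Mach number M
    (calorically perfect gas; A* is the sonic throat area)."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M
```

For example, an exit Mach number of 2 at γ = 1.4 requires an exit-to-throat area ratio of 1.6875; area ratios of this kind size the converging part, while the method of characteristics shapes the diverging contour.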

Relevance: 60.00%

Abstract:

In this thesis, membrane filtration of paper machine clear filtrate was studied. The aim of the study was to find membrane processes able to produce, economically, water of sufficient purity from paper machine white water or its save-all clarified fractions for reuse in the paper machine short circulation. Factors affecting membrane fouling in this application were also studied. The thesis gives an overview of experiments done on a laboratory and a pilot scale with several different membranes and membrane modules. The results were judged by the obtained flux, the fouling tendency and the permeate quality assessed with various chemical analyses. It was shown that membrane modules which used a turbulence promoter of some kind gave the highest fluxes. However, the results showed that the greater the reduction in the concentration polarisation layer caused by increased turbulence in the module, the smaller the reductions in measured substances. Of the micro-, ultra- and nanofiltration membranes tested, only nanofiltration membranes produced permeate whose quality was very close to that of the chemically treated raw water used as fresh water in most paper mills today, and which should thus be well suited for reuse as shower water in both the wire and press sections. It was also shown that a one-stage nanofiltration process was more effective than processes in which micro- or ultrafiltration was used as pretreatment for nanofiltration. It was generally observed that acidic pH, high organic matter content, the presence of multivalent ions, hydrophobic membrane material and high membrane cut-off increased the fouling tendency of the membranes.
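The trade-off noted above (more turbulence gives higher flux but smaller solute reductions) is commonly discussed through the film model of concentration polarisation; a minimal sketch of the gel-polarisation limiting flux, assuming the classical film model applies to this system:

```python
import math

def limiting_flux(k, c_gel, c_bulk):
    """Gel-polarisation (film model) limiting flux, J = k * ln(c_gel / c_bulk):
    at the limiting flux, convection of solute toward the membrane balances
    back-diffusion through the polarisation layer."""
    return k * math.log(c_gel / c_bulk)
```

Raising turbulence raises the mass-transfer coefficient k and hence the flux, but it also thins the polarisation layer, which otherwise acts as a secondary dynamic membrane; this is consistent with the lower solute reductions observed at higher turbulence.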

Relevance: 60.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulation to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in accuracy and robustness of the uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.

The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
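The two-stage (delayed-acceptance) MCMC idea described above, where a cheap proxy screens proposals before the expensive flow model is run, can be sketched on a toy one-dimensional target; the biased-Gaussian proxy and all parameter values below are illustrative assumptions, not the thesis's actual models:

```python
import math
import random

def two_stage_mcmc(log_post, log_proxy, x0, n_iter=20000, step=1.0, seed=1):
    """Two-stage (delayed-acceptance) Metropolis with a symmetric proposal:
    stage 1 accepts/rejects with the cheap proxy; only survivors trigger an
    evaluation of the expensive posterior, whose stage-2 test corrects
    exactly for the proxy error (Christen & Fox style)."""
    rng = random.Random(seed)
    x, lp_x, lq_x = x0, log_post(x0), log_proxy(x0)
    chain, exact_calls = [], 1
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)
        lq_y = log_proxy(y)
        # Stage 1: screen with the proxy.
        if math.log(rng.random()) < lq_y - lq_x:
            lp_y = log_post(y)          # expensive model, only for survivors
            exact_calls += 1
            # Stage 2: correct for the proxy error.
            if math.log(rng.random()) < (lp_y - lp_x) - (lq_y - lq_x):
                x, lp_x, lq_x = y, lp_y, lq_y
        chain.append(x)
    return chain, exact_calls

# Exact target: standard normal; proxy: cheap, slightly biased approximation.
exact = lambda x: -0.5 * x * x
proxy = lambda x: -0.5 * (x - 0.3) ** 2

chain, calls = two_stage_mcmc(exact, proxy, 0.0)
```

The stage-2 correction guarantees the chain still targets the exact posterior, while the expensive model is evaluated only for proposals that pass the proxy screen.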

Relevance: 60.00%

Abstract:

This work describes the liquid-liquid extraction of uranium after digestion of colofanite (a fluorapatite) from Itataia with sulfuric acid. The experiments were run at room temperature in one stage. Among the solutions tested, the highest distribution coefficient (D > 60) was found for 40% vol. DEHPA (di(2-ethylhexyl)phosphoric acid) + 20% vol. TOPO (trioctylphosphine oxide) in kerosene. Thorium in the raffinate was quantitatively extracted by TOPO (0.1% vol.) in cyclohexane. Uranium stripping and separation from iron was possible using 1.5 mol L-1 ammonium or sodium carbonate (room temperature, one stage). However, pH control is essential for a good separation.
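Given a distribution coefficient D and the aqueous-to-organic volume ratio, the fraction of solute recovered in one (or several crosscurrent) stages follows from the usual equilibrium mass balance; a sketch assuming ideal equilibrium and no entrainment:

```python
def fraction_extracted(D, aq_to_org_ratio=1.0, stages=1):
    """Fraction of solute transferred to the organic phase after `stages`
    crosscurrent contacts with fresh solvent, where the distribution
    coefficient D = [org]/[aq] at equilibrium."""
    remaining = 1.0  # fraction still in the aqueous phase
    for _ in range(stages):
        remaining *= 1.0 / (1.0 + D / aq_to_org_ratio)
    return 1.0 - remaining
```

With D > 60 and equal phase volumes, a single stage already transfers more than 98% of the uranium, consistent with one-stage operation being sufficient here.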

Relevance: 60.00%

Abstract:

Spent oxidized (500 ºC, 5 h) commercial NiW/Al2O3 catalysts were processed using two different routes: (a) fusion with NaOH (650 ºC, 1 h), after which the roasted mass was leached in water; (b) leaching with HCl or H2SO4 (70 ºC, 1-3 h). HCl was the best leachant. In both routes, soluble tungsten was extracted at pH 1 with Alamine 336 (10 vol.% in kerosene) and stripped with 2 mol L-1 NH4OH (25 ºC, one stage, aqueous/organic ratio = 1 v/v). Tungsten was isolated as ammonium paratungstate in very high yield (> 97.5%). The elements were better separated using the acidic route.

Relevance: 60.00%

Abstract:

The demand for more efficient manufacturing processes has been increasing in the last few years. The cold forging process is presented as a possible solution because it allows the production of parts with a good surface finish and good mechanical properties. Nevertheless, cold forming sequence design is very empirical and is based on the designer's experience. Computational modeling of each forming stage by the finite element method can make sequence design faster and more efficient, decreasing the use of conventional "trial and error" methods. In this study, a commercial general-purpose finite element program (ANSYS) was applied to model a forming operation. Models have been developed to simulate the ring compression test and a basic forming operation (upsetting) that appears in most cold forging part sequences. The simulated upsetting operation is one stage of the manufacturing process for automotive starter parts. Experiments were done to obtain the stress-strain curve of the material, the material flow during the simulated stage, and the required forming force. These experiments provided results used as numerical model input data and for validation of the model results. The comparison between experiments and numerical results confirms the potential of the developed methodology for die-filling prediction.
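As a rough analytical cross-check on a simulated upsetting force, the classical slab-method estimate for compressing a cylindrical billet can be sketched as below; the Hollomon flow-stress fit and all numerical values are illustrative assumptions, not data from the study:

```python
import math

def upsetting_force(d, h, K, n, mu, h0):
    """Slab-method estimate of cylinder upsetting force.
    Flow stress from a Hollomon fit (sigma = K * eps**n); Coulomb friction
    raises the mean die pressure by roughly (1 + mu*d/(3*h)).
    With d, h, h0 in mm and K in MPa, the result is in N."""
    eps = math.log(h0 / h)                       # true strain at height h
    Y = K * eps ** n                             # flow stress [MPa]
    p_avg = Y * (1.0 + mu * d / (3.0 * h))       # mean die pressure [MPa]
    return p_avg * math.pi * d * d / 4.0         # force [N]
```

For a hypothetical billet upset from 20 mm to 15 mm height at 20 mm diameter (K = 600 MPa, n = 0.2, mu = 0.1), this predicts a force on the order of 150 kN; such hand estimates are useful for sanity-checking FEM output, not for replacing it.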

Relevance: 60.00%

Abstract:

Intimate partner violence is a frequent criminal phenomenon in Quebec. In 2008, offences committed in a conjugal context accounted for more than 20% of crimes against the person reported to the police (Ministère de la Sécurité publique, 2010). Police and judicial intervention in conjugal cases is complex, notably because of the relationship between the aggressor and the victim. Although the discretionary power of judicial actors in conjugal cases has been greatly limited over recent decades, they still retain some latitude in deciding whether or not to pursue the various stages of the judicial process. Over time, several studies have examined the factors influencing decision-making in conjugal cases. However, these generally address only a single stage of the process, and some decision factors have never been tested empirically. This is notably the case for elements tied to stereotypes of intimate partner violence. Some authors state that incidents that do not match the stereotype of a male aggressor assaulting a victim seen as blameless and innocent receive more summary judicial treatment, but to our knowledge these claims rest on no empirical data. This study tests that hypothesis by examining the impact of these elements on five police and judicial decisions. Based on a quantitative content analysis of documents related to the judicial processing of 371 incidents committed in a conjugal context in the territory of the Centre opérationnel Nord of the Service de police de la Ville de Montréal in 2008, the thesis examines the use of discretionary power in the judicial treatment of these incidents. It has three specific objectives. The first describes the judicial trajectory of incidents committed in a conjugal context. Our results indicate that these incidents receive more punitive treatment, since they proceed to court more frequently than other types of crime. This more systematic prosecution may explain their low conviction rate (17.2%). The second objective describes the main characteristics of these incidents. The majority involve physical violence, and the police generally intervene with current partners. Most victims report prior violence within the couple, and one third want to press charges against the suspect. Finally, 78% of incidents involve a male aggressor and a female victim, and 14.29% of victims are suspected of having made the first hostile or violent gesture during the incident. The last objective identifies the main elements associated with decisions made in conjugal cases. The results confirm the hypothesis that incidents not involving a male aggressor and a female victim, or those in which the police suspect the victim of having made the first hostile or violent gesture, receive more summary judicial treatment. Moreover, most of the decision factors studied lose their influence over the course of the judicial process, and earlier decisions strongly influence subsequent ones. Finally, the victim's wish to press charges does not directly influence the decisions of judicial actors.

Relevance: 60.00%

Abstract:

Introduction: Typical hip dislocation is a condition with a high incidence, hence the need for effective methods to achieve its reduction. Several reduction methods have emerged; one of them is open reduction via the medial approach. Methods: Retrospective descriptive study including the cases of open reduction via the medial approach operated by one of the tutors at the Instituto de Ortopedia Infantil Roosevelt and Clínica Jorge Piñeros Corpas from January 2006 to June 2011, assessing these hips according to the Salter criteria for avascular necrosis (AVN), with a minimum follow-up of 18 months. Interobserver agreement on the Salter classification was evaluated among three pediatric orthopedists. Results: Twenty hips in 16 patients who underwent open reduction of typical hip dislocation via the medial approach were evaluated. AVN was present in 40% of the hips; 75% of these hips had type I AVN according to the Kalamchi classification. The kappa index for the Salter classification among three pediatric orthopedists was 0.6. Discussion: The medial approach for reduction of typical hip dislocation in children under 18 months is a further alternative for the management of these patients; it can produce AVN of the hip, but this AVN is Kalamchi type I in most cases. The reproducibility of radiographic assessment of AVN performed by experts is good, as measured by a kappa index of 0.6.
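The interobserver agreement reported above (kappa = 0.6) is Cohen's kappa; a minimal sketch of its computation from a rater-versus-rater contingency table (the example matrix is hypothetical, not the study's data):

```python
def cohens_kappa(table):
    """Cohen's kappa from a square inter-rater confusion matrix
    (rows: rater A's categories, columns: rater B's categories)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n          # observed agreement
    pe = sum(                                            # chance agreement
        sum(table[i]) * sum(row[i] for row in table) for i in range(k)
    ) / (n * n)
    return (po - pe) / (1 - pe)
```

Kappa corrects raw agreement for agreement expected by chance, which is why values around 0.6 are usually read as "good" rather than near-perfect agreement.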

Relevance: 60.00%

Abstract:

We study the role of natural resource windfalls in explaining the efficiency of public expenditures. Using a rich dataset of expenditures and public good provision for 1,836 municipalities in Peru for the period 2001-2010, we estimate a non-monotonic relationship between the efficiency of public good provision and the level of natural resource transfers. Local governments that were extremely favored by the boom in mineral prices were more efficient in using fiscal windfalls, whereas those that benefited from modest transfers were less efficient. These results can be explained by the increase in political competition associated with the boom. However, the fact that increases in efficiency were related to reductions in public good provision casts doubt on the beneficial effects of political competition in promoting efficiency.

Relevance: 60.00%

Abstract:

The first haploid angiosperm, a dwarf form of cotton with half the normal chromosome complement, was discovered in 1920, and in the ninety years since then such plants have been identified in many other species. They can occur spontaneously or can be induced by modified pollination methods in vivo, or by in vitro culture of immature male or female gametophytes. Haploids represent an immediate, one-stage route to homozygous diploids and thence to F(1) hybrid production. The commercial exploitation of heterosis in such F(1) hybrids led to the development of hybrid seed companies and subsequently to the GM revolution in agriculture. This review describes the range of techniques available for the isolation or induction of haploids and discusses their value in a range of areas, from fundamental research on mutant isolation and transformation through to applied aspects of quantitative genetics and plant breeding. It also focuses on how molecular methods have been used recently to explore some of the underlying aspects of this fascinating developmental phenomenon.