862 results for: Weights and measures, Arab.
Abstract:
INTRODUCTION Significant pulmonary vascular disease is a leading cause of death in patients with scleroderma, and early detection and early medical intervention are important, as they may delay disease progression and improve survival and quality of life. Although several biomarkers have been proposed, there remains a need to define a reliable biomarker of early pulmonary vascular disease and of the subsequent development of pulmonary hypertension (PH). The purpose of this study was to define potential biomarkers for clinically significant pulmonary vascular disease in patients with scleroderma. METHODS The circulating growth factors basic fibroblast growth factor, placental growth factor (PlGF), vascular endothelial growth factor (VEGF), hepatocyte growth factor, and soluble VEGF receptor 1 (sFlt-1), as well as cytokines (interleukin [IL]-1β, IL-2, IL-4, IL-5, IL-8, IL-10, IL-12, IL-13, tumor necrosis factor-α, and interferon-γ), were quantified in patients with scleroderma with PH (n = 37) or without PH (n = 40). In non-parametric unadjusted analyses, we examined associations of growth factor and cytokine levels with PH. In a subset of each group, a second set of earlier samples, drawn 3.0±1.6 years earlier, was assessed to determine changes over time. RESULTS sFlt-1 (p = 0.02) and PlGF (p = 0.02) were higher in the PH group than in the non-PH group. sFlt-1 (ρ = 0.3245; p = 0.01) correlated positively with right ventricular systolic pressure. Both PlGF (p = 0.03) and sFlt-1 (p = 0.04) correlated positively with the ratio of forced vital capacity to diffusing capacity for carbon monoxide (DLCO), and both correlated inversely with DLCO (p = 0.01). Both PlGF and sFlt-1 levels were stable over time in the control population. CONCLUSIONS Our study demonstrated clear associations between regulators of angiogenesis (sFlt-1 and PlGF) and measures of PH in scleroderma, suggesting that these growth factors are potential biomarkers for PH in patients with scleroderma.
Larger longitudinal studies are required for validation of our results.
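As a sketch of the kind of non-parametric, unadjusted analysis described here, the following uses a Mann-Whitney U test for the PH vs non-PH group comparison and a Spearman rank correlation against a continuous haemodynamic measure. All numbers are synthetic placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)

# Hypothetical sFlt-1 levels for illustration only (NOT the study's data)
sflt1_ph = rng.normal(110.0, 15.0, size=37)     # PH group, n = 37
sflt1_no_ph = rng.normal(95.0, 15.0, size=40)   # non-PH group, n = 40

# Unadjusted two-group comparison (non-parametric)
u_stat, p_group = mannwhitneyu(sflt1_ph, sflt1_no_ph)

# Spearman rank correlation with a synthetic stand-in for RVSP
rvsp = 0.3 * sflt1_ph + rng.normal(0.0, 10.0, size=37)
rho, p_corr = spearmanr(sflt1_ph, rvsp)
print(round(rho, 2), p_group < 0.05)
```

Both tests are rank-based, so they match the "non-parametric unadjusted analyses" described in the methods without assuming normality of the biomarker levels.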
Abstract:
IMPORTANCE Despite antirestenotic efficacy of coronary drug-eluting stents (DES) compared with bare metal stents (BMS), the relative risk of stent thrombosis and adverse cardiovascular events is unclear. Although dual antiplatelet therapy (DAPT) beyond 1 year provides ischemic event protection after DES, ischemic event risk is perceived to be less after BMS, and the appropriate duration of DAPT after BMS is unknown. OBJECTIVE To compare (1) rates of stent thrombosis and major adverse cardiac and cerebrovascular events (MACCE; composite of death, myocardial infarction, or stroke) after 30 vs 12 months of thienopyridine in patients treated with BMS taking aspirin and (2) treatment duration effect within the combined cohorts of randomized patients treated with DES or BMS as prespecified secondary analyses. DESIGN, SETTING, AND PARTICIPANTS International, multicenter, randomized, double-blinded, placebo-controlled trial comparing extended (30-month) thienopyridine vs placebo in patients taking aspirin who completed 12 months of DAPT without bleeding or ischemic events after receiving stents. The study was initiated in August 2009 with the last follow-up visit in May 2014. INTERVENTIONS Continued thienopyridine or placebo at months 12 through 30 after stent placement, in 11,648 randomized patients treated with aspirin, of whom 1687 received BMS and 9961 DES. MAIN OUTCOMES AND MEASURES Stent thrombosis, MACCE, and moderate or severe bleeding. RESULTS Among 1687 patients treated with BMS who were randomized to continued thienopyridine vs placebo, rates of stent thrombosis were 0.50% vs 1.11% (n = 4 vs 9; hazard ratio [HR], 0.49; 95% CI, 0.15-1.64; P = .24), rates of MACCE were 4.04% vs 4.69% (n = 33 vs 38; HR, 0.92; 95% CI, 0.57-1.47; P = .72), and rates of moderate/severe bleeding were 2.03% vs 0.90% (n = 16 vs 7; P = .07), respectively.
Among all 11,648 randomized patients (both BMS and DES), stent thrombosis rates were 0.41% vs 1.32% (n = 23 vs 74; HR, 0.31; 95% CI, 0.19-0.50; P < .001), rates of MACCE were 4.29% vs 5.74% (n = 244 vs 323; HR, 0.73; 95% CI, 0.62-0.87; P < .001), and rates of moderate/severe bleeding were 2.45% vs 1.47% (n = 135 vs 80; P < .001). CONCLUSIONS AND RELEVANCE Among patients undergoing coronary stent placement with BMS and who tolerated 12 months of thienopyridine, continuing thienopyridine for an additional 18 months compared with placebo did not result in statistically significant differences in rates of stent thrombosis, MACCE, or moderate or severe bleeding. However, the BMS subset may have been underpowered to identify such differences, and further trials are suggested. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT00977938.
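For intuition, the reported cumulative event rates can be combined into a crude risk ratio. This is only a rough stand-in for the Cox-model hazard ratios the trial actually reports; it happens to land close to them here because follow-up duration was similar in both arms:

```python
def crude_risk_ratio(rate_treated_pct, rate_placebo_pct):
    """Ratio of cumulative event rates (in %). It approximates a hazard
    ratio only when follow-up and censoring are similar in both arms."""
    return rate_treated_pct / rate_placebo_pct

# Pooled stent thrombosis: 0.41% (thienopyridine) vs 1.32% (placebo)
print(round(crude_risk_ratio(0.41, 1.32), 2))  # -> 0.31, near the reported HR of 0.31
# Pooled MACCE: 4.29% vs 5.74%
print(round(crude_risk_ratio(4.29, 5.74), 2))  # -> 0.75, vs the reported HR of 0.73
```

The small MACCE discrepancy (0.75 vs 0.73) reflects that the trial's HR comes from a time-to-event model rather than a simple ratio of cumulative rates.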
Abstract:
In a multi-level stakeholder approach, the international level is of primordial importance not only in terms of legal frameworks, but also in terms of scientific analysis of the needs, options and constraints, as well as in relation to monitoring and evaluation systems. The Working Group on 'International Actions for the Sustainable Use of Soils' (IASUS) of the International Union of Soil Science (IUSS) identified a number of issues and measures in preparation for the 17th World Congress of Soil Science held in Bangkok, Thailand, in August 2002, and prepared a resolution in support of a 'global agenda for the sustainable use of soils', which was adopted on 21st August 2002 on the closing day of the congress.
Abstract:
Importance: Although rheumatic heart disease has been nearly eradicated in high-income countries, 3 in 4 children grow up in parts of the world where it is still endemic. Objectives: To determine the prevalence of clinically silent and manifest rheumatic heart disease as a function of age, sex, and socioeconomic status and to estimate age-specific incidence. Design, Setting, and Participants: In this school-based cross-sectional study with cluster sampling, 26 schools in the Sunsari district in Eastern Nepal with 5467 eligible children 5 to 15 years of age were randomly selected from 595 registered schools. After exclusion of 289 children, 5178 children were enrolled in the present study from December 12, 2012, through September 12, 2014. Data analysis was performed from October 1, 2014, to April 15, 2015. Exposures: Demographic and socioeconomic characteristics were acquired in a standardized interview by means of a questionnaire customized to the age of the children. A focused medical history was followed by a brief physical examination. Cardiac auscultation and transthoracic echocardiography were performed by 2 independent physicians. Main Outcomes and Measures: Rheumatic heart disease according to the World Heart Federation criteria. Results: The median age of the 5178 children enrolled in the study was 10 years (interquartile range, 8-13 years), and 2503 (48.3%) were female. The prevalence of borderline or definite rheumatic heart disease was 10.2 (95% CI, 7.5-13.0) per 1000 children and increased with advancing age from 5.5 (95% CI, 3.5-7.5) per 1000 children 5 years of age to 16.0 (95% CI, 14.9-17.0) in children 15 years of age, whereas the mean incidence remained stable at 1.1 per 1000 children per year. 
Children with rheumatic heart disease were older than children without rheumatic heart disease (median age [interquartile range], 11 [9-14] years vs 10 [8-13] years; P = .03), more commonly female (34 [64.2%] vs 2469 [48.2%]; P = .02), and more frequently went to governmental schools (40 [75.5%] vs 2792 [54.5%]; P = .002). Silent disease (n = 44) was 5 times more common than manifest disease (n = 9). Conclusions and Relevance: Rheumatic heart disease affects 1 in 100 schoolchildren in Eastern Nepal, is primarily clinically silent, and may be more common among girls. The overall prevalence and the ratio of manifest to subclinical disease increase with advancing age, whereas the incidence remains stable at 1.1 per 1000 children per year. Early detection of silent disease may help prevent progression to severe valvular damage.
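The prevalence figures can be reproduced, at least approximately, with a simple binomial point estimate and a normal-approximation confidence interval. The study used cluster sampling, which would normally call for a design-adjusted variance, so treat this as a sketch:

```python
import math

def prevalence_per_1000(cases, n, z=1.96):
    """Point prevalence per 1000 with a normal-approximation 95% CI."""
    p = cases / n
    se = math.sqrt(p * (1.0 - p) / n)
    return 1000 * p, 1000 * max(p - z * se, 0.0), 1000 * (p + z * se)

# 44 silent + 9 manifest cases among the 5178 enrolled children
est, lo, hi = prevalence_per_1000(44 + 9, 5178)
print(round(est, 1), round(lo, 1), round(hi, 1))  # -> 10.2 7.5 13.0
```

The unadjusted interval already matches the reported 10.2 (95% CI, 7.5-13.0) per 1000 children.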
Abstract:
Prevalence and mortality rates for non-insulin-dependent (Type II) diabetes mellitus are two to five times greater in the Mexican-American population than in the general U.S. population. Diabetes has been associated with risk factors that increase the likelihood of developing atherosclerosis. Relatives of non-insulin-dependent diabetic probands are at increased risk of developing diabetes, and offspring of diabetic parents are at greater risk. Elevations in risk factor levels clearly begin to develop prior to adulthood; therefore, an excess of these risk factors is expected among offspring and relatives of diabetics. The purposes of this study were to describe levels of risk factors within a group of Mexican American children who were identified through a diabetic proband, and to determine whether there was a relationship between risk factor levels and heritability. Data from three hundred and seventy-six children and adolescents between the ages of 7 and 13 years, inclusive, were analyzed. These children were identified through a diabetic proband who participated in the Diabetes Alert Study. This study group was compared to a representative sample of Mexican American children who participated in the Hispanic Health and Nutrition Examination Survey. For females, there were statistically significant associations between upper-body fat distribution and increased systolic and diastolic blood pressure after adjusting for age and measures of fatness. Body mass index was positively related to, and explained a significant portion of the variability in, systolic blood pressure, total cholesterol, and HDL-cholesterol, for males only. No relationship was found between degree of relationship to the diabetic proband and risk factor levels.
The most likely explanations for this were insufficient sample size to detect differences and/or incomplete ascertainment of pedigree information. Although there was evidence that these Mexican American children are fatter and have a more central fat distribution than non-Hispanic children, there is no evidence of increased risk for diabetes and/or cardiovascular disease at these ages.
Abstract:
A new impetus for data collection seems to be gaining ground, perhaps an heir to the "social indicators movement". That movement was a legacy of those who championed quantification in the social sciences, insofar as numbers were believed to be objective and scientific per se and information was regarded as a citizen's right. The study of society in its multiple dimensions has stimulated the search for and construction of statistical indicators and indices. However, the interest in better ways of studying social progress has often led to an inappropriate use of indicators and measures. GDP, for example, has frequently been taken as an indicator of well-being. But the lack of a conceptual framework for the study of well-being is not the only problem, nor even the most important one. Of similar, or even greater, significance is the limited statistical literacy of journalists, policy makers and, in general, the citizenry. Together, these elements help to limit the use of data in public debate. In this article I address the shift from political arithmetic to modern social reporting (sec. 1); the success of quantification in state administration (sec. 2); the misuses of quantification (sec. 3); and the current non-use of quantification, together with the search for contextual conditions that interfere with the transformation of information into knowledge (sec. 4).
Abstract:
Biological activity introduces variability in element incorporation during calcification and thereby decreases the precision and accuracy when using foraminifera as geochemical proxies in paleoceanography. This so-called 'vital effect' consists of organismal and environmental components. Whereas organismal effects include uptake of ions from seawater and subsequent processing upon calcification, environmental effects include migration- and seasonality-induced differences. Triggering asexual reproduction and culturing juveniles of the benthic foraminifer Ammonia tepida under constant, controlled conditions allow environmental and genetic variability to be removed and the effect of cell-physiological controls on element incorporation to be quantified. Three groups of clones were cultured under constant conditions while determining their growth rates, size-normalized weights and single-chamber Mg/Ca and Sr/Ca using laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). Results show no detectable ontogenetic control on the incorporation of these elements in the species studied here. Despite constant culturing conditions, Mg/Ca varies by a factor of ~4 within an individual foraminifer while intra-individual Sr/Ca varies by only a factor of 1.6. Differences between clone groups were similar to the intra-clone group variability in element composition, suggesting that any genetic differences between the clone groups studied here do not affect trace element partitioning. Instead, variability in Mg/Ca appears to be inherent to the process of bio-calcification itself. The variability in Mg/Ca between chambers shows that measurements of at least 6 different chambers are required to determine the mean Mg/Ca value for a cultured foraminiferal test with a precision of ≤10%.
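The closing sample-size statement follows from the standard error of the mean: with relative chamber-to-chamber scatter rel_sd, the relative precision of an n-chamber mean is rel_sd / sqrt(n). A minimal sketch (the ~25% relative SD below is an illustrative assumption, not a value quoted from the paper):

```python
import math

def chambers_needed(rel_sd, target_rel_se):
    """Smallest n for which rel_sd / sqrt(n) <= target_rel_se."""
    return math.ceil((rel_sd / target_rel_se) ** 2)

# ~25% assumed chamber-to-chamber scatter, <=10% target precision on the mean
print(chambers_needed(0.25, 0.10))  # -> 7 chambers
```

The required n grows quadratically with scatter, which is why the strongly variable Mg/Ca needs several chambers while the tighter Sr/Ca would need fewer.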
Abstract:
In order to illustrate how the input-output approach can be used to explore various aspects of a country's participation in GVCs, this paper applies indicators derived from the concept of trade in value-added (TiVA) to the case of Costa Rica. We intend to provide developing countries that seek to foster GVC-driven structural transformation with an example that demonstrates an effective way to measure progress. The analysis presented in this paper makes use of an International Input-Output Table (IIOT) that was constructed by including Costa Rica's first Input-Output Table (IOT) into an existing IIOT. The TiVA indicator has been used to compare and contrast import flows, export flows and bilateral trade balances in terms of gross trade and trade in value-added. The country's comparative advantage is discussed based on a TiVA-related indicator of revealed comparative advantage. The paper also decomposes the domestic content of value added in each sector and measures the degree of fragmentation in the value chains in which Costa Rica participates, highlighting the partner countries that add the most value.
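The revealed-comparative-advantage indicator mentioned here is, in its classic form, the Balassa index; the paper's TiVA-related variant computes it on value-added rather than gross export flows, but the mechanics are the same. A hedged sketch:

```python
import numpy as np

def balassa_rca(exports):
    """exports[i, j] = exports of sector j by country i (gross or
    value-added flows). RCA > 1 signals a revealed comparative
    advantage of country i in sector j."""
    country_share = exports / exports.sum(axis=1, keepdims=True)
    world_share = exports.sum(axis=0) / exports.sum()
    return country_share / world_share

# Toy 2-country x 2-sector example
x = np.array([[10.0, 0.0],
              [10.0, 10.0]])
print(balassa_rca(x))  # country 0 specializes in sector 0 (RCA = 1.5)
```

Swapping gross exports for the domestic value-added embodied in exports, as the TiVA approach does, changes the input matrix but not the formula.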
Abstract:
The active initiative taken by Russian President Vladimir Putin in bombarding the antigovernment forces in Syria at the end of September 2015 startled the world with its calculated boldness. Russian intervention has radically changed the dynamic of the war by empowering the Syrian government of Bashar Assad, and resulted in a ceasefire agreement, led by Russia and the US, that started on 27 February 2016. No one can predict at present the next stage of the conflict in Syria or whether it will end in a positive solution to the tragic wars there. However, there is no denying that Russia has played an important role in how the situation has developed. This paper analyzes Putin's motivations for intervening in the Syrian crisis and the factors that have enabled Russia to play an enlarged role in the Middle East, seemingly beyond its objective capabilities. Legacies of international networks built during the Soviet period; shrewd tactics in exploiting the inconsistency and vacillation of US policies, particularly towards the Middle East; Russia's historical experience of interaction with Muslim cultures, including domestic ones; its geopolitical perception of world politics; and the export of energy resources and military weapons as tools of diplomacy are some of the factors that explain Russian behavior. At the same time, the personal leadership and accumulated experience of President Putin in formulating Russian diplomacy and in combining different issues into a single policy should be taken into account. His initiative in Syria succeeded to some extent in turning world attention away from the Ukrainian issue, with the aim of changing the present sanctions imposed by the West. Another phenomenon to be noted in the international arena is the newly developed mutual interaction between Russia and the Arab countries of the Gulf.
Frequent visits to Russia by autocratic leaders, including kings, emirs and princes, do not always reflect a shared common interest between Russia and the Arab leaders. On the contrary, in spite of sharp and fundamental differences in their attitudes toward the issues related to Syria, Iran and Yemen, the Arab leaders find it necessary to communicate with Russia and to learn Russia's expected strategies and intentions towards the Middle East, apart from its oil and gas policies. The Iran nuclear deal of July 2015 may have been a factor behind this phenomenon.
Abstract:
Locally weighted regression is a technique that predicts the response for new data items from their neighbors in the training data set, where closer data items are assigned higher weights in the prediction. However, the original method may suffer from overfitting and fail to select the relevant variables. In this paper we propose combining a regularization approach with locally weighted regression to achieve sparse models. Specifically, the lasso is a shrinkage and selection method for linear regression. We present an algorithm that embeds the lasso in an iterative procedure that alternates between computing weights and performing lasso regression. The algorithm is tested on three synthetic scenarios and two real data sets. Results show that the proposed method outperforms linear and local models in several kinds of scenarios.
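A minimal one-shot sketch of the idea, using Gaussian kernel weights feeding a weighted lasso via scikit-learn. The paper's algorithm additionally iterates between weight computation and lasso fits, which is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import Lasso

def local_lasso_predict(X, y, x0, tau=0.7, alpha=0.01):
    """Fit a lasso around query point x0, weighting each training item
    by a Gaussian kernel of bandwidth tau, and predict y at x0."""
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2.0 * tau ** 2))
    model = Lasso(alpha=alpha, max_iter=10_000)
    model.fit(X, y, sample_weight=w)       # closer items weigh more
    return model.predict(x0[None, :])[0], model.coef_

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(300, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=300)   # only feature 0 is relevant
pred, coef = local_lasso_predict(X, y, np.array([0.2, 0.0, 0.0]))
```

With a sparse true model, the lasso penalty shrinks the irrelevant coefficients toward zero while the kernel keeps the fit local, which is exactly the combination the abstract proposes.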
Abstract:
Systems of Systems (SoS) present challenging features, and existing tools are often inadequate for their analysis, especially for heterogeneous networked infrastructures. Most accident scenarios in networked systems cannot be addressed by a simplistic black-or-white (i.e. functioning or failed) approach. Slow deviations from nominal operating conditions may cause degraded behaviours that suddenly end up in unexpected malfunctioning, with large portions of the network affected. In this paper, we present a language for modelling networked SoS. The language makes it possible to represent interdependencies of various natures, e.g. technical, organizational and human. The representation of interdependencies is based on control relationships that exchange physical quantities and related information. The language also enables the identification of accident scenarios by representing the propagation of failure events throughout the network. The results can be used for assessing the effectiveness of those mechanisms and measures that contribute to overall resilience, in both qualitative and quantitative terms. The presented modelling methodology is general enough to be applied in combination with existing system analysis techniques, such as risk assessment, dependability and performance evaluation.
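As a toy illustration of degraded (rather than binary failed/functioning) propagation over interdependencies, the sketch below lets each component run at its own health multiplied by the worst level among the components it depends on. This is a hypothetical simplification for intuition, not the paper's modelling language:

```python
def steady_levels(health, depends_on, iters=100):
    """Fixed-point sketch: each component operates at its own health
    times the worst level among its dependencies, so slow degradations
    (values between 0 and 1) propagate through the network instead of
    a binary failed/functioning flag."""
    levels = dict(health)
    for _ in range(iters):
        for node in levels:
            deps = depends_on.get(node, ())
            dep_level = min((levels[d] for d in deps), default=1.0)
            levels[node] = health[node] * dep_level
    return levels

# Toy infrastructure: a degraded power feed drags down the pump and plant
health = {"grid": 0.5, "pump": 1.0, "plant": 0.9}
deps = {"pump": {"grid"}, "plant": {"pump"}}
print(steady_levels(health, deps))  # pump -> 0.5, plant -> 0.45
```

Even this crude fixed-point model shows how a slow deviation at one node can cascade into degraded behaviour across large portions of the network.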
Abstract:
Abstract: Electric propulsion is today a very competitive technology with great projection into the future. Among the various existing plasma thrusters, the Hall effect thruster has acquired considerable maturity and constitutes an ideal means of propulsion for a wide range of missions. In the present Thesis, only Hall thrusters with conventional geometry and dielectric walls are studied. The complex interaction between multiple physical phenomena makes plasma simulation in these thrusters difficult. Hybrid models represent the best compromise between precision and computational cost. They use a fluid model for electrons and Particle-In-Cell (PIC) algorithms for ions and neutrals. The hypothesis of plasma quasineutrality is invoked, which requires the sheaths formed around the chamber walls to be solved separately. Building on an existing hybrid code, called HPHall-2, the aim of this doctoral Thesis is to develop an advanced hybrid code that better simulates the plasma discharge in a Hall effect thruster. Updates and improvements to the code address both theoretical and numerical issues. The extensive revision of the algorithms has reduced the accuracy errors by one order of magnitude, and the consistency and robustness of the code have been notably increased, allowing the simulation of the thruster over a wide range of conditions. The most relevant achievements in the particle subcode are: the implementation of a new weighting algorithm that determines the plasma flux magnitudes more accurately; the implementation of a new algorithm to control the particle population, ensuring a sufficient number of particles near the chamber walls, where the gradients are strong and the computational conditions are most critical; improvements in the mass and energy balances; and a new algorithm to compute the electric field on a non-uniform mesh.
The fulfilment of the Bohm condition at the sheath edge deserves special attention: it is a boundary condition necessary to match the hybrid code solution consistently with the plasma-wall interaction, and it remained unsatisfactorily solved in the HPHall-2 code. In this Thesis, the kinetic Bohm criterion has been implemented for an ion population with different electric charges and a large dispersion in velocities. In the code, the fulfilment of the kinetic Bohm condition is accomplished by an algorithm that introduces a thin collisionless layer next to the sheaths, producing the ion acceleration, and measures the flux of particles properly in time and space. The improvements made in the electron subcode increase the code's simulation capabilities, especially in the region downstream of the thruster, where the neutralization of the plasma jet is simulated using a volumetric cathode model. Without addressing a detailed study of plasma turbulence, simple models for a parametric adjustment of the anomalous Bohm diffusion are implemented in the code. They allow the code to reproduce the experimental values of the plasma potential and the electron temperature, as well as the discharge current of the thruster. Regarding the theoretical issues, special emphasis has been placed on the plasma-wall interaction of the thruster and on the dynamics of free secondary electrons within the plasma, questions that still remain unsolved in the simulation of Hall thrusters. The newly developed models seek results closer to reality, such as the partial thermalization sheath model, which assumes a non-Maxwellian distribution function for primary electrons and better computes the energy losses at the walls.
The confinement of secondary electrons within the chamber is evaluated by a simplified kinetic study; and using a collisionless fluid model, the densities and energies of free secondary electrons are computed, as well as their effect on plasma ionization. Simulations show that secondary electrons are quickly lost at the walls, with a negligible effect in the bulk of the plasma, but they determine the potential fall at the sheaths. Finally, the numerical simulation and theoretical work is complemented by the experimental work carried out at the Princeton Plasma Physics Laboratory, devoted to analyzing the interesting transitional regime experienced by the thruster during the startup process. It is concluded that gas impurities adhered to the thruster walls play a relevant role in this transitional regime and, as a general recommendation, a complete purge of the thruster before starting its normal mode of operation is suggested. The final result of the research conducted in this Thesis shows that the developed code represents a good tool for the simulation of Hall thrusters. The code properly reproduces the physics of the thruster, with results similar to the experimental ones, and represents a good numerical laboratory for studying the plasma inside the thruster.
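For reference, the kinetic Bohm criterion discussed above can be written, for a single ion species of mass $m_i$ and density $n_i$ with velocity distribution $f_i(v)$ at the sheath edge, in the marginal Harrison-Thompson form (the thesis generalizes this to a mixture of charge states, which is not reproduced here):

```latex
\int_0^{\infty} \frac{f_i(v)}{v^{2}}\, dv \;\le\; \frac{n_i\, m_i}{k_B T_e},
```

which for a cold mono-energetic ion beam, $f_i(v) = n_i\,\delta(v - u)$, reduces to the familiar fluid Bohm condition $u \ge c_s = \sqrt{k_B T_e / m_i}$.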