955 results for Institute for Numerical Analysis (U.S.)
Abstract:
The quench characteristics of second-generation (2G) YBCO Coated Conductor (CC) tapes are of fundamental importance for the design and safe operation of superconducting cables and magnets based on this material. Their ability to transport high current densities at high temperature, up to 77 K, and at very high fields, over 20 T, together with growing manufacturing know-how that is reducing their cost, is pushing the use of this innovative material in numerous applications, from high-field magnets for research to motors, generators and cables. The aim of this Ph.D. thesis is the experimental analysis and numerical simulation of quench in superconducting HTS tapes and coils. A measurement facility for the characterization of superconducting tapes and coils was designed, assembled and tested. The facility consists of a cryostat, a cryocooler, a vacuum system, resistive and superconducting current leads, and signal feedthroughs. Moreover, the data acquisition system and the software for critical current and quench measurements were developed. A 2D model was developed using the finite element code COMSOL Multiphysics®. The problem of modeling the high aspect ratio of the tape is tackled by multiplying the tape thickness by a constant factor and compensating the thermal and electrical balance equations by introducing an artificial material anisotropy. The model was then validated against the results of a 1D quench model based on a non-linear electric circuit coupled to a thermal model of the tape, against measurements from the literature, and against critical current and quench measurements performed in the cryogenic facility. Finally, the model was extended to the study of coils and windings through the definition of homogenized tape and stack properties. The procedure allows the definition of a multi-scale hierarchical model, able to simulate the windings with different degrees of detail.
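A worked sketch of the thickness-rescaling idea, in our notation and with one consistent choice of compensation (the thesis's exact scheme is not reproduced here): if the tape thickness δ is expanded to δ' = fδ, then the in-plane heat flow per unit width, the through-thickness conductance, the thermal mass per unit length, and the longitudinal resistance per unit length are all preserved by the anisotropic rescaling

```latex
k_\parallel' = \frac{k_\parallel}{f}, \qquad
k_\perp' = f\,k_\perp, \qquad
(\rho c_p)' = \frac{\rho c_p}{f}, \qquad
\rho_{el}' = f\,\rho_{el},
```

so that the expanded geometry reproduces the thermal and electrical balance of the real tape while remaining meshable at a modest aspect ratio.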
Abstract:
Liquids and gases form a vital part of nature. Many of them are complex fluids with non-Newtonian behaviour. We introduce a mathematical model describing the unsteady motion of an incompressible polymeric fluid. Each polymer molecule is treated as two beads connected by a spring. For a nonlinear spring force it is not possible to obtain a closed system of equations unless the force law is approximated. The Peterlin approximation replaces the squared length of the spring in the force law by its ensemble average. Consequently, a macroscopic dumbbell-based model for dilute polymer solutions is obtained. The model consists of the conservation of mass and momentum and the time evolution of the symmetric positive definite conformation tensor, where diffusive effects are taken into account. In two space dimensions we prove global-in-time existence of weak solutions. Assuming more regular data, we show higher regularity and consequently uniqueness of the weak solution. For the Oseen-type Peterlin model we propose a linear pressure-stabilized characteristics finite element scheme. We derive the corresponding error estimates and prove, for linear finite elements, optimal first-order accuracy. The theoretical error estimates for the pressure-stabilized characteristics finite element scheme are confirmed by a series of numerical experiments.
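In standard FENE/FENE-P notation (ours, not quoted from the thesis), the Peterlin closure replaces the spring length in the force law by its ensemble average:

```latex
F(R) \;=\; \frac{H\,R}{1 - |R|^2/b}
\quad\xrightarrow{\ \text{Peterlin}\ }\quad
F(R) \;\approx\; \frac{H\,R}{1 - \langle |R|^2 \rangle / b}
\;=\; \frac{H\,R}{1 - \operatorname{tr} C / b},
```

where C = ⟨R ⊗ R⟩ is the conformation tensor. The evolution equation for C then closes in terms of tr C, and the diffusive effects mentioned in the abstract contribute a term of the form ε∆C on its right-hand side.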
Abstract:
With the outlook of improving seismic vulnerability assessment for the city of Bishkek (Kyrgyzstan), the global dynamic behaviour of four nine-storey reinforced concrete (RC) large-panel buildings in the elastic regime is studied. The four buildings were built during the Soviet era within a serial production system. Since they all belong to the same series, they have very similar geometries both in plan and in elevation. Firstly, ambient vibration measurements are performed in the four buildings. The data analysis, comprising discrete Fourier transform, modal analysis (frequency domain decomposition) and deconvolution interferometry, yields the modal characteristics and an estimate of the linear impulse response function for the structures of the four buildings. Then, finite element models are set up for all four buildings and the results of the numerical modal analysis are compared with the experimental ones. The numerical models are finally calibrated considering the first three global modes, and their results match the experimental ones with an error of less than 20%.
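A minimal sketch of the frequency domain decomposition step, assuming multi-channel ambient vibration records; the function names and windowing choices below are ours, not the study's actual processing chain:

```python
# Frequency domain decomposition (FDD) sketch -- illustrative only.
# `records` is assumed to be an (n_channels, n_samples) array of
# ambient vibration data sampled at fs Hz.
import numpy as np
from scipy.signal import csd

def fdd(records, fs, nperseg=2048):
    n_ch = records.shape[0]
    # Cross-spectral density matrix G[f] between every pair of channels.
    freqs, _ = csd(records[0], records[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(records[i], records[j], fs=fs, nperseg=nperseg)
    # SVD at each frequency line: peaks of the first singular value
    # indicate natural frequencies; the corresponding singular vector
    # approximates the mode shape.
    s1 = np.empty(len(freqs))
    shapes = np.empty((len(freqs), n_ch), dtype=complex)
    for k in range(len(freqs)):
        U, S, _ = np.linalg.svd(G[k])
        s1[k], shapes[k] = S[0], U[:, 0]
    return freqs, s1, shapes
```

Picking the peaks of the first singular value spectrum then gives estimates of the first global modes that can be compared with the finite element results.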
Abstract:
This examination of U.S. economic policy toward Chile centered on the political and economic changes that occurred within Chile between 1960 and 1988. During this time, U.S. economic policy toward Chile was crafted by members of the American government preoccupied with Cold War concerns, the most important of which was the spread of Communism throughout the globe. By viewing U.S. policy toward Chile through this Cold War lens, this thesis explores the different ways in which economic policy was used to advance American political and economic goals not only within Chile, but within Latin America as a whole. The Cold Warriors who crafted and enacted these economic policies were motivated by a variety of factors and influenced by events outside their control. From John F. Kennedy to Ronald Reagan, American policymakers used economic policy as a means to achieve regional goals. This project sheds light on an understudied area of U.S. foreign policy history by exploring the way economic policy helped achieve Cold War objectives in the Southern Cone.
Abstract:
This paper evaluates the performance of the most popular power saving mechanisms defined in the IEEE 802.11 standard, namely the Power Save Mode (Legacy-PSM) and the Unscheduled Automatic Power Save Delivery (U-APSD). The assessment comprises a detailed study of energy efficiency and of the capability to guarantee the Quality of Service (QoS) required by a given application. The results, obtained with the OMNeT++ simulator, show that U-APSD is more energy efficient than Legacy-PSM without compromising the end-to-end delay. Both U-APSD and Legacy-PSM proved capable of guaranteeing the application QoS requirements in all the studied scenarios. However, unlike with U-APSD, when Legacy-PSM is used in the presence of QoS-demanding applications, all the stations connected to the network through the same access point consume noticeably more energy.
Abstract:
Growth codes are a subclass of rateless codes that have found interesting applications in data dissemination problems. Compared to other rateless and conventional channel codes, Growth codes show improved intermediate performance, which is particularly useful in applications where partial data has some utility. In this paper, we investigate the asymptotic performance of Growth codes using the Wormald method, which was originally proposed for studying the peeling decoder of LDPC and LDGM codes. In contrast to previous works, the Wormald differential equations are formulated from the nodes' perspective, which enables numerical computation of the expected asymptotic decoding performance of Growth codes. Our framework is appropriate for any class of rateless codes that does not include a precoding step. We further study the performance of Growth codes with moderate and large codeblocks through simulations, and we use the generalized logistic function to model the decoding probability. We then exploit this decoding probability model in an illustrative application of Growth codes to error-resilient video transmission. The video transmission problem is cast as a joint source and channel rate allocation problem that is shown to be convex with respect to the channel rate. This illustrative application highlights the main advantage of Growth codes, namely improved performance in the intermediate loss region.
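A minimal sketch of fitting a generalized logistic (Richards) curve to empirical decoding probabilities; the parameterization, names, and placeholder data below are ours, not the paper's:

```python
# Modelling the decoding probability of a rateless code as a generalized
# logistic curve in the normalized number of received symbols.
import numpy as np
from scipy.optimize import curve_fit

def gen_logistic(x, B, M, Q, nu):
    """Fraction of input symbols decoded after receiving x*k output symbols."""
    return 1.0 / (1.0 + Q * np.exp(-B * (x - M))) ** (1.0 / nu)

np.random.seed(0)
# x: received symbols / k (overhead); p_emp: decoding fraction measured in
# simulation. Placeholder data: a noisy logistic stands in for simulation output.
x = np.linspace(0.0, 2.0, 41)
p_emp = gen_logistic(x, B=8.0, M=1.0, Q=1.0, nu=1.0) + 0.01 * np.random.randn(x.size)
params, _ = curve_fit(gen_logistic, x, np.clip(p_emp, 0.0, 1.0),
                      p0=[5.0, 1.0, 1.0, 1.0], maxfev=10000)
```

A smooth fitted model of this kind is what makes the joint source/channel rate allocation tractable, since the resulting objective can be analyzed for convexity in the channel rate.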
Direct and Indirect Measures of Capacity Utilization: A Nonparametric Analysis of U.S. Manufacturing
Abstract:
We measure the capacity output of a firm as the maximum amount it can produce given a specific quantity of the quasi-fixed input and an overall expenditure constraint on its choice of variable inputs. We compute this indirect capacity utilization measure for the total manufacturing sector in the U.S. as well as for a number of disaggregated industries for the period 1970-2001. We find considerable variation in capacity utilization rates both across industries and over years within industries. Our results suggest that the expenditure constraint was binding, especially in periods of high interest rates.
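In symbols (our notation; the paper's nonparametric construction of the technology is not reproduced here), the indirect capacity output and the corresponding utilization rate are

```latex
Y^*(x_f, C) \;=\; \max_{y,\;x_v} \bigl\{\, y \;:\; (x_v, x_f)\ \text{can produce}\ y,\ \ w_v^\top x_v \le C \,\bigr\},
\qquad
CU \;=\; \frac{y^{\mathrm{obs}}}{Y^*(x_f, C)} \;\le\; 1,
```

where x_f is the quasi-fixed input, x_v the variable inputs with prices w_v, and C the expenditure limit. The expenditure constraint is binding precisely when relaxing C would raise Y*.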
Abstract:
This study of the wholesale electricity market compares the cost-minimizing performance of the auction mechanism currently in place in U.S. markets with the performance of a proposed replacement. The current mechanism chooses an allocation of contracts that minimizes a fictional cost calculated using pay-as-offer pricing. Then suppliers are paid the market clearing price. The proposed mechanism uses the market clearing price in the allocation phase as well as in the payment phase. In concentrated markets, the proposed mechanism outperforms the current mechanism even when strategic behavior by suppliers is taken into account. The advantage of the proposed mechanism increases with increased price competition.
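A toy numerical illustration of the difference between the two selection rules; the numbers and the all-or-nothing offers are ours, and the paper's market model is richer. With demand of 100 MW and indivisible offers A = 100 MW at $20, B = 90 MW at $10, C = 10 MW at $50, the pay-as-offer rule picks {B, C} (fictional cost $1,400) and then pays the $50 clearing price on all 100 MW, while allocating at the clearing price picks A and pays $2,000:

```python
# Toy comparison of the two selection rules with made-up all-or-nothing offers.
from itertools import combinations

offers = [("A", 100, 20.0), ("B", 90, 10.0), ("C", 10, 50.0)]  # (name, MW, $/MWh)
demand = 100

def feasible_sets():
    # Enumerate every combination of all-or-nothing offers meeting demand exactly.
    for r in range(1, len(offers) + 1):
        for combo in combinations(offers, r):
            if sum(q for _, q, _ in combo) == demand:
                yield combo

def pay_as_offer_cost(combo):
    # Fictional cost minimized by the current mechanism.
    return sum(q * p for _, q, p in combo)

def clearing_payment(combo):
    # Actual payout: every accepted MW is paid the highest accepted offer price.
    return demand * max(p for _, _, p in combo)

current = min(feasible_sets(), key=pay_as_offer_cost)   # picks {B, C}
proposed = min(feasible_sets(), key=clearing_payment)   # picks {A}
print([n for n, _, _ in current], clearing_payment(current))    # ['B', 'C'] 5000.0
print([n for n, _, _ in proposed], clearing_payment(proposed))  # ['A'] 2000.0
```

The gap between the $5,000 and $2,000 payouts is the kind of divergence the proposed mechanism is designed to remove by using the clearing price in the allocation phase as well.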
Abstract:
Background. Liver cancer mortality continues to be a significant cause of death worldwide and in the U.S., yet there remains a lack of studies on how the mortality burden varies across racial groups or with heavy alcohol use. This study evaluated the geographic distribution of liver cancer mortality across population groups in Texas and the U.S. over a 24-year period, and determined whether alcohol dependence or abuse correlates with mortality rates. Methods. The Spatial Scan Statistic was used to identify regions of excess liver cancer mortality in Texas counties and in the U.S. from 1980 to 2003. The scan was run with a maximum spatial cluster size of 50% of the population at risk, and all analyses used publicly available data. Alcohol abuse data by state and ethnicity were extracted from SAMHSA datasets for the period 2000-2004. Results. The geographic analysis identified four regions in Texas and seven in the U.S. with statistically significant excess liver cancer mortality, with relative risks ranging from 1.38 to 2.07 and from 1.05 to 1.623 (p = 0.001), respectively. Conclusion. This study revealed seven regions of excess liver cancer mortality across the U.S. and four in Texas between 1980 and 2003, and demonstrated a correlation between elevated liver cancer mortality rates and reported alcohol dependence among Hispanic and Other populations.
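For reference, the Poisson version of the spatial scan statistic evaluates, for each candidate circular window z, the likelihood ratio (standard form, not quoted from the study):

```latex
LR(z) \;=\; \left(\frac{c_z}{E_z}\right)^{c_z}
\left(\frac{C - c_z}{C - E_z}\right)^{C - c_z}
\mathbf{1}\{\,c_z > E_z\,\},
```

where c_z and E_z are the observed and expected deaths inside z and C is the total count. The window maximizing LR(z) is the most likely cluster, and its significance is assessed by Monte Carlo replication of the null distribution.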
Abstract:
Objectives. To investigate procedural gender equity by assessing predisposing, enabling and need predictors of gender differences in annual medical expenditures and utilization among hypertensive individuals in the U.S., and to estimate and compare lifetime medical expenditures among hypertensive men and women in the U.S. Data sources. The 2001-2004 Medical Expenditure Panel Survey (MEPS); the 1986-2000 National Health Interview Survey (NHIS); and the NHIS linked to mortality in the National Death Index through 2002 (2002 NHIS-NDI). Study design. We estimated total medical expenditures using a four-equation regression model, specific medical expenditures using a two-equation regression model, and utilization using a negative binomial regression model. Procedural equity was assessed by applying the Aday et al. theoretical framework. Expenditures were expressed in 2004 dollars. We estimated hypertension-attributable medical expenditures and utilization among men and women. To estimate lifetime expenditures from age 20 to 85+, we estimated medical expenditures with cross-sectional data and survival with prospective data. The four-equation regression model was used to estimate average annual medical expenditures, defined as the sum of inpatient stay, emergency room visit, outpatient visit, office-based visit, and prescription drug expenditures. Life tables were used to estimate the distribution of lifetime medical expenditures for hypertensive men and women at different ages; factors such as disease incidence, medical technology and health care costs were assumed to be fixed. Both total and hypertension-attributable expenditures among men and women were estimated. Data collection. We used the 2001-2004 MEPS household component and medical condition files; the NHIS person and condition files from 1986-1996 and the sample adult files from 1997-2000; and the 1986-2000 NHIS linked to mortality in the 2002 NHIS-NDI. Principal findings. After controlling for predisposing, enabling and need factors, hypertensive men had significantly lower utilization than hypertensive women for most measures. Similarly, hypertensive men had lower prescription drug (-9.3%), office-based (-7.2%) and total medical (-4.5%) expenditures than hypertensive women. However, men had higher hypertension-attributable medical expenditures and utilization than women. The expected total lifetime expenditure for the average life-table individual at age 20 was $188,300 for hypertensive men and $254,910 for hypertensive women, but the lifetime expenditure attributable to hypertension was $88,033 for men and $40,960 for women. Conclusion. Hypertensive women had higher utilization and expenditures for most measures than hypertensive men, possibly indicating procedural inequity. However, the relatively higher hypertension-attributable health care of men indicates greater use of resources to treat hypertension-related disease among men than among women. Similar results were found in the lifetime analyses. Key words: gender, medical expenditures, utilization, hypertension-attributable, lifetime expenditure.
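Two formulas underlying the design above, in generic notation (ours; the exact four-equation specification is in the dissertation and is not reproduced here). Expenditure models of this kind typically build on the two-part decomposition, and the life-table combination of survival with age-specific spending gives expected lifetime expenditure from age a:

```latex
E[Y \mid X] \;=\; \Pr(Y > 0 \mid X)\;\cdot\; E[Y \mid Y > 0,\, X],
\qquad
LE(a) \;=\; \sum_{t = a}^{85+} S(t \mid a)\, m(t),
```

where Y is annual medical expenditure, S(t | a) the life-table probability of surviving from age a to age t, and m(t) the estimated mean annual expenditure at age t.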
Abstract:
Common endpoints can be divided into two categories. One is dichotomous endpoints, which take only a fixed set of values (most often two). The other is continuous endpoints, which can take any real value between two specified bounds. The choice of primary endpoints is critical in clinical trials. If we use only dichotomous endpoints, the power may be underestimated. If only continuous endpoints are chosen, we may not obtain the expected sample size owing to the occurrence of significant clinical events. Combined endpoints are used in clinical trials to provide additional power. However, current combined or composite endpoints in cardiovascular disease trials, and in most clinical trials, combine either dichotomous endpoints (e.g., total mortality + total hospitalization) or continuous endpoints (e.g., a risk score). Our present work applies a U-statistic to combine one dichotomous endpoint and one continuous endpoint with three different assessments, to calculate the sample size, and to test the hypothesis of a treatment effect. The approach is especially useful when some patients cannot provide the most precise measurement because of a medical contraindication or personal reasons. Results show that this method has greater power than an analysis using continuous endpoints alone.
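A minimal sketch of one common pairwise "win" kernel for such a combined endpoint; the kernel and names are our illustration, not necessarily the dissertation's exact construction. Each treatment-control pair is compared first on the dichotomous endpoint, and ties are broken by the continuous endpoint, which may be missing for some patients:

```python
# Combined-endpoint U-statistic with a pairwise "win" kernel (illustrative).
from collections import namedtuple

Subject = namedtuple("Subject", ["event", "value"])  # event: 1 = clinical event occurred

def win_kernel(t, c):
    """+1 if the treatment subject 'wins' the pair, -1 if the control wins, 0 for a tie."""
    if t.event != c.event:                       # dichotomous endpoint decides first
        return 1 if t.event < c.event else -1    # fewer events is better
    if t.value is None or c.value is None:       # continuous value unavailable
        return 0                                 # (contraindication, refusal, ...)
    if t.value != c.value:                       # break the tie on the continuous endpoint
        return 1 if t.value > c.value else -1    # assume larger value is better
    return 0

def u_statistic(treat, ctrl):
    """Average of the kernel over all treatment-control pairs."""
    return sum(win_kernel(t, c) for t in treat for c in ctrl) / (len(treat) * len(ctrl))

# Example: two treatment and two control subjects; one continuous value is missing.
treat = [Subject(0, 5.2), Subject(0, None)]
ctrl = [Subject(1, 3.1), Subject(0, 4.0)]
print(u_statistic(treat, ctrl))  # 0.75
```

Because missing continuous values simply contribute ties rather than excluding the patient, this style of kernel accommodates patients who cannot provide the precise measurement.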
Perinatal mortality and quality of care at the National Institute of Perinatology: A 3-year analysis
Abstract:
Quality of medical care has been indirectly assessed through the collection of negative outcomes. A preventable death is one that could have been avoided if optimum care had been offered. The general objective of the present project was to analyze perinatal mortality at the National Institute of Perinatology (located in Mexico City) by social and biological factors and by the available components of quality of care, such as avoidability, provider responsibility, and structure and process deficiencies in the delivery of medical care. A Perinatal Mortality Committee database was used. The study population consisted of all singleton perinatal deaths occurring between January 1, 1988 and June 30, 1991 (n = 522). A proportionate study was designed. The study population mostly corresponded to married young adult mothers who were residents of urban areas, with an educational level of junior high school or higher, two to three pregnancies, and intermediate prenatal care. The mean gestational age at birth was 33.4 ± 3.9 completed weeks and the mean birthweight was 1,791.9 ± 853.1 grams. Thirty-five percent of perinatal deaths were categorized as avoidable. Postnatal infection and premature rupture of membranes were the most frequent primary causes of avoidable perinatal death. The avoidable perinatal mortality rate was 8.7 per 1,000 and declined significantly over the study period (p < .05). Aggregate preventable perinatal mortality data suggested that at least part of the mortality decline for amenable conditions was due to better medical care. Structure deficiencies were present in 35% of avoidable deaths and process deficiencies in 79%. Structure deficiencies remained constant over time. Process deficiencies consisted of diagnosis failures (45.8%) and treatment failures (87.3%); they also remained constant through the years. Responsibility was distributed as follows: obstetric (35.4%), pediatric (41.4%), institutional (26.5%), and patient (6.6%). Obstetric responsibility increased significantly during the study period (p < .05). Pediatric responsibility declined only for newborns under 1,500 g (p < .05). Institutional responsibility remained constant. Process deficiencies increased the risk of an avoidable death eightfold (confidence interval 1.7-41.4, p < .01) and provider responsibility ninety-fivefold (confidence interval 14.8-612.1, p < .001), after adjustment for several confounding variables. Perinatal mortality due to prematurity, barotrauma and nosocomial infection was highly preventable, but mortality due to transpartum asphyxia was not. Once specific deficiencies in the quality of care have been identified, quality assurance actions should begin.