Abstract:
This project entailed a detailed study and analysis of the literary and musical text of Rimsky-Korsakov's opera The Golden Cockerel, drawing on source study, philological analysis, and music-historical analysis. Goryachikh studied the process of the opera's creation, paying particular attention to its genre, that of a character fable, which was innovative for its time. He considered both the opera's folklore sources and the influence of the 'conditional theatre' aesthetics of the early 20th century. This culture-based approach made it possible to trace the numerous sources of the plot and of its literary and musical text back to the professional and folk cultures of Russia and other countries. A comparative study of the vocabulary, style and poetics of the libretto and of the poetic system of Pushkin's Tale of the Golden Cockerel revealed much in common between the two. Goryachikh concluded that The Golden Cockerel was intended as a specific form of 'dialogue' between the author, the preceding cultural tradition, and the tradition of the time when the opera was written. He proposed a new definition of The Golden Cockerel as an 'inverted opera' and studied its structure and essence, its roots in the 'culture of laughter', and the refraction of its forms and composition in a cultural language. He identified the constructive technique of Rimsky-Korsakov's writing at each level of musical unity and noted its influence on Stravinsky and Prokofiev, also finding anticipations of musical phenomena of the 20th century. He concluded by formulating a research model of Russian classical opera as a cultural text and suggested further uses for it in musicology.
Abstract:
OBJECTIVES: Over the past few years, a considerable increase in complementary and alternative medicine (CAM) has been observed, particularly in primary care. In contrast, little is known about the supply of CAM in Swiss hospitals. This study aims to investigate the amount and structure of CAM activities in Swiss hospitals. MATERIALS AND METHODS: We designed a cross-sectional survey using a 2-step, questionnaire-based approach: a first questionnaire acquired overview information from hospital managers and led to a second questionnaire collecting detailed information on CAM usage at the medical-department level (heads of department). This second questionnaire provided data on physician-based and non-physician-based CAM supply. RESULTS: The size of hospitals was significantly associated with the provision of CAM. 33% of hospital managers indicated 1 or more medical doctors (MDs) using CAM in their hospital, compared with 37% confirmation at the department level (kappa value 0.5). A variety of CAM methods was applied, with acupuncture used most frequently. However, only 13 hospitals (11%) employed more than 3 CAM MDs, and only 5 hospitals had more than 2 full-time equivalents for MDs. Furthermore, 74.7% of these personnel resources were dedicated to outpatient care. Among CAM methods, anthroposophic medicine accounted for more than half of the total personnel costs. On the other hand, usage of non-physician-based CAM was reported by 41% of hospital managers, compared with 64% of medical departments (kappa value 0.31). Foot reflexology was used most frequently. CONCLUSION: The total supply of CAM in Swiss hospitals is low and is concentrated in a few hospitals. Acupuncture is the most widespread discipline, but anthroposophic medicine consumes the most resources. The study shows that high patient demand for CAM is met by low supply in hospitals.
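A minimal sketch of the agreement statistic reported here: Cohen's kappa for a 2x2 cross-tabulation of manager versus department responses. The counts are hypothetical, not the study's data.

```python
def cohens_kappa(table):
    """table[i][j]: count of (rater1 = i, rater2 = j), with i, j in {0, 1}."""
    n = sum(sum(row) for row in table)
    p_observed = (table[0][0] + table[1][1]) / n
    # Marginal "yes" proportions give the chance-agreement probability
    p1_yes = (table[1][0] + table[1][1]) / n   # rater 1 says "yes"
    p2_yes = (table[0][1] + table[1][1]) / n   # rater 2 says "yes"
    p_chance = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical counts: rows = manager (no/yes), columns = department (no/yes)
table = [[60, 20],
         [10, 30]]
print(f"kappa = {cohens_kappa(table):.2f}")   # ~0.47 with these counts
```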
Abstract:
BACKGROUND: Opportunistic screening for genital chlamydia infection is being introduced in England, but evidence for the effectiveness of this approach is lacking. There are insufficient data about young people's use of primary care services to determine the potential coverage of opportunistic screening in comparison with a systematic population-based approach. AIM: To estimate the use of primary care services by young men and women, and to compare the potential coverage of opportunistic chlamydia screening with that of a systematic postal approach. DESIGN OF STUDY: Population-based cross-sectional study. SETTING: Twenty-seven general practices around Bristol and Birmingham. METHOD: A random sample of patients aged 16-24 years was posted a chlamydia screening pack. We collected details of face-to-face consultations from general practice records. Survival and person-time methods were used to estimate the cumulative probability of attending general practice within 1 year and the coverage achieved by opportunistic and systematic postal chlamydia screening. RESULTS: Of 12 973 eligible patients, an estimated 60.4% (95% confidence interval [CI] = 58.3 to 62.5%) of men and 75.3% (73.7 to 76.9%) of women aged 16-24 years attended their practice at least once in a 1-year period. During this period, an estimated 21.3% of patients would not attend their general practice but would be reached by postal screening, 9.2% would not receive a postal invitation but would attend their practice, and 11.8% would be missed by both methods. CONCLUSIONS: If used alone, both opportunistic and population-based approaches to chlamydia screening would fail to contact a substantial minority of the target group. A pragmatic approach combining both strategies might achieve higher coverage.
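The survival-analysis step mentioned here can be illustrated with a hand-rolled Kaplan-Meier estimator; the attendance and censoring times below are synthetic, not the practice records.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
attend = rng.exponential(scale=300.0, size=n)   # days to first attendance (synthetic)
censor = rng.uniform(180.0, 365.0, size=n)      # days until leaving the list (synthetic)
time = np.minimum(attend, censor)
event = attend <= censor                        # True = attendance observed

# Kaplan-Meier: S(t) is the product over event times of (1 - d_i / n_i)
order = np.argsort(time)
surv, at_risk = 1.0, n
for ev in event[order]:
    if ev:
        surv *= 1.0 - 1.0 / at_risk
    at_risk -= 1                                # both events and censorings leave the risk set

print(f"estimated P(attend within follow-up) = {1 - surv:.3f}")
```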
Abstract:
Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). To this end, we describe an ACE model for binary family data and then introduce an approach to fitting the model to case-control family data. The structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. Our likelihood-based approach to fitting involves conditioning on the proband's disease status, as well as setting the prevalence equal to a pre-specified value that can be estimated from the data themselves if necessary. Simulation experiments suggest that our approach yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly made assumptions hold. These assumptions include the usual assumptions for the classic ACE and liability-threshold models, assumptions about shared family environment for relative pairs, and assumptions about the case-control family sampling, including single ascertainment. When our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to estimates from previous analyses of twin data.
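The liability-threshold machinery can be sketched compactly: under assumed variance components (not the paper's estimates), a relative pair's liability correlation is r_A * a2 + c2, and recurrence risks follow from a bivariate normal integral.

```python
from scipy.stats import norm, multivariate_normal

a2, c2 = 0.4, 0.1            # assumed additive-genetic and shared-environment variances
prevalence = 0.10            # assumed population prevalence
threshold = norm.ppf(1 - prevalence)   # liability threshold implied by prevalence

def recurrence_risk(r_A):
    """P(relative affected | proband affected) for relatives sharing a
    fraction r_A of additive genetic effects (1.0 for MZ twins, 0.5 for sibs)."""
    r_liab = r_A * a2 + c2
    cov = [[1.0, r_liab], [r_liab, 1.0]]
    # P(L1 > t, L2 > t) equals the bivariate normal CDF at (-t, -t) by symmetry
    both = multivariate_normal(mean=[0, 0], cov=cov).cdf([-threshold, -threshold])
    return both / prevalence

print(f"MZ recurrence risk:  {recurrence_risk(1.0):.3f}")
print(f"Sib recurrence risk: {recurrence_risk(0.5):.3f}")
```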
Abstract:
Ligament balancing in total knee arthroplasty may have an important influence on joint stability and prosthesis lifetime. In order to provide quantitative information and assistance during ligament balancing, a device that intraoperatively measures knee joint forces and moments was developed. Its performance and surgical advantages were evaluated on six cadaver specimens mounted on a knee joint loading apparatus allowing unconstrained knee motion as well as compression and varus-valgus loading. Four different experiments were performed on each specimen. (1) Knee joints were axially loaded. Comparison between applied and measured compressive forces demonstrated the accuracy and reliability of in situ measurements (1.8 N). (2) Assessment of knee stability based on condyle contact forces or varus-valgus moments was compared with the current surgical method (difference of the varus-valgus loads causing condyle lift-off). The force-based approach was equivalent to the surgical method, while the moment-based approach, which is considered optimal, showed a tendency toward lateral imbalance. (3) To estimate the importance of keeping the patella in its anatomical position during imbalance assessment, the effect of patellar eversion on the mediolateral distribution of tibiofemoral contact forces was measured. One fourth of the contact force induced by the patellar load was shifted to the lateral compartment. (4) The effect of minor and major medial collateral ligament releases was biomechanically quantified. On average, the medial contact force was reduced by 20% and 46%, respectively. Large variation among specimens reflected the difficulty of ligament release and the need for intraoperative force monitoring. This series of experiments thus demonstrated the device's potential to improve ligament balancing and the survivorship of total knee arthroplasty.
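The two imbalance metrics compared in experiment (2) can be contrasted in a few lines; the contact forces and the lever arm below are hypothetical, not the cadaver data.

```python
INTERCONDYLAR_HALF_WIDTH = 0.023  # m, assumed lever arm of each condyle

def force_imbalance(f_medial, f_lateral):
    """Normalized mediolateral force difference (positive = medial side tighter)."""
    return (f_medial - f_lateral) / (f_medial + f_lateral)

def varus_valgus_moment(f_medial, f_lateral, d=INTERCONDYLAR_HALF_WIDTH):
    """Net varus-valgus moment (N*m) about the knee center (positive = varus)."""
    return (f_medial - f_lateral) * d

f_med, f_lat = 120.0, 80.0   # N, hypothetical condyle contact forces
print(f"force-based imbalance:  {force_imbalance(f_med, f_lat):+.2f}")
print(f"varus-valgus moment:    {varus_valgus_moment(f_med, f_lat):+.3f} N*m")
```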
Abstract:
The purpose of this study is to develop statistical methodology to facilitate indirect estimation of the concentrations of antiretroviral drugs and viral loads in the prostate gland and the seminal vesicle. Differences in antiretroviral drug concentrations between these organs may lead to suboptimal concentrations in one gland compared with the other. Suboptimal levels of antiretroviral drugs will fail to fully suppress the virus in that gland, provide a source of sexually transmissible virus, and increase the chance of selecting for drug-resistant virus. This information may be useful in selecting an antiretroviral drug regimen that achieves optimal concentrations in most of the male genital tract glands. Using fractionally collected semen ejaculates, Lundquist (1949) measured, in each fraction, levels of surrogate markers that are uniquely produced by specific male accessory glands. To determine the original glandular concentrations of the surrogate markers, Lundquist solved a simultaneous series of linear equations. This method has several limitations. In particular, it does not yield a unique solution, it does not address measurement error, and it disregards inter-subject variability in the parameters. To cope with these limitations, we developed a mechanistic latent variable model based on the physiology of the male genital tract and the surrogate markers. We employ a Bayesian approach and perform a sensitivity analysis with regard to the distributional assumptions on the random effects and priors. The model and Bayesian approach are validated on experimental data in which the concentration of a drug should be (biologically) differentially distributed between the two glands. In this example, the Bayesian model-based conclusions are found to be robust to model specification, and this hierarchical approach leads to more scientifically valid conclusions than the original methodology. In particular, unlike existing methods, the proposed model-based approach was not affected by a common form of outliers.
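Lundquist's linear-equation step can be written down directly. The sketch below recovers gland-level drug concentrations from fraction measurements by least squares, with an assumed mixing matrix and synthetic noise; in practice the mixing proportions come from the gland-specific surrogate markers, and the Bayesian hierarchical model adds measurement error and between-subject variability on top of this.

```python
import numpy as np

PHI = np.array([            # columns: [prostate share, seminal-vesicle share], assumed
    [0.8, 0.2],
    [0.5, 0.5],
    [0.2, 0.8],
    [0.1, 0.9],
])
true_conc = np.array([4.0, 0.5])   # ng/mL in prostate and seminal vesicle (synthetic)
rng = np.random.default_rng(1)
measured = PHI @ true_conc + rng.normal(0, 0.1, size=4)   # drug level per fraction

# Least-squares solution of the overdetermined linear system PHI @ c = measured
estimate, *_ = np.linalg.lstsq(PHI, measured, rcond=None)
print(f"estimated prostate conc:        {estimate[0]:.2f} ng/mL")
print(f"estimated seminal-vesicle conc: {estimate[1]:.2f} ng/mL")
```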
Abstract:
The goals of the present study were to model the population kinetics of in vivo influx and efflux processes of grepafloxacin at the serum-cerebrospinal fluid (CSF) barrier and to propose a simulation-based approach to optimizing the design of dose-finding trials in the rabbit meningitis model. Twenty-nine rabbits with pneumococcal meningitis receiving grepafloxacin at 15 mg/kg of body weight (intravenous administration at 0 h), 30 mg/kg (at 0 h), or 50 mg/kg twice (at 0 and 4 h) were studied. A three-compartment population pharmacokinetic model was fit to the data with the program NONMEM (Nonlinear Mixed Effects Modeling). Passive diffusion clearance (CL(diff)) and active efflux clearance (CL(active)) are transfer kinetic modeling parameters. Influx clearance is assumed to be equal to CL(diff), and efflux clearance is the sum of CL(diff), CL(active), and bulk flow clearance (CL(bulk)). The average influx clearance for the population was 0.0055 ml/min (interindividual variability, 17%). Passive diffusion clearance was greater in rabbits receiving grepafloxacin at 15 mg/kg than in those treated with higher doses (0.0088 versus 0.0034 ml/min). Assuming a CL(bulk) of 0.01 ml/min, CL(active) was estimated to be 0.017 ml/min (11%), and total efflux clearance was estimated to be 0.032 ml/min. The population kinetic model makes it possible not only to quantify in vivo efflux and influx mechanisms at the serum-CSF barrier but also to analyze the effects of different dose regimens on transfer kinetic parameters in the rabbit meningitis model. The modeling-based approach also provides a tool for simulating and predicting various outcomes of interest to researchers, which has great potential in designing dose-finding trials.
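For the CSF compartment, the transfer-kinetic structure reduces to a simple mass balance. The sketch below integrates it using the clearances reported in the abstract, with an assumed serum profile and CSF volume, so it is illustrative rather than the fitted NONMEM model.

```python
import numpy as np
from scipy.integrate import solve_ivp

CL_diff, CL_active, CL_bulk = 0.0055, 0.017, 0.01   # ml/min, from the abstract
V_csf = 2.0                                          # ml, assumed rabbit CSF volume

def serum_conc(t):
    """Assumed mono-exponential serum profile (ug/ml) after an IV dose."""
    return 20.0 * np.exp(-0.005 * t)

def csf_ode(t, y):
    c_csf = y[0]
    influx = CL_diff * serum_conc(t)                  # passive diffusion into CSF
    efflux = (CL_diff + CL_active + CL_bulk) * c_csf  # diffusion + active efflux + bulk flow
    return [(influx - efflux) / V_csf]

sol = solve_ivp(csf_ode, (0, 480), [0.0], t_eval=[60, 240, 480])
for t, c in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.0f} min   CSF conc = {c:.3f} ug/ml")
```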
Abstract:
For the past sixty years, waveguide slot radiator arrays have played a critical role in microwave radar and communication systems. They feature a well-characterized antenna element capable of direct integration into a low-loss feed structure with highly developed and inexpensive manufacturing processes. Waveguide slot radiators comprise some of the highest-performance antenna arrays ever constructed in terms of side-lobe level, efficiency, and related metrics. A wealth of information is available in the open literature regarding design procedures for linearly polarized waveguide slots. By contrast, despite their presence in some of the earliest published reports, little has been presented to date on array designs for circularly polarized (CP) waveguide slots. Moreover, the designs that have been presented feature a classic traveling-wave, efficiency-reducing beam tilt. This work proposes a unique CP waveguide slot architecture that mitigates these problems, together with a thorough design procedure employing widely available, modern computational tools. The proposed array topology features simultaneous dual-CP operation with grating-lobe-free broadside radiation, high aperture efficiency, and good return loss. A traditional X-Slot CP element is employed with the inclusion of a slow-wave-structure passive phase shifter to ensure broadside radiation without the need for performance-limiting dielectric loading. It is anticipated that this technology will be advantageous for upcoming polarimetric radar and Ka-band SatCom systems. The presented design methodology represents a philosophical shift away from traditional waveguide slot radiator design practices. Rather than providing design curves and/or analytical expressions for equivalent circuit models, simple first-order design rules, generated via parametric studies, are presented with the understanding that device optimization and design will be carried out computationally. A unit-cell, S-parameter-based approach provides a sufficient reduction of complexity to permit efficient, accurate device design with attention to realistic, application-specific mechanical tolerances. A transparent, start-to-finish example of the design procedure for a linear sub-array at X-band is presented. Both unit-cell and array performance are calculated via finite element method simulations. Results are confirmed by good agreement with finite-difference time-domain calculations. Array performance exhibiting grating-lobe-free, broadside-scanned, dual-CP radiation with better than 20 dB return loss and over 75% aperture efficiency is presented.
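Two of the first-order checks implied here are easy to make concrete: the grating-lobe condition for a broadside array and return loss from a unit-cell |S11|. The frequency, element spacing, and S11 value below are assumed for illustration, not the dissertation's design values.

```python
import numpy as np

c = 3e8
f = 10e9                      # X-band design frequency, assumed
lam = c / f
d = 0.7 * lam                 # assumed element spacing

# Grating lobes of a broadside (theta0 = 0) array sit at sin(theta) = m * lam / d;
# they stay out of visible space as long as d < lam.
print("grating-lobe free at broadside:", d < lam)

# Return loss from a unit-cell S-parameter: RL = -20 * log10(|S11|)
s11 = 0.08                    # assumed |S11| from a unit-cell simulation
print(f"return loss = {-20 * np.log10(s11):.1f} dB")   # ~21.9 dB, i.e. > 20 dB
```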
Abstract:
Information on phosphorus bioavailability can provide water quality managers with the support required to target the point source and watershed loads contributing most significantly to water quality conditions. This study presents results from a limited sampling program focusing on the five largest sources of total phosphorus to the U.S. waters of the Great Lakes. The work validates the utility of a bioavailability-based approach, confirming that the method is robust and repeatable. Chemical surrogates for bioavailability were shown to hold promise; however, further research is needed to address site-to-site and seasonal variability before a universal relationship can be accepted. Recent changes in the relative contributions of P constituents to the total phosphorus analyte, and differences in their bioavailability, suggest that loading estimates of bioavailable P will need to address all three components: soluble reactive phosphorus (SRP), dissolved organic phosphorus (DOP), and particulate phosphorus (PP). A bioavailability approach that takes advantage of chemical surrogate methodologies is recommended as a means of guiding P management in the Great Lakes.
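A bioavailability-weighted load estimate covering the three constituents named above might look like the sketch below. The loads and bioavailable fractions are placeholders, since the study's point is precisely that such fractions must be established per site and season.

```python
loads = {"SRP": 120.0, "DOP": 45.0, "PP": 300.0}   # metric tons P / year, assumed
bioavailable_fraction = {
    "SRP": 1.00,   # soluble reactive P treated as fully available
    "DOP": 0.30,   # assumed placeholder
    "PP": 0.25,    # assumed placeholder
}

bap = sum(loads[k] * bioavailable_fraction[k] for k in loads)
total = sum(loads.values())
print(f"total P load:        {total:.0f} t/yr")
print(f"bioavailable P load: {bap:.0f} t/yr ({100 * bap / total:.0f}% of total)")
```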
Abstract:
To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as "biomass"). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization, and to tap unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass, based on a set of evaluation criteria such as accessibility to biomass, the railway/road transportation network, water bodies, and workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions simultaneously (a toy sketch of this weighted-sum siting objective follows this abstract). Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price.

Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors that have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level.

Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass, because the biomass feedstock supply chain is similar, if not identical, for biorefineries, biomass-fired or co-fired power plants, and torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals such as Biomass and Bioenergy and Renewable Energy, and through presentations at conferences such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A). There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
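The weighted-sum siting objective referenced above can be illustrated with a toy exhaustive search over GIS-screened candidate sites. All tonnages, per-ton metrics, weights, and facility costs below are hypothetical; a realistic model would be a mixed-integer program over a much larger candidate set.

```python
from itertools import combinations

harvest_tons = {"area1": 50_000, "area2": 80_000, "area3": 30_000}
# per-ton (cost $, energy MJ, GHG kg CO2e) from each harvest area to each site
metrics = {
    ("area1", "siteA"): (18, 90, 12), ("area1", "siteB"): (25, 120, 17),
    ("area2", "siteA"): (30, 150, 21), ("area2", "siteB"): (16, 80, 11),
    ("area3", "siteA"): (22, 110, 15), ("area3", "siteB"): (21, 105, 14),
}
weights = (1.0, 0.02, 0.10)          # assumed trade-off weights for ($, MJ, kg)

def per_ton_score(area, site):
    """Weighted sum of cost, energy, and GHG for shipping one ton."""
    return sum(w * m for w, m in zip(weights, metrics[(area, site)]))

best = None
for k in (1, 2):                     # open one or two facilities
    for sites in combinations(["siteA", "siteB"], k):
        fixed = 5_000_000 * k        # assumed annualized facility cost per site
        total = fixed + sum(
            harvest_tons[a] * min(per_ton_score(a, s) for s in sites)
            for a in harvest_tons)   # each area ships to its cheapest open site
        if best is None or total < best[0]:
            best = (total, sites)

print(f"best configuration: {best[1]}, weighted objective = {best[0]:,.0f}")
```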
Abstract:
Highway infrastructure plays a significant role in society. The building and upkeep of America's highways provide society with the means of transporting the goods and services needed to develop as a nation. However, as a result of economic and social development, vast amounts of greenhouse gases (GHGs) are emitted into the atmosphere, contributing to global climate change. Recognizing this, future policies may mandate the monitoring of GHG emissions from public agencies and private industries in order to reduce the effects of global climate change. To effectively reduce these emissions, there must be methods that agencies can use to quantify the GHG emissions associated with constructing and maintaining the nation's highway infrastructure. Current methods for assessing the impacts of highway infrastructure include methodologies that examine the economic impacts (costs) of constructing and maintaining highway infrastructure over its life cycle, known as Life Cycle Cost Analysis (LCCA). With the recognition of global climate change, transportation agencies and contractors are also investigating the environmental impacts associated with highway infrastructure construction and rehabilitation. A common tool for doing so is Life Cycle Assessment (LCA). Traditionally, LCA is used to assess the environmental impacts of products or processes; it is an emerging concept in highway infrastructure assessment and is now being implemented and applied to transportation systems. This research focuses on life cycle GHG emissions associated with the construction and rehabilitation of highway infrastructure using an LCA approach. The life cycle phases of the highway section include the material acquisition and extraction, construction and rehabilitation, and service phases. Departing from traditional approaches that tend to use LCA to compare alternative pavement materials or designs based on estimated inventories, this research proposes a shift to a context-sensitive, process-based approach that uses actual observed construction and performance data to calculate the greenhouse gas emissions associated with highway construction and rehabilitation. The goal is to support strategies that reduce long-term environmental impacts. Ultimately, this thesis outlines techniques that can be used to assess the GHG emissions associated with construction and rehabilitation operations to support the overall pavement LCA.
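At its core, the process-based calculation advocated here multiplies observed construction quantities by emission factors. The quantities and factors in this sketch are placeholders, not the thesis data.

```python
activities = {                        # observed quantity, unit (hypothetical)
    "hot-mix asphalt placed":   (12_000, "t"),
    "aggregate hauled":         (18_000, "t-km"),
    "diesel in paving equip.":  (9_500,  "L"),
}
emission_factor = {                   # kg CO2e per unit, assumed values
    "hot-mix asphalt placed":   56.0,
    "aggregate hauled":         0.12,
    "diesel in paving equip.":  2.68,
}

# Process-based inventory: sum of quantity x emission factor over activities
total = sum(q * emission_factor[k] for k, (q, _unit) in activities.items())
print(f"construction-phase GHG: {total / 1000:.1f} t CO2e")
```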
Abstract:
This dissertation has three separate parts: the first part deals with general pedigree association testing incorporating continuous covariates; the second part deals with association tests under population stratification using conditional likelihood tests; the third part deals with genome-wide association studies based on the real rheumatoid arthritis (RA) disease data sets from Genetic Analysis Workshop 16 (GAW16) Problem 1. Many statistical tests have been developed to test linkage and association using either case-control status or phenotype covariates for family data structures, separately. Such univariate analyses may not use all the information coming from the family members in practical studies. Moreover, complex human diseases do not have a clear inheritance pattern; genes may interact or may act independently. In part I, the newly proposed approach, MPDT, focuses on using both the case-control information and the phenotype covariates, and it can be applied to detect multiple marker effects. Built on two existing popular statistics in family studies, for case-control and quantitative traits respectively, the new approach can be used on simple family structures as well as on general pedigrees. The combined statistic is calculated from the two component statistics, and a permutation procedure is applied to assess the p-value, with Bonferroni adjustment for multiple markers. We use simulation studies to evaluate the type I error rates and power of the proposed approach. Our results show that the combined test using both case-control information and phenotype covariates not only has correct type I error rates but is also more powerful than the other existing methods. For multiple marker interactions, our proposed method is also very powerful. Selective genotyping is an economical strategy for detecting and mapping quantitative trait loci in the genetic dissection of complex disease. When the samples arise from different ethnic groups or an admixed population, all the existing selective genotyping methods may result in spurious association due to different ancestry distributions. The problem can be more serious when the sample size is large, a general requirement for obtaining sufficient power to detect modest genetic effects for most complex traits. In part II, I describe a useful strategy for selective genotyping when population stratification is present. Our procedure uses a principal-component-based approach to eliminate any effect of population stratification (a minimal sketch of this correction follows this abstract). Its performance is evaluated using both simulated data from an earlier study and HapMap data sets under a variety of population admixture models generated from empirical data. There are one binary trait and two continuous traits in the rheumatoid arthritis data set of Problem 1 of GAW16: RA status, anti-CCP, and IgM. To allow multiple traits, we propose a set of SNP-level F statistics, based on the concept of multiple correlation, to measure the genetic association between multiple trait values and SNP-specific genotypic scores, and we obtain their null distributions. We then perform 6 genome-wide association analyses using novel one- and two-stage approaches based on single, double, and triple traits. Incorporating all 6 analyses, we successfully validate the SNPs already identified in the literature as responsible for rheumatoid arthritis and detect additional disease-susceptibility SNPs for future follow-up studies. Except for chromosomes 13 and 18, every chromosome is found to harbour susceptibility regions for rheumatoid arthritis or related diseases, e.g., lupus erythematosus. This topic is discussed in part III.
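The principal-component correction used in part II can be sketched generically: compute the top PCs of the standardized genotype matrix and residualize the trait on them before testing. The genotypes and trait below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 500
G = rng.integers(0, 3, size=(n, m)).astype(float)   # genotypes coded 0/1/2 (synthetic)
y = rng.normal(size=n)                              # trait values (synthetic)

# Standardize each SNP, then take leading left singular vectors as ancestry PCs
Gs = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-12)
U, S, Vt = np.linalg.svd(Gs, full_matrices=False)
pcs = U[:, :10]                                     # top 10 principal components

# Residualize the trait on an intercept plus the PCs before association testing
X = np.column_stack([np.ones(n), pcs])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_adj = y - X @ beta
print(f"trait variance before/after PC adjustment: {y.var():.3f} / {y_adj.var():.3f}")
```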
Abstract:
The aging population has recently become a pressing issue for modern societies around the world, and two important problems remain to be solved. One is how to continuously monitor the movements of people who have suffered a stroke in natural living environments, in order to provide more valuable feedback to guide clinical interventions. The other is how to guide elderly people effectively when they are at home or inside other buildings, to make their lives easier and more convenient. Human motion tracking and navigation have therefore become active research fields as the number of elderly people grows. However, it has been extremely challenging for motion capture to go beyond laboratory environments and obtain accurate measurements of human physical activity, especially in free-living environments, and navigation in free-living environments also poses problems such as GPS signal denial and the moving objects commonly present in such environments. This thesis seeks to develop new technologies to enable accurate motion tracking and positioning in free-living environments. It pursues three specific goals using our developed IMU board and a camera from The Imaging Source: (1) to develop a robust, real-time orientation algorithm using only the measurements from the IMU; (2) to develop robust distance estimation in static free-living environments in order to estimate people's position and navigate them, while the scale ambiguity problem that usually appears in monocular camera tracking is solved by integrating the data from the visual and inertial sensors; and (3) when moving objects viewed by the camera are present in free-living environments, to first design a robust scene segmentation algorithm and then separately estimate the motion of the vIMU system and of the moving objects. To achieve real-time orientation tracking, an Adaptive-Gain Orientation Filter (AGOF) is proposed in this thesis, based on the theory of deterministic and frequency-based approaches, using only measurements from the newly developed MARG (Magnetic, Angular Rate, and Gravity) sensors. To further obtain robust positioning, an adaptive frame-rate vision-aided IMU system is proposed to develop and implement fast vIMU ego-motion estimation algorithms, in which the orientation is first estimated in real time from the MARG sensors and then used to estimate position based on the data from the visual and inertial sensors. When moving objects viewed by the camera are present in free-living environments, a robust scene segmentation algorithm is first applied to obtain the position estimate and, simultaneously, the 3D motion of the moving objects. Finally, the corresponding simulations and experiments have been carried out.
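The adaptive-gain idea behind a filter like the AGOF can be conveyed with a generic one-axis complementary filter (this is not the thesis algorithm): integrate the gyro for short-term accuracy, correct toward the accelerometer tilt estimate, and shrink the correction gain whenever the accelerometer norm departs from 1 g, i.e., during dynamic motion.

```python
import numpy as np

G = 9.81

def update(pitch, gyro_rate, accel, dt, base_gain=0.05):
    """One filter step for pitch only; accel = (ax, az) in the sagittal plane."""
    pitch_gyro = pitch + gyro_rate * dt          # short term: integrate the gyro
    pitch_acc = np.arctan2(accel[0], accel[1])   # long term: gravity direction
    # Adaptive gain: trust the accelerometer only when its norm is close to 1 g
    norm_err = abs(np.hypot(*accel) - G) / G
    gain = base_gain * max(0.0, 1.0 - 5.0 * norm_err)
    return (1 - gain) * pitch_gyro + gain * pitch_acc

pitch = 0.0
for _ in range(100):                             # static, level sensor with gyro bias
    pitch = update(pitch, gyro_rate=0.01, accel=(0.0, G), dt=0.01)
print(f"estimated pitch after 1 s: {np.degrees(pitch):.2f} deg")  # drift held near zero
```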
Abstract:
BACKGROUND: Chronic meningococcemia (CM) is a diagnostic challenge. Skin lesions are frequent but in most cases nonspecific. Polymerase chain reaction (PCR)-based diagnosis has been validated in blood and cerebrospinal fluid for acute Neisseria meningitidis infection in patients in whom routine microbiologic tests have failed to isolate the bacteria. In 2 patients with CM, we established the diagnosis by a newly developed PCR-based approach performed on skin biopsy specimens. OBSERVATIONS: Two patients presented with fever together with systemic and cutaneous manifestations suggestive of CM. Although findings from blood cultures remained negative, we were able to identify N meningitidis in the skin lesions by a newly developed PCR assay. In 1 patient, an N meningitidis strain of the same serogroup was also isolated from a throat swab specimen. Both patients improved rapidly after appropriate antibiotic therapy. CONCLUSIONS: To our knowledge, we report the first cases of CM diagnosed by PCR testing of skin biopsy specimens. It is noteworthy that, although N meningitidis-specific PCR is highly sensitive in blood and cerebrospinal fluid in acute infections, our observations underscore the usefulness of PCR performed on skin lesions for the diagnosis of chronic N meningitidis infections. Whenever possible, this approach should be systematically employed in patients for whom N meningitidis infection cannot be confirmed by routine microbiologic investigations.
Abstract:
Ethyl glucuronide (EtG) is a marker of recent alcohol consumption. To optimize the analysis of EtG by CZE with indirect absorbance detection, the use of capillaries with permanent and dynamic wall coatings, the composition of the BGE, and various sample preparation procedures, including dilution with water, ultrafiltration, protein precipitation, and SPE, were investigated. Two validated screening assays for the determination of EtG in human serum, a CZE-based approach and an enzyme immunoassay (EIA), are described. The CZE assay uses a coated capillary, 2,4-dimethylglutaric acid as an internal standard, and a pH 4.65 BGE comprising 9 mM nicotinic acid, epsilon-aminocaproic acid, and 10% v/v ACN. Proteins are removed via precipitation with ACN prior to analysis, and the LOQ is 0.50 mg/L. The EIA is based on commercial reagents that are marketed for the determination of urinary EtG. Krebs-Ringer solution containing 5% BSA is used as the calibration matrix. All samples are ultrafiltered prior to analysis of the ultrafiltrate on a Mira Plus analyzer. The assay calibration ranged between 0 and 2 mg/L, and the upper reference limit was determined to be 0.05 mg/L. Both assays proved suitable for the analysis of samples from different individuals. For EtG levels above 0.50 mg/L, good agreement was observed between the results of the two methods.
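The calibration step shared by both assays can be illustrated with a linear fit over the 0-2 mg/L range given above; the responses and the unknown sample below are synthetic.

```python
import numpy as np

conc = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])    # mg/L calibrators
rng = np.random.default_rng(3)
response = 0.40 * conc + 0.02 + rng.normal(0, 0.005, conc.size)  # assumed signal

# Fit the calibration line, then back-calculate an unknown's concentration
slope, intercept = np.polyfit(conc, response, 1)
unknown_response = 0.26
etg = (unknown_response - intercept) / slope
print(f"back-calculated EtG: {etg:.2f} mg/L")
print("above LOQ (0.50 mg/L):", etg >= 0.50)
```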