872 results for Self-sustainable Successful Models


Relevance: 30.00%

Publisher:

Abstract:

Human experimental pain models require standardized stimulation and quantitative assessment of the evoked responses. This approach can be applied to healthy volunteers and pain patients before and after pharmacological interventions. Standardized stimuli of different modalities (i.e., mechanical, chemical, thermal or electrical) can be applied to the skin, muscles and viscera for a differentiated and comprehensive assessment of various pain pathways and mechanisms. Using a multi-modal, multi-tissue approach, new and existing analgesic drugs can be profiled by their modulation of specific biomarkers. It has been shown that biomarkers, for example those related to the central integration of repetitive nociceptive stimuli, can predict the efficacy of a given drug in neuropathic pain conditions. Human experimental pain models can bridge animal and clinical pain research, acting as translational research that provides new possibilities for designing successful clinical trials. Proof-of-concept studies provide cheap, fast and reliable information on dose-efficacy relationships and on how pain sensed in the skin, muscles and viscera is inhibited.

Relevance: 30.00%

Publisher:

Abstract:

PURPOSE: Antiretroviral therapy (ART) may induce metabolic changes and increase the risk of coronary heart disease (CHD). Based on a health care system approach, we investigated predictors of normalization of dyslipidemia in HIV-infected individuals receiving ART. METHOD: Individuals included in the study were registered in the Swiss HIV Cohort Study (SHCS), had dyslipidemia but were not on lipid-lowering medication, were on potent ART for ≥3 months, and had ≥2 follow-up visits. Dyslipidemia was defined as two consecutive total cholesterol (TC) values above recommended levels. Predictors of achieving treatment goals for TC were assessed using Cox models. RESULTS: The analysis included 958 individuals with a median follow-up of 2.3 years (IQR 1.2-4.0); 454 patients (47.4%) achieved TC treatment goals. In adjusted analyses, variables significantly associated with a lower hazard of reaching TC treatment goals were as follows: older age (compared to 18-37 year olds: hazard ratio [HR] 0.62 for 45-52 year olds, 95% CI 0.47-0.82; HR 0.40 for 53-85 year olds, 95% CI 0.29-0.54), diabetes (HR 0.39, 95% CI 0.26-0.59), history of coronary heart disease (HR 0.27, 95% CI 0.10-0.71), higher baseline TC (HR 0.78, 95% CI 0.71-0.85), baseline triple nucleoside regimen (HR 0.12 compared to PI-only regimen, 95% CI 0.07-0.21), longer time on a PI-only regimen (HR 0.39, 95% CI 0.33-0.46), longer time on an NNRTI-only regimen (HR 0.35, 95% CI 0.29-0.43), and longer time on a PI/NNRTI regimen (HR 0.34, 95% CI 0.26-0.43). Switching ART regimen when viral load was undetectable was associated with a higher hazard of reaching TC treatment goals (HR 1.48, 95% CI 1.14-1.91). CONCLUSION: In SHCS participants on ART, several ART-related and non-ART-related epidemiological factors were associated with insufficient control of dyslipidemia. Control of dyslipidemia in ART recipients must be further improved.
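To make the reported hazard ratios concrete, here is a minimal sketch of how a Cox-style HR scales a baseline hazard of reaching the TC goal. The constant baseline rate is an assumption for illustration only, not a value from the study.

```python
import math

def survival_prob(t, baseline_rate, hazard_ratio):
    """P(TC goal not yet reached by time t) under a constant baseline hazard
    scaled by a Cox-style hazard ratio (illustrative exponential model)."""
    return math.exp(-baseline_rate * hazard_ratio * t)

baseline = 0.30                                # assumed events/year for ages 18-37
t = 2.3                                        # median follow-up, years
p_young = 1 - survival_prob(t, baseline, 1.0)  # reference group reaches goal
p_old = 1 - survival_prob(t, baseline, 0.40)   # HR 0.40 for 53-85 year olds
print(round(p_young, 2), round(p_old, 2))      # fewer older patients reach the goal
```

A lower hazard ratio directly translates into a smaller fraction of patients reaching the treatment goal over the same follow-up.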

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVES: We sought to determine both the procedural performance and safety of percutaneous implantation of the second (21-French [F])- and third (18-F)-generation CoreValve aortic valve prosthesis (CoreValve Inc., Irvine, California). BACKGROUND: Percutaneous aortic valve replacement represents an emerging alternative therapy for high-risk and inoperable patients with severe symptomatic aortic valve stenosis. METHODS: Patients with: 1) symptomatic, severe aortic valve stenosis (area <1 cm²); 2) age ≥80 years with a logistic EuroSCORE ≥20% (21-F group) or age ≥75 years with a logistic EuroSCORE ≥15% (18-F group); or 3) age ≥65 years plus additional prespecified risk factors were included. Introduction of the 18-F device enabled the transition from a multidisciplinary approach involving general anesthesia, surgical cut-down, and cardiopulmonary bypass to a truly percutaneous approach under local anesthesia without hemodynamic support. RESULTS: A total of 86 patients (21-F, n = 50; 18-F, n = 36) with a mean valve area of 0.66 ± 0.19 cm² (21-F) and 0.54 ± 0.15 cm² (18-F), a mean age of 81.3 ± 5.2 years (21-F) and 83.4 ± 6.7 years (18-F), and a mean logistic EuroSCORE of 23.4 ± 13.5% (21-F) and 19.1 ± 11.1% (18-F) were recruited. Acute device success was 88%. Successful device implantation resulted in a marked reduction of aortic transvalvular gradients (mean pre 43.7 mm Hg vs. post 9.0 mm Hg, p < 0.001) with aortic regurgitation grade remaining unchanged. The acute procedural success rate was 74% (21-F: 78%; 18-F: 69%). Procedural mortality was 6%. The overall 30-day mortality rate was 12%; the combined rate of death, stroke, and myocardial infarction was 22%. CONCLUSIONS: Treatment of severe aortic valve stenosis in high-risk patients with percutaneous implantation of the CoreValve prosthesis is feasible and associated with a lower mortality rate than predicted by risk algorithms.

Relevance: 30.00%

Publisher:

Abstract:

Comments on an article by Kashima et al. (see record 2007-10111-001). In their target article, Kashima and colleagues try to show how a connectionist conceptualization of the self is best suited to capture the self's temporal and socio-culturally contextualized nature. They propose a new model and, to support it, conduct computer simulations of psychological phenomena whose importance for the self has long been clear, even if not formally modeled, such as imitation and the learning of sequence and narrative. As explicated when we advocated connectionist models as a metaphor for the self in Mischel and Morf (2003), we fully endorse the utility of such a metaphor, as these models have some of the processing characteristics necessary for capturing key aspects and functions of a dynamic cognitive-affective self-system. As elaborated in that chapter, we see as their principal strength that connectionist models can take account of multiple simultaneous processes without invoking a single central control. All outputs reflect a distributed pattern of activation across a large number of simple processing units, the nature of which depends on (and changes with) the connection weights between the links and the satisfaction of mutual constraints across these links (Rumelhart & McClelland, 1986). This allows a simple account of why certain input features will at times predominate, while others take over on other occasions. (PsycINFO Database Record (c) 2008 APA, all rights reserved)
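The distributed-activation idea can be illustrated with a minimal sketch: each unit's output is a squashed weighted sum of the inputs, so the "response" is a pattern across all units rather than a single central decision. The network size, weights, and context vectors below are invented for illustration.

```python
import math

def activate(inputs, weights):
    """One settling step: each unit's activation is a sigmoid of the
    weighted sum of all inputs -- a distributed pattern, with no unit
    acting as a central controller (toy connectionist sketch)."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

# Hypothetical 3-unit "self" network; the weight matrix is illustrative only.
W = [[0.8, -0.2, 0.1],
     [0.0,  0.9, -0.4],
     [0.3,  0.3,  0.3]]
context_a = [1.0, 0.0, 0.0]   # one social context as an input cue
context_b = [0.0, 1.0, 0.0]   # a different context
print(activate(context_a, W)) # each context evokes a different distributed pattern
print(activate(context_b, W))
```

Changing the input cue shifts which features of the pattern predominate, which is the mechanism the commentary highlights for context-sensitive behavior of the self-system.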

Relevance: 30.00%

Publisher:

Abstract:

The development of a clinical decision tree based on knowledge about risks and reported outcomes of therapy is a necessity for successful planning and outcome of periodontal therapy. This requires a well-founded knowledge of the disease entity and a broad knowledge of how different risk conditions contribute to periodontitis. The infectious etiology, a complex immune response, and influence from a large number of co-factors are challenging conditions in clinical periodontal risk assessment. The difficult relationship between independent and dependent risk conditions, paired with limited information on periodontitis prevalence, adds to the difficulties of periodontal risk assessment. The current information on periodontitis risk attributed to smoking habits, socio-economic conditions, general health, and subjects' self-perception of health is not comprehensive, and this contributes to limited success in periodontal risk assessment. New models for risk analysis have been advocated; their utility for periodontal risk assessment and prognosis should be tested. The present review addresses several of these issues associated with periodontal risk assessment.

Relevance: 30.00%

Publisher:

Abstract:

It is an important and difficult challenge to protect modern interconnected power systems from blackouts. Applying advanced power system protection techniques and increasing power system stability are ways to improve the reliability and security of power systems. Phasor-domain software packages such as Power System Simulator for Engineers (PSS/E) can be used to study large power systems but cannot be used for transient analysis. In order to observe both power system stability and the transient behavior of the system during disturbances, modeling has to be done in the time domain. This work focuses on modeling of power systems and various control systems in the Alternative Transients Program (ATP). ATP is a time-domain power system modeling software package in which all power system components can be modeled in detail. Models are implemented with attention to component representation and parameters. The synchronous machine model includes the saturation characteristics and a control interface. Transient Analysis of Control Systems (TACS) is used to model the excitation control system, power system stabilizer, and turbine governor system of the synchronous machine. Several base cases of a single machine system are modeled and benchmarked against PSS/E. A two-area system is modeled, and inter-area and intra-area oscillations are observed. The two-area system is reduced to a two-machine system using reduced dynamic equivalencing. The original and reduced systems are benchmarked against PSS/E. This work also includes the simulation of single-pole tripping using one of the base case models. The advantages of single-pole tripping and a comparison of system behavior against three-pole tripping are studied. Results indicate that the built-in control system models in PSS/E can be effectively reproduced in ATP. The benchmarked models correctly simulate the power system dynamics.
The successful implementation of a dynamically reduced system in ATP shows promise for studying a small sub-system of a large system without losing the dynamic behaviors. Other aspects such as relaying can be investigated using the benchmarked models. It is expected that this work will provide guidance in modeling different control systems for the synchronous machine and in representing dynamic equivalents of large power systems.
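The time-domain simulation described above can be illustrated with a toy swing-equation integration for a single machine against an infinite bus: a brief fault depresses electrical power, the rotor angle swings, and damping brings it back toward equilibrium. The machine parameters, fault timing, and simple Euler scheme are illustrative assumptions, not the thesis models.

```python
import math

# Toy single-machine swing equation (a drastically simplified analogue of
# what ATP-style time-domain simulation does); parameters are illustrative.
H, D, Pm, Pmax, f0 = 3.5, 0.05, 0.8, 1.8, 60.0  # inertia, damping, p.u. powers, Hz
delta, omega = math.asin(Pm / Pmax), 0.0         # start at the equilibrium angle
dt, trace = 0.001, []
for step in range(5000):                         # 5 s of simulated time
    Pe = Pmax * math.sin(delta)
    if 1.0 <= step * dt < 1.05:                  # brief fault: electrical power dips
        Pe *= 0.2
    domega = (math.pi * f0 / H) * (Pm - Pe - D * omega)
    omega += domega * dt                         # rotor speed deviation (rad/s)
    delta += omega * dt                          # rotor angle (rad)
    trace.append(delta)
print(round(max(trace), 3))  # first-swing peak angle, above the equilibrium value
```

Even this toy model reproduces the qualitative behavior the thesis benchmarks in detail: an accelerating swing during the disturbance followed by damped oscillation about the operating point.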

Relevance: 30.00%

Publisher:

Abstract:

The primary challenge in groundwater and contaminant transport modeling is obtaining the data needed for constructing, calibrating and testing the models. Large amounts of data are necessary for describing the hydrostratigraphy in areas with complex geology. Increasingly, states are making spatial data available that can be used as input to groundwater flow models. The appropriateness of this data for large-scale flow systems has not been tested. This study focuses on modeling a plume of 1,4-dioxane in a heterogeneous aquifer system in Scio Township, Washtenaw County, Michigan. The analysis consisted of: (1) characterization of the hydrogeology of the area and construction of a conceptual model based on publicly available spatial data; (2) development and calibration of a regional flow model for the site; (3) conversion of the regional model to a more highly resolved local model; (4) simulation of the dioxane plume; and (5) evaluation of the model's ability to simulate field data, and estimation of the possible dioxane sources and their subsequent migration until maximum concentrations are at or below the Michigan Department of Environmental Quality's residential cleanup standard for groundwater (85 ppb). The MODFLOW-2000 and MT3D programs were used to simulate the groundwater flow and the development and movement of the 1,4-dioxane plume, respectively. MODFLOW simulates transient groundwater flow in a quasi-3-dimensional sense, subject to a variety of boundary conditions that can simulate recharge, pumping, and surface water/groundwater interactions. MT3D simulates solute advection with groundwater flow (using the flow solution from MODFLOW), dispersion, source/sink mixing, and chemical reaction of contaminants. This modeling approach was successful at simulating the groundwater flows by calibrating recharge and hydraulic conductivities.
The plume transport was adequately simulated using literature dispersivity and sorption coefficients, although the plume geometries were not well constrained.
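As a rough 1-D analogue of the advection-dispersion transport that MT3D computes in three dimensions from the MODFLOW flow field, the following finite-difference loop marches a constant-concentration source downgradient. The grid, seepage velocity, and dispersivity are invented for illustration, not site values.

```python
# Minimal 1-D advection-dispersion sketch (explicit upwind scheme);
# grid, velocity and dispersivity are illustrative, not from the study.
nx, dx, dt = 100, 10.0, 2.0          # cells, cell size (m), time step (d)
v, aL = 0.5, 20.0                    # seepage velocity (m/d), dispersivity (m)
D = aL * v                           # dispersion coefficient (m^2/d)
c = [0.0] * nx
c[0] = 1000.0                        # constant-concentration source (ppb)
for _ in range(500):                 # 1,000 days
    new = c[:]
    for i in range(1, nx - 1):
        adv = -v * (c[i] - c[i - 1]) / dx                    # upwind advection
        disp = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2  # dispersion term
        new[i] = c[i] + dt * (adv + disp)
    c = new
print(round(c[10]), round(c[50]), round(c[90]))  # declines away from the source
```

The time step is chosen so the explicit scheme stays stable (Courant and diffusion numbers well below their limits), the same kind of constraint MT3D's solvers manage internally.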

Relevance: 30.00%

Publisher:

Abstract:

Civil infrastructure provides essential services for the development of both society and the economy. It is very important to manage these systems efficiently to ensure sound performance. However, there are challenges in extracting information from available data, which also necessitates methodologies and frameworks to assist stakeholders in the decision-making process. This research proposes methodologies to evaluate systems performance by maximizing the use of available information, in an effort to build and maintain sustainable systems. Under the guidance of the holistic problem formulation proposed by Mukherjee and Muga, this research specifically investigates problem-solving methods that measure and analyze metrics to support decision making. Failures are inevitable in system management. A methodology is developed to describe the arrival pattern of failures in order to assist engineers in failure rescues and budget prioritization, especially when funding is limited. It reveals that blockage arrivals are not totally random; smaller meaningful subsets show good random behavior. In addition, the failure rate over time is analyzed by applying existing reliability models and non-parametric approaches. A scheme is further proposed to depict rates over the lifetime of a given facility system. Further analysis of sub-data sets is also performed, with a discussion of context reduction. Infrastructure condition is another important indicator of systems performance. The challenges in predicting facility condition are the transition probability estimates and model sensitivity analysis. Methods are proposed to estimate transition probabilities by investigating the long-term behavior of the model and the relationship between transition rates and probabilities. To integrate heterogeneities, a model sensitivity analysis is performed for the application of a non-homogeneous Markov chain model. Scenarios are investigated by assuming that transition probabilities follow a Weibull-regressed function or fall within an interval estimate. For each scenario, multiple cases are simulated using Monte Carlo simulation. Results show that variations in the outputs are sensitive to the probability regression, whereas for the interval estimate the outputs have variations similar to the inputs. A life cycle cost analysis and life cycle assessment of a sewer system are performed comparing three pipe types: reinforced concrete pipe (RCP), non-reinforced concrete pipe (NRCP), and vitrified clay pipe (VCP). The life cycle cost analysis covers the material extraction, construction and rehabilitation phases. In the rehabilitation phase, a Markov chain model is applied to support the rehabilitation strategy. In the life cycle assessment, the Economic Input-Output Life Cycle Assessment (EIO-LCA) tools are used to estimate environmental emissions for all three phases. Emissions are then compared quantitatively among alternatives to support decision making.
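The condition-prediction step can be sketched with a small Markov chain driven by Monte Carlo simulation: sample yearly transitions through deteriorating condition states and tally the end-of-horizon distribution. The four states and the transition probabilities below are hypothetical, not the study's estimates.

```python
import random

# Hypothetical 4-state condition model (1 = good ... 4 = failed); the
# one-year transition probabilities are illustrative placeholders.
P = {1: [(1, 0.90), (2, 0.10)],
     2: [(2, 0.85), (3, 0.15)],
     3: [(3, 0.80), (4, 0.20)],
     4: [(4, 1.00)]}             # failed is absorbing

def simulate(years, rng):
    """Walk one facility through `years` annual transitions from state 1."""
    state = 1
    for _ in range(years):
        r, acc = rng.random(), 0.0
        for nxt, p in P[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return state

rng = random.Random(42)
runs = [simulate(30, rng) for _ in range(10_000)]
share_failed = runs.count(4) / len(runs)
print(round(share_failed, 3))  # Monte Carlo estimate of the 30-year failure share
```

Rerunning this with transition probabilities drawn from a regression fit or from interval bounds is exactly the kind of scenario sensitivity the abstract describes.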

Relevance: 30.00%

Publisher:

Abstract:

Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology used as a detection and surveillance paradigm in many real-world applications. Individual sensors are small in size, so they can be deployed in areas with limited space and make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks have a few physical limitations that can prevent sensors from performing at their maximum potential. Individual sensors have a limited power supply, and the wireless band can get very cluttered when multiple sensors try to transmit at the same time. Furthermore, individual sensors have limited communication range, so the network may not have a 1-hop communication topology, and routing can be a problem in many cases. Carefully designed algorithms can alleviate the physical limitations of sensor networks and allow them to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks: detecting and tracking targets. It develops feasible inference techniques for sensor networks using statistical graphical model inference, binary sensor detection, event isolation and dynamic clustering. The main strategy is to use only binary data for rough global inferences, and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings. Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks, and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was simulated under real-world conditions: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sq ft apartment. The Bumblebee radars are calibrated to detect a falling human body, and the two-tier tracking algorithm is used on the ultrasonic sensors to track the locations of elderly occupants.
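The two-tier strategy (binary data for a rough global inference, then a dynamic cluster around the target for detailed computation) can be sketched as follows. The sensor grid, detection range, and target position are invented for illustration.

```python
# Two-tier tracking sketch: tier 1 uses only binary detect/no-detect data
# for a coarse estimate; tier 2 forms a small dynamic cluster near that
# estimate for the refined one. Layout and ranges are hypothetical.
sensors = [(x * 10.0, y * 10.0) for x in range(5) for y in range(5)]
target = (23.0, 31.0)
detect_range = 15.0

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Tier 1: global inference from binary detections alone (centroid of hits).
hits = [s for s in sensors if dist(s, target) <= detect_range]
coarse = (sum(s[0] for s in hits) / len(hits),
          sum(s[1] for s in hits) / len(hits))

# Tier 2: only the few sensors closest to the coarse estimate join the
# cluster and contribute to the detailed computation.
cluster = sorted(hits, key=lambda s: dist(s, coarse))[:4]
refined = (sum(s[0] for s in cluster) / len(cluster),
           sum(s[1] for s in cluster) / len(cluster))
print(coarse, refined)
```

Because only the cluster members do detailed work, most of the network stays quiet, which is how this kind of design conserves the limited power and bandwidth noted above.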

Relevance: 30.00%

Publisher:

Abstract:

Most criticism of homeopathy concerns the lack of a scientific basis and theoretical models. In order to be accepted as a valid part of medical practice, a well-structured research strategy for homeopathy is needed. This is often hampered by methodological problems as well as by gross underinvestment in the required academic resources. Fundamental research could make important contributions to our understanding of the mechanisms of action of homeopathy and high dilutions. Since the pioneering works of Kolisko on wheat germination (Kolisko, 1923) and Junker on the growth of microorganisms (paramecium, yeast, fungi) (Junker, 1928), a number of experiments have been performed either with healthy organisms (various physiological aspects of growth) or with artificially diseased organisms, which may react more markedly to homeopathic treatments than healthy ones. In the latter case, the preliminary stress may be either abiotic, e.g. heavy metals, or biotic, e.g. fungal and viral pathogens or nematode infection. Research has also been carried out into the applicability of homeopathic principles to crop growth and disease control (agrohomeopathy): because of the extreme dilutions used, the environmental impact is low and such treatments are well suited to the holistic approach of sustainable agriculture (Betti et al., 2006). Unfortunately, as Scofield reported in an extensive critical review (Scofield, 1984), there is little firm evidence to support the reliability of the reported results, due to poor experimental methodology and inadequate statistical analysis. Moreover, since there is no agricultural homeopathic pharmacopoeia, much work is required to find suitable remedies, potencies and dose levels.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: The aim of this study was to explore the predictive value of longitudinal self-reported adherence data on viral rebound. METHODS: Individuals in the Swiss HIV Cohort Study on combined antiretroviral therapy (cART) with RNA <50 copies/ml over the previous 3 months and who were interviewed about adherence at least once prior to 1 March 2007 were eligible. Adherence was defined in terms of missed doses of cART (0, 1, 2 or >2) in the previous 28 days. Viral rebound was defined as RNA >500 copies/ml. Cox regression models with time-independent and time-dependent covariates were used to evaluate time to viral rebound. RESULTS: A total of 2,664 individuals and 15,530 visits were included. Across all visits, non-adherence was reported as follows: 1 missed dose 14.7%, 2 doses 5.1%, >2 doses 3.8%; taking <95% of doses 4.5%; and missing ≥2 consecutive doses 3.2%. In total, 308 (11.6%) patients experienced viral rebound. After controlling for confounding variables, self-reported non-adherence remained significantly associated with the rate of occurrence of viral rebound (compared with zero missed doses: 1 dose, hazard ratio [HR] 1.03, 95% confidence interval [CI] 0.72-1.48; 2 doses, HR 2.17, 95% CI 1.46-3.25; >2 doses, HR 3.66, 95% CI 2.50-5.34). Several variables significantly associated with an increased risk of viral rebound irrespective of adherence were identified: being on a protease inhibitor or triple nucleoside regimen (compared with a non-nucleoside reverse transcriptase inhibitor), >5 previous cART regimens, seeing a less-experienced physician, taking co-medication, and a shorter time virally suppressed. CONCLUSIONS: A simple self-report adherence questionnaire administered repeatedly provides a sensitive measure of non-adherence that predicts viral rebound.

Relevance: 30.00%

Publisher:

Abstract:

Biofuels are an increasingly important component of worldwide energy supply. This research aims to understand the pathways and impacts of biofuels production, and to improve these processes to make them more efficient. In Chapter 2, a life cycle assessment (LCA) is presented for cellulosic ethanol production from five potential feedstocks of regional importance to the upper Midwest - hybrid poplar, hybrid willow, switchgrass, diverse prairie grasses, and logging residues - according to the requirements of the Renewable Fuel Standard (RFS). Direct land use change emissions are included for the conversion of abandoned agricultural land to feedstock production, and computer models of the conversion process are used in order to determine the effect of varying biomass composition on overall life cycle impacts. All scenarios analyzed here result in a greater than 60% reduction in greenhouse gas emissions relative to petroleum gasoline. Land use change effects were found to contribute significantly to the overall emissions for the first 20 years after plantation establishment. Chapter 3 is an investigation of the effects of biomass mixtures on overall sugar recovery from the combined processes of dilute acid pretreatment and enzymatic hydrolysis. The biomass species studied were aspen, a hardwood species well suited to biochemical processing; balsam, a high-lignin softwood species; and switchgrass, an herbaceous energy crop with high ash content. A matrix of three dilute acid pretreatment severities and three enzyme loading levels was used to characterize interactions between pretreatment and enzymatic hydrolysis. The maximum glucose yield for any species was 70% of theoretical (switchgrass), and the maximum xylose yield was 99.7% of theoretical (aspen).
Supplemental β-glucosidase increased glucose yield from enzymatic hydrolysis by an average of 15%, and total sugar recoveries for mixtures could be predicted to within 4% by linear interpolation of the pure-species results. Chapter 4 is an evaluation of the potential for producing Trichoderma reesei cellulose hydrolases in the Kluyveromyces lactis yeast expression system. The exoglucanases Cel6A and Cel7A and the endoglucanase Cel7B were inserted separately into K. lactis, and the enzymes were analyzed for activity on various substrates. Recombinant Cel7B was found to be active on carboxymethyl cellulose and Avicel powdered cellulose substrates. Recombinant Cel6A was also found to be active on Avicel. Recombinant Cel7A was produced, but no enzymatic activity was detected on any substrate. Chapter 5 presents a new method for enzyme improvement studies using enzyme co-expression and yeast growth rate measurements as a potential high-throughput expression and screening system in K. lactis. Two different K. lactis strains were evaluated for their usefulness in growth screening studies: one wild-type strain and one strain in which the main galactose metabolic pathway has been disabled. Sequential transformation and co-expression of the exoglucanase Cel6A and the endoglucanase Cel7B was performed, and improved hydrolysis rates on Avicel were detectable in the cell culture supernatant. Future work should focus on hydrolysis of natural substrates, developing the growth screening method, and utilizing the K. lactis expression system for directed evolution of enzymes.
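The linear-interpolation prediction for mixtures mentioned in Chapter 3 amounts to a mass-weighted mean of the pure-species results; a minimal sketch follows. The recovery values are illustrative placeholders, not the measured data.

```python
# Mass-weighted linear interpolation of pure-species sugar recoveries to
# predict a mixture's recovery; recovery fractions are illustrative only.
pure_recovery = {"aspen": 0.82, "balsam": 0.35, "switchgrass": 0.64}

def predict_mixture(fractions):
    """fractions: {species: mass fraction}; fractions must sum to 1."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(f * pure_recovery[sp] for sp, f in fractions.items())

blend = {"aspen": 0.5, "balsam": 0.25, "switchgrass": 0.25}
predicted = predict_mixture(blend)
print(round(predicted, 4))  # mass-weighted prediction for this blend
```

The study's finding that measured mixtures fall within 4% of this kind of prediction implies little interaction between species during pretreatment and hydrolysis.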

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: The association between aortic valve disease and coronary atherosclerosis is common. In the recent era of percutaneous aortic valve replacement (PAVR), there is little experience with coronary artery intervention after valve implantation. CASE REPORT: To our knowledge, this is the first case of successful percutaneous coronary intervention after implantation of a CoreValve percutaneous aortic valve. We report the case of a 79-year-old female patient who underwent successful coronary artery intervention a few months after percutaneous implantation of a CoreValve prosthesis for severe aortic valve stenosis. Verifying the position of the wires used (crossing from inside the self-expanding frame) is of utmost importance before proceeding to coronary intervention. In this case, crossing the aortic valve, coronary angiography, and multivessel stenting were successfully performed. CONCLUSION: Percutaneous coronary intervention in patients with a previously implanted CoreValve is feasible and safe.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Alcohol consumption is a cause of morbidity and mortality that also affects HIV-infected individuals. Here, we aimed to study self-reported alcohol consumption and to determine its association with adherence to antiretroviral therapy (ART) and HIV surrogate markers. METHODS: Cross-sectional data on daily alcohol consumption from August 2005 to August 2007 were analysed and categorized according to the World Health Organization definition (light, moderate or severe health risk). Multivariate logistic regression models and Pearson's χ² statistics were used to test the influence of alcohol use on endpoints. RESULTS: Of 6,323 individuals, 52.3% consumed alcohol less than once a week in the past 6 months. Alcohol intake was deemed light in 39.9%, moderate in 5.0% and severe in 2.8%. Higher alcohol consumption was significantly associated with older age, less education, injection drug use, being in a drug maintenance programme, psychiatric treatment, hepatitis C virus coinfection and a longer time since HIV diagnosis. Lower alcohol consumption was found in males, non-Caucasians, individuals currently on ART and those with more ART experience. In patients on ART (n=4,519), missed doses and alcohol consumption were positively correlated (P<0.001). Severe alcohol consumers who were pretreated with ART were more often off treatment despite having a CD4+ T-cell count <200 cells/μl; however, severe alcohol consumption per se did not delay starting ART. In treated individuals, alcohol consumption was not associated with worse HIV surrogate markers. CONCLUSIONS: Higher alcohol consumption in HIV-infected individuals was associated with several psychosocial and demographic factors, non-adherence to ART and, in pretreated individuals, being off treatment despite low CD4+ T-cell counts.

Relevance: 30.00%

Publisher:

Abstract:

Low self-esteem and depression are strongly correlated in cross-sectional studies, yet little is known about their prospective effects on each other. The vulnerability model hypothesizes that low self-esteem serves as a risk factor for depression, whereas the scar model hypothesizes that low self-esteem is an outcome, not a cause, of depression. To test these models, the authors used 2 large longitudinal data sets, each with 4 repeated assessments between the ages of 15 and 21 years and 18 and 21 years, respectively. Cross-lagged regression analyses indicated that low self-esteem predicted subsequent levels of depression, but depression did not predict subsequent levels of self-esteem. These findings held for both men and women and after controlling for content overlap between the self-esteem and depression scales. Thus, the results supported the vulnerability model, but not the scar model, of self-esteem and depression.
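A cross-lagged analysis of this kind can be sketched on simulated data: generate two measurement waves under a vulnerability-model assumption (self-esteem predicts later depression, not vice versa), then estimate each lagged path while controlling for the stability path. All coefficients below are illustrative, not the study's estimates.

```python
import random

# Simulate two waves under the vulnerability model, then recover the paths.
rng = random.Random(7)
n = 20_000
se1 = [rng.gauss(0, 1) for _ in range(n)]                    # wave-1 self-esteem
dep1 = [-0.4 * x + rng.gauss(0, 1) for x in se1]             # concurrent link
se2 = [0.6 * x + rng.gauss(0, 0.8) for x in se1]             # stability only
dep2 = [0.5 * d - 0.3 * x + rng.gauss(0, 0.8)                # vulnerability path
        for d, x in zip(dep1, se1)]

def ols2(y, x1, x2):
    """Slopes of y regressed on (x1, x2), via 2x2 normal equations."""
    def cov(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        return sum((u - ma) * (v - mb) for u, v in zip(a, b))
    s11, s22, s12 = cov(x1, x1), cov(x2, x2), cov(x1, x2)
    s1y, s2y = cov(x1, y), cov(x2, y)
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

stab, cross_se_to_dep = ols2(dep2, dep1, se1)  # dep2 ~ dep1 + se1
_, cross_dep_to_se = ols2(se2, se1, dep1)      # se2  ~ se1  + dep1
print(round(cross_se_to_dep, 2), round(cross_dep_to_se, 2))
```

Under these assumptions, the self-esteem-to-depression path is recovered as clearly negative while the reverse path is near zero, mirroring the asymmetry the study reports.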