934 results for Single-commodity capacitated network design problem


Relevance:

30.00%

Publisher:

Abstract:

The Phase I research, Iowa Department of Transportation (IDOT) Project HR-214, "Feasibility Study of Strengthening Existing Single Span Steel Beam Concrete Deck Bridges," verified that post-tensioning can be used to provide strengthening of the composite bridges under investigation. Phase II research, reported here, involved the strengthening of two full-scale prototype bridges - one a prototype of the model bridge tested during Phase I and the other larger and skewed. In addition to the field work, Phase II also involved a considerable amount of laboratory work. A literature search revealed that only minimal data existed on the angle-plus-bar shear connectors. Thus, several specimens utilizing angle-plus-bar, as well as channels, studs and high strength bolts as shear connectors were fabricated and tested. To obtain additional shear connector information, the bridge model of Phase I was sawed into four composite concrete slab and steel beam specimens. Two of the resulting specimens were tested with the original shear connection, while the other two specimens had additional shear connectors added before testing. Although orthotropic plate theory was shown in Phase I to predict vertical load distribution in bridge decks and to predict approximate distribution of post-tensioning for right-angle bridges, it was questioned whether the theory could also be used on skewed bridges. Thus, a small plexiglas model was constructed and used in vertical load distribution tests and post-tensioning force distribution tests for verification of the theory. Conclusions of this research are as follows: (1) The capacity of existing shear connectors must be checked as part of a bridge strengthening program. Determination of the concrete deck strength in advance of bridge strengthening is also recommended. (2) The ultimate capacity of angle-plus-bar shear connectors can be computed on the basis of a modified AASHTO channel connector formula and an angle-to-beam weld capacity check. (3) Existing shear connector capacity can be augmented by means of double-nut high strength bolt connectors. (4) Post-tensioning did not significantly affect truck load distribution for right angle or skewed bridges. (5) Approximate post-tensioning and truck load distribution for actual bridges can be predicted by orthotropic plate theory for vertical load; however, the agreement between actual distribution and theoretical distribution is not as close as that measured for the laboratory model in Phase I. (6) The right angle bridge exhibited considerable end restraint at what would be assumed to be simple support. The construction details at bridge abutments seem to be the reason for the restraint. (7) The skewed bridge exhibited more end restraint than the right angle bridge. Both skew effects and construction details at the abutments accounted for the restraint. (8) End restraint in the right angle and skewed bridges reduced tension strains in the steel bridge beams due to truck loading, but also reduced the compression strains caused by post-tensioning.
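
The modified angle-plus-bar expression itself is not reproduced in this abstract. For orientation only, a minimal sketch of one commonly cited form of the AASHTO channel-connector ultimate-strength equation, which the abstract says the modification is based on, is given below (US customary units); the section properties in the example are hypothetical, and the accompanying angle-to-beam weld capacity check is not shown.

```python
import math

# Sketch of the classic AASHTO channel shear connector ultimate-strength formula
# (US customary units). The report's *modified* angle-plus-bar version and the
# angle-to-beam weld capacity check are not given in this abstract.

def channel_connector_ultimate_strength(h, t, w, fc):
    """Ultimate strength Su (lb) of a channel shear connector.

    h  : average flange thickness of the channel (in)
    t  : web thickness of the channel (in)
    w  : length of the channel (in)
    fc : 28-day compressive strength of the deck concrete (psi)
    """
    return 550.0 * (h + t / 2.0) * w * math.sqrt(fc)

# Hypothetical example: a 6 in long channel in 4,000 psi deck concrete
print(channel_connector_ultimate_strength(h=0.30, t=0.18, w=6.0, fc=4000.0))
```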

Relevance:

30.00%

Publisher:

Abstract:

The unifying objective of Phases I and II of this study was to determine the feasibility of the post-tensioning strengthening method and to implement the technique on two composite bridges in Iowa. Following completion of these two phases, Phase III was undertaken and is documented in this report. The basic objectives of Phase III were to further monitor bridge behavior (both during and after post-tensioning) and to develop a practical methodology for designing the strengthening system under investigation. Specific objectives were: to develop strain and force transducers to facilitate the collection of field data; to investigate further the existence and effects of the end restraint on the post-tensioning process; to determine the amount of post-tensioning force loss that occurred during the time between the initial testing and the retesting of the existing bridges; to determine the significance of any temporary temperature-induced post-tensioning force change; and to develop a simplified design methodology that would incorporate variables such as span length, angle of skew, beam spacing, and concrete strength. Experimental field results obtained during Phases II and III were compared to the theoretical results and to each other. Conclusions from this research are as follows: (1) Strengthening single-span composite bridges by post-tensioning is a viable, economical strengthening technique. (2) Behavior of both bridges was similar to the behavior observed during the field tests conducted under Phase II. (3) The strain transducers were very accurate at measuring midspan strain. (4) The force transducers gave excellent results under laboratory conditions, but were found to be less effective when used in actual bridge tests. (5) Loss of post-tensioning force due to temperature effects in any particular steel beam post-tensioning tendon system was found to be small. (6) Loss of post-tensioning force over a two-year period was minimal. (7) Significant end restraint was measured in both bridges, caused primarily by reinforcing steel being continuous from the deck into the abutments. This end restraint reduced the effectiveness of the post-tensioning but also reduced midspan strains due to truck loadings. (8) The SAP IV finite element model is capable of accurately modeling the behavior of a post-tensioned bridge if guardrails and end restraints are included in the model. (9) Post-tensioning distribution should be separated into distributions for the axial force and moment components of an eccentric post-tensioning force. (10) Skews of 45 deg or less have a minor influence on post-tensioning distribution. (11) For typical Iowa three-beam and four-beam composite bridges, simple regression-derived formulas for force and moment fractions can be used to estimate post-tensioning distribution at midspan. At other locations, a simple linear interpolation gives approximately correct results. (12) A simple analytical model can accurately estimate the flexural strength of an isolated post-tensioned composite beam.
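
Conclusion (9) suggests a simple way such distribution fractions could be used in design. The hedged sketch below shows how an eccentric post-tensioning force, once split into an axial component and a moment component that are each distributed by their own fraction, could be turned into an estimated midspan stress change for a single beam. The fraction values, section properties, and sign convention are hypothetical illustrations; the report's regression formulas are not given in this abstract.

```python
# Hedged illustration of conclusions (9) and (11): an eccentric post-tensioning
# force P is split into an axial component (distributed by a force fraction FF)
# and a moment component P*e (distributed by a moment fraction MF), and the two
# shares are combined into a midspan stress change for one beam. All numbers
# are placeholders, not values from the report.

def midspan_stress_change(P, e, FF, MF, A, S):
    """Approximate bottom-flange stress change (ksi); compression negative.

    P  : total post-tensioning force applied to the bridge (kip)
    e  : tendon eccentricity below the composite neutral axis (in)
    FF : fraction of the axial force resisted by this beam
    MF : fraction of the post-tensioning moment resisted by this beam
    A  : composite area of the beam section (in^2)
    S  : composite section modulus for the bottom flange (in^3)
    """
    axial = -FF * P / A         # uniform compression from this beam's axial share
    flexural = -MF * P * e / S  # bottom-flange compression from this beam's moment share
    return axial + flexural

print(midspan_stress_change(P=200.0, e=22.0, FF=0.40, MF=0.45, A=75.0, S=420.0))
```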

Relevance:

30.00%

Publisher:

Abstract:

Culverts are a common means of conveying flow from small streams through the roadway system. In general, larger flows and higher road embankments entail the use of multibarrel (a.k.a. multi-box) culverts. Box culverts are generally designed to handle events with a 50-year return period, and therefore convey considerably lower flows much of the time. While there are no issues with conveying high flows, many multi-box culverts in Iowa pose a significant problem related to sedimentation. The highly erosive Iowa soils can easily lead to a situation in which some of the barrels silt in soon after construction, becoming partially filled with sediment within a few years. Silting can considerably reduce the capacity of the culvert to handle larger flow events. Phase I of this Iowa Highway Research Board project (TR-545) led to an innovative solution for preventing sedimentation. The solution was comprehensively investigated through laboratory experiments and numerical modeling aimed at screening design alternatives and testing their hydraulic and sediment conveyance performance. Following this study phase, the Technical Advisory Committee suggested implementing the recommended sediment mitigation design at a field site. The site selected for implementation was a 3-box culvert crossing Willow Creek on IA Hwy 1W in Iowa City. The culvert was constructed in 1981 and the first cleanup was needed in 2000. Phase II of TR-545 entailed monitoring the site with and without the self-cleaning sedimentation structure in place (similar to the study conducted in the laboratory). The first monitoring stage (September 2010 to December 2012) was aimed at providing a baseline for the operation of the as-designed culvert. To support the Phase II research, a cleanup of the IA Hwy 1W culvert was conducted in September 2011. Subsequently, a monitoring program was initiated to document the sedimentation produced by individual and multiple storms propagating through the culvert. The first two years of monitoring showed the inception of sedimentation in the first spring following the cleanup. Sedimentation continued to increase throughout the monitoring program, following the depositional patterns observed in the laboratory tests and those documented in the pre-cleaning surveys. The second part of Phase II was aimed at monitoring the constructed self-cleaning structure. Since its construction in December 2012, the culvert site has been continuously monitored through systematic observations. The evidence garnered in this phase of the study demonstrates the good performance of the self-cleaning structure in mitigating sediment deposition at culverts. Besides their beneficial role in sediment mitigation, the designed self-cleaning structures maintain a clean and clear area upstream of the culvert and keep a healthy flow through the central barrel, offering hydraulic conditions and aquatic habitat similar to those in the undisturbed stream reaches upstream and downstream of the culvert. It can be concluded that the proposed self-cleaning structural solution “streamlines” the area upstream of the culvert in a way that secures the safety of the culvert structure at high flows while producing much less disturbance to the stream compared with current construction approaches.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Aromatase inhibitors provide superior disease control when compared with tamoxifen as adjuvant therapy for postmenopausal women with endocrine-responsive early breast cancer. PURPOSE: To present the design, history, and analytic challenges of the Breast International Group (BIG) 1-98 trial: an international, multicenter, randomized, double-blind, phase III study comparing the aromatase inhibitor letrozole with tamoxifen in this clinical setting. METHODS: From 1998 to 2003, BIG 1-98 enrolled 8028 women to receive monotherapy with either tamoxifen or letrozole for 5 years, or sequential therapy of 2 years of one agent followed by 3 years of the other. Randomization to one of four treatment groups permitted two complementary analyses to be conducted several years apart. The first, reported in 2005, provided a head-to-head comparison of letrozole versus tamoxifen. Statistical power was increased by an enriched design, which included patients who were assigned sequential treatments until the time of the treatment switch. The second, reported in late 2008, used a conditional landmark approach to test the hypothesis that switching endocrine agents at approximately 2 years from randomization for patients who are disease-free is superior to continuing with the original agent. RESULTS: The 2005 analysis showed the superiority of letrozole compared with tamoxifen. The patients who were assigned tamoxifen alone were unblinded and offered the opportunity to switch to letrozole. Results from other trials increased the clinical relevance of the question of whether to start treatment with letrozole or tamoxifen, and analysis plans were expanded to evaluate sequential versus single-agent strategies from randomization. LIMITATIONS: Due to the unblinding of patients assigned tamoxifen alone, analysis of updated data will require ascertainment of the influence of selective crossover from tamoxifen to letrozole. CONCLUSIONS: BIG 1-98 is an example of an enriched design, involving complementary analyses addressing different questions several years apart, and subject to evolving analytic plans influenced by new data that emerge over time.
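
As one way to picture the conditional landmark approach mentioned above, the sketch below restricts the comparison to patients still disease-free at roughly 2 years from randomization and compares subsequent disease-free survival between the switching and continuing strategies with a log-rank test. The column names, the landmark handling, and the choice of test are assumptions for illustration; this is not the trial's actual analysis code.

```python
import pandas as pd
from lifelines.statistics import logrank_test

LANDMARK_YEARS = 2.0  # approximate time of the planned treatment switch

def landmark_comparison(df: pd.DataFrame):
    """Compare 'switch' vs. 'continue' among patients disease-free at the landmark.

    Assumed columns: dfs_years (time to event or censoring from randomization),
    event (1 = disease-free survival event, 0 = censored), strategy.
    """
    # Keep only patients whose follow-up extends event-free past the landmark.
    at_risk = df[df["dfs_years"] > LANDMARK_YEARS].copy()
    # Restart the clock at the landmark.
    at_risk["time_from_landmark"] = at_risk["dfs_years"] - LANDMARK_YEARS
    switch = at_risk[at_risk["strategy"] == "switch"]
    cont = at_risk[at_risk["strategy"] == "continue"]
    return logrank_test(
        switch["time_from_landmark"], cont["time_from_landmark"],
        event_observed_A=switch["event"], event_observed_B=cont["event"],
    )
```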

Relevance:

30.00%

Publisher:

Abstract:

Duchenne muscular dystrophy (DMD) is an X-linked genetic disease, caused by the absence of the dystrophin protein. Although many novel therapies are under development for DMD, there is currently no cure and affected individuals are often confined to a wheelchair by their teens and die in their twenties/thirties. DMD is a rare disease (prevalence <5/10,000). Even the largest countries do not have enough affected patients to rigorously assess novel therapies, unravel genetic complexities, and determine patient outcomes. TREAT-NMD is a worldwide network for neuromuscular diseases that provides an infrastructure to support the delivery of promising new therapies for patients. The harmonized implementation of national and ultimately global patient registries has been central to the success of TREAT-NMD. For the DMD registries within TREAT-NMD, individual countries have chosen to collect patient information in the form of standardized patient registries to increase the overall patient population on which clinical outcomes and new technologies can be assessed. The registries comprise more than 13,500 patients from 31 different countries. Here, we describe how the TREAT-NMD national patient registries for DMD were established. We look at their continued growth and assess how successful they have been at fostering collaboration between academia, patient organizations, and industry.

Relevance:

30.00%

Publisher:

Abstract:

The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed with the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of those distributions is investigated for each specific data division defined by the moment at which the intervention is introduced. Another aim of the study was to test the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases has considerably fewer measurement times, Type I errors may be too probable and, hence, the decision-making process carried out by applied researchers may be jeopardized.
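
Since the construction of the randomization distribution is the core of the procedure described above, a minimal sketch is given below. The choice of the absolute difference in phase means as the test statistic and the minimum phase length are assumptions for illustration, not necessarily those of the study.

```python
import numpy as np

def ab_randomization_test(y, actual_start, min_phase_len=3):
    """Randomization test for a two-phase (AB) single-case design.

    y            : measurements in time order
    actual_start : index of the first phase-B (intervention) observation
    min_phase_len: smallest admissible number of observations per phase
    """
    y = np.asarray(y, dtype=float)
    # Admissible intervention points under the random-assignment scheme.
    candidates = range(min_phase_len, len(y) - min_phase_len + 1)

    def stat(start):  # test statistic for a given data division
        return abs(y[start:].mean() - y[:start].mean())

    observed = stat(actual_start)
    distribution = np.array([stat(s) for s in candidates])
    # p-value: proportion of divisions at least as extreme as the observed one.
    p_value = float(np.mean(distribution >= observed))
    return observed, distribution, p_value

# Hypothetical 12-point series with the intervention introduced at index 6
obs, dist, p = ab_randomization_test(
    [3, 4, 3, 5, 4, 4, 7, 8, 7, 9, 8, 8], actual_start=6)
print(obs, p)
```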

Relevance:

30.00%

Publisher:

Abstract:

Single amino acid substitution is the type of protein alteration most related to human diseases. Current studies seek primarily to distinguish neutral mutations from harmful ones. Very few methods offer an explanation of the final prediction result in terms of the probable structural or functional effect on the protein. In this study, we describe the use of three novel parameters to identify experimentally verified critical residues of the TP53 protein (p53). The first two parameters make use of a surface clustering method to calculate the protein surface area of highly conserved regions or regions with a high nonlocal atomic interaction energy (ANOLEA) score. These parameters help identify important functional regions on the surface of a protein. The last parameter involves the use of a new method for pseudobinding free-energy estimation to specifically probe the importance of residue side-chains to the stability of the protein fold. A decision tree was designed to optimally combine these three parameters. The result was compared to the functional data stored in the International Agency for Research on Cancer (IARC) TP53 mutation database. The final prediction achieved an accuracy of 70% and a Matthews correlation coefficient of 0.45. It also showed a high specificity of 91.8%. Mutations in the 85 correctly identified important residues represented 81.7% of the total mutations recorded in the database. In addition, the method was able to correctly assign a probable functional or structural role to these residues. Such information could be critical for the interpretation and prediction of the effect of missense mutations, as it not only provides a fundamental explanation of the observed effect, but also helps design the most appropriate laboratory experiment to verify the prediction results.
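
For reference, the evaluation metrics quoted above (accuracy, specificity, and the Matthews correlation coefficient) can all be computed from a binary confusion matrix of critical versus non-critical residue calls, as in the sketch below; the counts in the example are placeholders, not the paper's confusion matrix.

```python
import math

def evaluation_metrics(tp, tn, fp, fn):
    """Accuracy, specificity and Matthews correlation coefficient (MCC)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, specificity, mcc

# Placeholder counts for illustration only.
print(evaluation_metrics(tp=30, tn=50, fp=5, fn=15))
```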

Relevance:

30.00%

Publisher:

Abstract:

The asphalt concrete (AC) dynamic modulus (|E*|) is a key design parameter in mechanistic-based pavement design methodologies such as the American Association of State Highway and Transportation Officials (AASHTO) MEPDG/Pavement-ME Design. The objective of this feasibility study was to develop frameworks for predicting the AC |E*| master curve from falling weight deflectometer (FWD) deflection-time history data collected by the Iowa Department of Transportation (Iowa DOT). A neural network (NN) methodology was developed, based on a synthetically generated viscoelastic forward solutions database, to predict AC relaxation modulus (E(t)) master curve coefficients from FWD deflection-time history data. According to the theory of viscoelasticity, if the AC relaxation modulus, E(t), is known, |E*| can be calculated (and vice versa) through numerical inter-conversion procedures. Several case studies focusing on full-depth AC pavements were conducted to isolate potential backcalculation issues that are related only to the modulus master curve of the AC layer. For the proof-of-concept demonstration, a comprehensive full-depth AC analysis was carried out through 10,000 batch simulations using a viscoelastic forward analysis program. Anomalies were detected in the comprehensive raw synthetic database and were eliminated through the imposition of certain constraints involving the sigmoid master curve coefficients. The surrogate forward modeling results showed that NNs are able to predict deflection-time histories from E(t) master curve coefficients and other layer properties very well. The NN inverse modeling results demonstrated the potential of NNs to backcalculate the E(t) master curve coefficients from single-drop FWD deflection-time history data, although the current prediction accuracies are not sufficient to recommend these models for practical implementation. Considering the complex nature of the problem investigated, with many uncertainties involved, including the possible presence of dynamics during FWD testing (related to the presence and depth of a stiff layer, inertial and wave propagation effects, etc.), the limitations of current FWD technology (integration errors, truncation issues, etc.), and the need for a rapid and simplified approach for routine implementation, future research recommendations have been provided, making a strong case for an expanded research study.
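
The "sigmoid master curve coefficients" mentioned above follow the sigmoidal master curve form commonly used in the MEPDG framework. A minimal sketch of the standard |E*| version of that form is given below; the coefficient values are illustrative placeholders, not results from this study.

```python
import numpy as np

def dynamic_modulus(reduced_frequency, delta, alpha, beta, gamma):
    """Sigmoidal master curve: log10|E*| = delta + alpha / (1 + exp(beta + gamma*log10(f_r))).

    reduced_frequency : loading frequency shifted to the reference temperature (Hz)
    delta, alpha      : lower asymptote and span of log10|E*|
    beta, gamma       : shape parameters of the sigmoid
    """
    log_fr = np.log10(reduced_frequency)
    log_e_star = delta + alpha / (1.0 + np.exp(beta + gamma * log_fr))
    return 10.0 ** log_e_star  # in whatever stress units the coefficients imply (e.g., psi)

# Illustrative sweep over several decades of reduced frequency with placeholder coefficients.
f_r = np.logspace(-6, 6, 13)
print(dynamic_modulus(f_r, delta=2.5, alpha=4.0, beta=-1.0, gamma=-0.5))
```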

Relevance:

30.00%

Publisher:

Abstract:

Chronic kidney disease (CKD), an impairment of kidney function, is a serious public health problem, and the assessment of genetic factors influencing kidney function has substantial clinical relevance. Here, we report a meta-analysis of genome-wide association studies for kidney function-related traits, including 71,149 East Asian individuals from 18 studies in 11 population-, hospital- or family-based cohorts, conducted as part of the Asian Genetic Epidemiology Network (AGEN). Our meta-analysis identified 17 loci newly associated with kidney function-related traits, including the concentrations of blood urea nitrogen, uric acid and serum creatinine and the estimated glomerular filtration rate based on serum creatinine levels (eGFRcrea) (P < 5.0 × 10⁻⁸). We further examined these loci with in silico replication in individuals of European ancestry from the KidneyGen, CKDGen and GUGC consortia, comprising a combined total of ∼110,347 individuals. We identified pleiotropic associations among these loci with kidney function-related traits and risk of CKD. These findings provide new insights into the genetics of kidney function.
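
The abstract does not spell out the meta-analytic model. As an illustration of the kind of computation involved, the sketch below shows a fixed-effect, inverse-variance-weighted combination of per-cohort effect estimates for a single SNP, tested against the genome-wide threshold P < 5.0 × 10⁻⁸ quoted above; whether this is the exact model used is an assumption, and the effect sizes and standard errors are made-up numbers.

```python
import numpy as np
from scipy import stats

GENOME_WIDE_THRESHOLD = 5.0e-8

def inverse_variance_meta(betas, standard_errors):
    """Fixed-effect, inverse-variance-weighted meta-analysis for one SNP."""
    betas = np.asarray(betas, dtype=float)
    weights = 1.0 / np.asarray(standard_errors, dtype=float) ** 2
    beta_meta = np.sum(weights * betas) / np.sum(weights)
    se_meta = np.sqrt(1.0 / np.sum(weights))
    z = beta_meta / se_meta
    p_value = 2.0 * stats.norm.sf(abs(z))
    return beta_meta, se_meta, p_value

# Made-up per-cohort estimates for illustration only.
beta, se, p = inverse_variance_meta([0.021, 0.018, 0.025], [0.004, 0.006, 0.005])
print(beta, se, p, p < GENOME_WIDE_THRESHOLD)
```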

Relevance:

30.00%

Publisher:

Abstract:

The cross-recognition of peptides by cytotoxic T lymphocytes is a key element in immunology and in particular in peptide-based immunotherapy. Here we develop three-dimensional (3D) quantitative structure-activity relationships (QSARs) to predict cross-recognition by Melan-A-specific cytotoxic T lymphocytes of peptides bound to HLA A*0201 (hereafter referred to as HLA A2). First, we predict the structure of a set of self- and pathogen-derived peptides bound to HLA A2 using a previously developed ab initio structure prediction approach [Fagerberg et al., J. Mol. Biol., 521-46 (2006)]. Second, shape and electrostatic energy calculations are performed on a 3D grid to produce similarity matrices, which are combined with a genetic neural network method [So et al., J. Med. Chem., 4347-59 (1997)] to generate 3D-QSAR models. The models are extensively validated using several different approaches. During the model generation, the leave-one-out cross-validated correlation coefficient (q²) is used as the fitness criterion and all obtained models are evaluated based on their q² values. Moreover, the best model obtained for a partitioned data set is evaluated by its correlation coefficient (r = 0.92 for the external test set). The physical relevance of all models is tested using a functional dependence analysis and the robustness of the models obtained for the entire data set is confirmed using y-randomization. Finally, the validated models are tested for their utility in the setting of rational peptide design: their ability to discriminate between peptides that only contain side chain substitutions in a single secondary anchor position is evaluated. In addition, the predicted cross-recognition of the mono-substituted peptides is confirmed experimentally in chromium-release assays. These results underline the utility of 3D-QSARs in peptide mimetic design and suggest that the properties of the unbound epitope are sufficient to capture most of the information to determine the cross-recognition.
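
Because q² serves as the fitness criterion throughout, a minimal sketch of its leave-one-out computation is given below, with an ordinary linear model standing in for the paper's genetic neural network and randomly generated descriptors standing in for the similarity matrices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS."""
    y = np.asarray(y, dtype=float)
    press = 0.0
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])
        residual = y[test_idx][0] - model.predict(X[test_idx])[0]
        press += residual ** 2
    ss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss

# Placeholder data: 20 "peptides" with 3 descriptors each.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X @ np.array([0.8, -0.3, 0.5]) + rng.normal(scale=0.1, size=20)
print(loo_q2(X, y))
```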

Relevance:

30.00%

Publisher:

Abstract:

There is a lack of dedicated tools for business model design at a strategic level. However, in today's economic world the ability to quickly reinvent a company's business model is essential to staying competitive. This research focused on identifying the functionalities that are necessary in a computer-aided design (CAD) tool for the design of business models in a strategic context. Using design science research methodology, a series of techniques and prototypes were designed and evaluated to offer solutions to the problem. The work is a collection of articles that can be grouped into three parts. The first part establishes the context of how the Business Model Canvas (BMC) is used to design business models and explores the way in which CAD can contribute to the design activity. The second part extends this by proposing new techniques and tools that support the elicitation, evaluation (assessment), and evolution of business model designs with CAD. These include features such as multi-color tagging to easily connect elements, rules to validate the coherence of business models, and features adapted to the business model proficiency level of their users. A new way to describe and visualize multiple versions of a business model, and thereby help address the business model as a dynamic object, was also researched. The third part explores extensions to the Business Model Canvas, such as an intermediary model that supports IT alignment by connecting the business model to the enterprise architecture, and a business model pattern for privacy in a mobile environment that uses privacy as a key value proposition. The prototyped techniques and the proposal for using CAD tools in business modeling will allow commercial CAD developers to create tools that are better suited to the needs of practitioners.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Scientists have long been trying to understand the molecular mechanisms of diseases in order to design preventive and therapeutic strategies. For some diseases, it has become evident that it is not enough to obtain a catalogue of the disease-related genes; rather, one must uncover how disruptions of molecular networks in the cell give rise to disease phenotypes. Moreover, with the unprecedented wealth of information available, even obtaining such a catalogue is extremely difficult. PRINCIPAL FINDINGS: We developed a comprehensive gene-disease association database by integrating associations from several sources that cover different biomedical aspects of diseases. In particular, we focus on the current knowledge of human genetic diseases, including mendelian, complex and environmental diseases. To assess the concept of modularity of human diseases, we performed a systematic study of the emergent properties of human gene-disease networks by means of network topology and functional annotation analysis. The results indicate a highly shared genetic origin of human diseases and show that for most diseases, including mendelian, complex and environmental diseases, functional modules exist. Moreover, a core set of biological pathways is found to be associated with most human diseases. We obtained similar results when studying clusters of diseases, suggesting that related diseases might arise due to dysfunction of common biological processes in the cell. CONCLUSIONS: For the first time, we include mendelian, complex and environmental diseases in an integrated gene-disease association database and show that the concept of modularity applies to all of them. We furthermore provide a functional analysis of disease-related modules that offers important new biological insights, which might not be discovered when considering each of the gene-disease association repositories independently. Hence, we present a suitable framework for the study of how genetic and environmental factors, such as drugs, contribute to diseases. AVAILABILITY: The gene-disease networks used in this study and part of the analysis are available at http://ibi.imim.es/DisGeNET/DisGeNETweb.html#Download
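
One common way to carry out the kind of network-topology analysis described above is to build a bipartite gene-disease graph and project it onto the disease side, so that two diseases are linked when they share at least one associated gene. The sketch below illustrates this; the toy association list is made up for illustration and is not DisGeNET data, and the specific metrics used in the paper are not reproduced here.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Toy (gene, disease) association pairs; placeholders, not DisGeNET content.
associations = [
    ("GENE1", "disease_A"), ("GENE1", "disease_B"),
    ("GENE2", "disease_B"), ("GENE2", "disease_C"),
    ("GENE3", "disease_C"), ("GENE4", "disease_D"),
]

genes = {g for g, _ in associations}
diseases = {d for _, d in associations}

G = nx.Graph()
G.add_nodes_from(genes, bipartite="gene")
G.add_nodes_from(diseases, bipartite="disease")
G.add_edges_from(associations)

# Disease-disease projection: edge weight = number of shared genes.
disease_net = bipartite.weighted_projected_graph(G, diseases)
print(list(disease_net.edges(data=True)))
print(nx.average_clustering(disease_net))
```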

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: HIV-infected individuals have an increased risk of myocardial infarction. Antiretroviral therapy (ART) is regarded as a major determinant of dyslipidemia in HIV-infected individuals. Previous genetic studies have been limited by the validity of the single-nucleotide polymorphisms (SNPs) interrogated and by cross-sectional design. Recent genome-wide association studies have reliably associated common SNPs to dyslipidemia in the general population. METHODS AND RESULTS: We validated the contribution of 42 SNPs (33 identified in genome-wide association studies and 9 previously reported SNPs not included in genome-wide association study chips) and of longitudinally measured key nongenetic variables (ART, underlying conditions, sex, age, ethnicity, and HIV disease parameters) to dyslipidemia in 745 HIV-infected study participants (n=34 565 lipid measurements; median follow-up, 7.6 years). The relative impact of SNPs and ART to lipid variation in the study population and their cumulative influence on sustained dyslipidemia at the level of the individual were calculated. SNPs were associated with lipid changes consistent with genome-wide association study estimates. SNPs explained up to 7.6% (non-high-density lipoprotein cholesterol), 6.2% (high-density lipoprotein cholesterol), and 6.8% (triglycerides) of lipid variation; ART explained 3.9% (non-high-density lipoprotein cholesterol), 1.5% (high-density lipoprotein cholesterol), and 6.2% (triglycerides). An individual with the most dyslipidemic antiretroviral and genetic background had an approximately 3- to 5-fold increased risk of sustained dyslipidemia compared with an individual with the least dyslipidemic therapy and genetic background. CONCLUSIONS: In the HIV-infected population treated with ART, the weight of the contribution of common SNPs and ART to dyslipidemia was similar. When selecting an ART regimen, genetic information should be considered in addition to the dyslipidemic effects of ART agents.

Relevance:

30.00%

Publisher:

Abstract:

Both Bayesian networks and probabilistic evaluation are gaining increasingly widespread use in many professional fields, including forensic science. Nevertheless, they constitute subtle topics with definitional details that require careful study. While many sophisticated developments of probabilistic approaches to the evaluation of forensic findings may readily be found in the published literature, there remains a gap with respect to writings that focus on foundational aspects and on how these may be acquired by interested scientists new to these topics. This paper takes this as a starting point to report on the learning of Bayesian networks for likelihood ratio-based probabilistic inference procedures in a class of master's students in forensic science. The presentation uses an example that relies on a casework scenario drawn from the published literature, involving a questioned signature. A complicating aspect of that case study, proposed to students in a teaching scenario, is the need to consider multiple competing propositions, a setting that may not readily be approached within a likelihood ratio-based framework without drawing attention to some additional technical details. Using generic Bayesian network fragments from the existing literature on the topic, course participants were able to correctly track the probabilistic underpinnings of the proposed scenario, both in terms of likelihood ratios and of posterior probabilities. In addition, further study of the example allowed the students to derive an alternative Bayesian network structure with a computational output that is equivalent to existing probabilistic solutions. This practical experience underlines the potential of Bayesian networks to support and clarify foundational principles of probabilistic procedures for forensic evaluation.
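
The probabilistic machinery that the students tracked can be summarized, for more than two competing propositions, by the elementary relations sketched below: posteriors follow from priors and the probability of the findings under each proposition, and any pairwise likelihood ratio is a ratio of those conditional probabilities. The three propositions and all numerical values are hypothetical teaching values, not the figures from the published case.

```python
# Bayes' theorem over several competing propositions H_i given the findings E,
# plus a pairwise likelihood ratio. All numbers are hypothetical.

priors = {"H1_writer": 1 / 3, "H2_simulation": 1 / 3, "H3_other_writer": 1 / 3}
# P(E | H_i): probability of the observed signature features under each proposition.
likelihoods = {"H1_writer": 0.80, "H2_simulation": 0.10, "H3_other_writer": 0.02}

# Posterior probabilities P(H_i | E).
norm = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / norm for h in priors}

# Pairwise likelihood ratio, e.g. H1 versus H2.
lr_h1_vs_h2 = likelihoods["H1_writer"] / likelihoods["H2_simulation"]

print(posteriors)
print(lr_h1_vs_h2)
```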

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a customizable system used to develop a collaborative multi-user problem-solving game. It addresses the increasing demand for appealing informal learning experiences in museum-like settings. The system facilitates remote collaboration by allowing groups of learners to communicate through a videoconferencing system and by allowing them to simultaneously interact through a shared multi-touch interactive surface. A user study with 20 user groups indicates that the game facilitates collaboration between local and remote groups of learners. The videoconference and multi-touch surface acted as communication channels, attracted students’ interest, facilitated engagement, and promoted inter- and intra-group collaboration, favoring intra-group collaboration. Our findings suggest that augmenting videoconferencing systems with a shared multi-touch space offers new possibilities and scenarios for remote collaborative environments and collaborative learning.