912 results for "Static strength performance"


Relevance:

30.00%

Publisher:

Abstract:

Most moveable bridges use open grid steel decks because these are factory assembled, light-weight, and easy to install. Open grid steel decks, however, are not as skid resistant as solid decks. Costly maintenance, high noise levels, poor riding comfort and susceptibility to vibrations are among their other disadvantages. The major objective of this research was to develop alternative deck systems that weigh no more than 25 lb/ft², have a solid riding surface, are no more than 4–5 in. thick and are able to withstand the prescribed loading. Three deck systems were considered in this study: an ultra-high performance concrete (UHPC) deck, an aluminum deck and a UHPC-fiber reinforced polymer (FRP) tube deck. The UHPC deck was the first alternative system developed as part of this project. Because of its ultra-high strength, this type of concrete allows thinner sections, which helps satisfy the strict self-weight limit. A comprehensive experimental and analytical evaluation of the system was carried out to establish its suitability. Both single- and multi-unit specimens with one or two spans were tested under static and dynamic loading. Finite element models were developed to predict the deck behavior. The study led to the conclusion that the UHPC bridge deck is a feasible alternative to the open grid steel deck. The aluminum deck was the second alternative system studied in this project. A detailed experimental and analytical evaluation of the system was carried out. The experimental work included static and dynamic loading of the deck panels and connections; the analytical work included detailed finite element modeling. Based on these in-depth evaluations, it was concluded that the aluminum deck is a suitable alternative to open grid steel decks and is ready for implementation. The UHPC-FRP tube deck was the third system developed in this research. Prestressed hollow core decks are commonly used, but the proposed type of steel-free deck is quite novel.
Preliminary experimental evaluations of two simple-span specimens, one with a uniform section and the other with a tapered section, were carried out. The system showed good promise as a replacement for conventional open grid decks. Additional work, however, is needed before the system can be recommended for field application.

Relevance:

30.00%

Publisher:

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may have difficulty perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, the wavefront aberration, the Point Spread Function and the Modulation Transfer Function. The ocular aberration of the computer user was first measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated based on the resized aberration, with the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use.
The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
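The core of the precompensation idea, inverse-filtering an image with the eye's point spread function so that the subsequent ocular blur approximately cancels, can be sketched as follows. This is a minimal illustration in which a Gaussian PSF stands in for a measured ocular aberration and display dynamic-range limits are ignored; the function names and the regularization constant are assumptions, not the dissertation's implementation.

```python
import numpy as np

def wiener_precompensate(image, psf, k=0.01):
    """Precompensate an image by Wiener inverse filtering with the eye's PSF.

    After the display is blurred by the ocular PSF, the perceived image
    should approximate the intended one.  k is a regularization constant
    standing in for the noise-to-signal ratio.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + k)           # Wiener inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

def retinal_image(displayed, psf):
    """Simulate retinal formation: convolve the displayed image with the PSF."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=displayed.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(displayed) * H))

# Gaussian blur as a stand-in for a measured ocular PSF.
n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

target = np.zeros((n, n))
target[24:40, 24:40] = 1.0                           # a bright square

blurred = retinal_image(target, psf)                 # no precompensation
perceived = retinal_image(wiener_precompensate(target, psf), psf)

err_blurred = np.mean((blurred - target) ** 2)
err_precomp = np.mean((perceived - target) ** 2)     # should be much smaller
```

The perceived image after precompensation is closer to the target than the plainly blurred one, which is the objective benefit the "artificial eye" experiment measures.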

Relevance:

30.00%

Publisher:

Abstract:

Investigation of the performance of engineering project organizations is critical for understanding and eliminating inefficiencies in today's dynamic global markets. The existing theoretical frameworks consider project organizations as monolithic systems and attribute the performance of project organizations to the characteristics of their constituents. However, project organizations consist of complex interdependent networks of agents, information, and resources whose interactions give rise to emergent properties that affect the overall performance of project organizations. Yet, our understanding of the emergent properties in project organizations and their impact on project performance is rather limited. This limitation is one of the major barriers to the creation of integrated theories of performance assessment in project organizations. The objective of this paper is to investigate the emergent properties that affect the ability of project organizations to cope with uncertainty. Based on the theories of complex systems, we propose and test a novel framework in which the likelihood of performance variations in project organizations can be investigated based on the environment of uncertainty (i.e., static complexity, dynamic complexity, and external sources of disruption) as well as the emergent properties (i.e., absorptive capacity, adaptive capacity, and restorative capacity) of project organizations. The existence and significance of the different dimensions of the environment of uncertainty and the emergent properties in the proposed framework are tested based on the analysis of information collected from interviews with senior project managers in the construction industry. The outcomes of this study provide a novel theoretical lens for proactive bottom-up investigation of performance in project organizations at the interface of emergent properties and uncertainty.

Relevance:

30.00%

Publisher:

Abstract:

As traffic congestion continues to worsen in large urban areas, solutions are urgently sought. However, transportation planning models, which estimate traffic volumes on transportation network links, are often unable to realistically consider travel time delays at intersections. Introducing signal controls into such models often results in significant and unstable changes in network attributes, which, in turn, lead to model instability. Ignoring the effect of delays at intersections makes the model output inaccurate and unable to predict travel time. To represent traffic conditions in a network more accurately, planning models should be capable of arriving at a network solution based on travel costs that are consistent with the intersection delays due to signal controls. This research attempts to achieve this goal by optimizing signal controls and estimating intersection delays accordingly, which are then used in traffic assignment. Simultaneous optimization of traffic routing and signal controls has not been accomplished in real-world applications of traffic assignment. To this end, a delay model dealing with five major types of intersections has been developed using artificial neural networks (ANNs). An ANN architecture consists of interconnected artificial neurons; such architectures may be used either to gain an understanding of biological neural networks or to solve artificial intelligence problems without necessarily creating a model of a real biological system. The ANN delay model has been trained using extensive simulations based on TRANSYT-7F signal optimizations. The delay estimates by the ANN delay model have percentage root-mean-squared errors (%RMSE) of less than 25.6%, which is satisfactory for planning purposes. Larger prediction errors are typically associated with severely oversaturated conditions.
A combined system has also been developed that includes the artificial neural network (ANN) delay estimating model and a user-equilibrium (UE) traffic assignment model. The combined system employs the Frank-Wolfe method to achieve a convergent solution. Because the ANN delay model provides no derivatives of the delay function, a Mesh Adaptive Direct Search (MADS) method is applied to assist in and expedite the iterative process of the Frank-Wolfe method. The performance of the combined system confirms that the convergence of the solution is achieved, although the global optimum may not be guaranteed.
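The Frank-Wolfe iteration at the heart of the combined system can be illustrated on a toy network. The sketch below assigns one origin-destination demand to two parallel links; BPR-style analytic delay functions stand in for the ANN delay model (which, as noted, supplies no derivatives), and a predetermined step size stands in for the line search that MADS assists with. All numbers are hypothetical.

```python
import numpy as np

def frank_wolfe_two_links(demand=10.0, iters=200):
    """User-equilibrium assignment on two parallel links via Frank-Wolfe.

    At equilibrium, both used links carry flows that equalize their
    travel times (Wardrop's first principle).
    """
    # BPR-style link delay: t(v) = t0 * (1 + 0.15 * (v / capacity)**4)
    t0 = np.array([1.0, 2.0])
    cap = np.array([5.0, 8.0])

    def tt(v):
        return t0 * (1 + 0.15 * (v / cap) ** 4)

    v = np.array([demand, 0.0])            # all-or-nothing starting flows
    for k in range(iters):
        # Auxiliary flows: load all demand onto the currently fastest link.
        y = np.zeros(2)
        y[np.argmin(tt(v))] = demand
        step = 2.0 / (k + 2.0)             # predetermined step size
        v = v + step * (y - v)             # move toward the auxiliary point
    return v, tt(v)

v, t = frank_wolfe_two_links()
# At convergence both links have (nearly) equal travel times.
```

With the ANN in place of the analytic delay function, each `tt(v)` evaluation becomes a forward pass, which is why a derivative-free step-size search such as MADS is needed in the real system.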

Relevance:

30.00%

Publisher:

Abstract:

Disk drives are the bottleneck in the processing of the large amounts of data used in almost all common applications. File systems attempt to reduce this bottleneck by storing data sequentially on the disk drives, thereby reducing access latencies. Although this strategy is useful when data is retrieved sequentially, the access patterns in real-world workloads are not necessarily sequential, and this mismatch results in storage I/O performance degradation. This thesis demonstrates that one way to improve storage performance is to reorganize data on disk drives in the same way in which it is mostly accessed. We identify two classes of accesses: static, where access patterns do not change over the lifetime of the data, and dynamic, where access patterns frequently change over short durations of time, and we propose, implement and evaluate layout strategies for each of these. Our strategies are implemented in such a way that they can be seamlessly integrated into or removed from the system as desired. We evaluate our layout strategies for static policies using tree-structured XML data, where accesses to the storage device are mostly of two kinds: parent-to-child or child-to-sibling. Our results show that for a specific class of deep-focused queries, the existing file system layout policy performs better, by 5-54X. For the non-deep-focused queries, our native layout mechanism shows an improvement of 3-127X. To improve the performance of dynamic access patterns, we implement a self-optimizing storage system that rearranges popular blocks on a dedicated partition based on the observed workload characteristics. Our evaluation shows an improvement of over 80% in disk busy times over a range of workloads. These results show that applying knowledge of data access patterns to allocation decisions can substantially improve I/O performance.
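The dynamic-layout idea, rearranging popular blocks into a contiguous region so that the head travels less, can be sketched with a toy seek-cost model. The trace, block numbers, and linear-distance cost metric below are illustrative assumptions, not the thesis's implementation.

```python
from collections import Counter

def reorganize(accesses, hot_region_start=0):
    """Remap the most frequently accessed blocks to a contiguous hot region.

    Returns a mapping block -> new position, hottest blocks first.
    A stand-in for the self-optimizing layout described in the text.
    """
    freq = Counter(accesses)
    return {blk: hot_region_start + pos
            for pos, (blk, _) in enumerate(freq.most_common())}

def seek_cost(accesses, mapping=None):
    """Total head movement, modelled as the distance between consecutive accesses."""
    pos = [mapping[b] if mapping else b for b in accesses]
    return sum(abs(a - b) for a, b in zip(pos, pos[1:]))

# Popular blocks scattered far apart on the disk.
trace = [0, 5000, 9000, 0, 5000, 9000, 0, 5000]
m = reorganize(trace)
cost_before = seek_cost(trace)       # long seeks between distant blocks
cost_after = seek_cost(trace, m)     # hot blocks now adjacent
```

Counting frequencies and remapping is the batch analogue of the online monitoring the thesis describes; the real system must also migrate the data and keep the mapping consistent.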

Relevance:

30.00%

Publisher:

Abstract:

As part of a multi-university research program funded by NSF, a comprehensive experimental and analytical study of the seismic behavior of hybrid fiber reinforced polymer (FRP)-concrete columns is presented in this dissertation. The experimental investigation includes cyclic tests of six large-scale concrete-filled FRP tube (CFFT) and RC columns followed by monotonic flexural tests, a nondestructive evaluation of damage using ultrasonic pulse velocity between the two test sets, and tension tests of sixty-five FRP coupons. Two analytical models using ANSYS and OpenSees were developed and favorably verified against both the cyclic and monotonic flexural tests, and the results of the two methods were compared. A parametric study was also carried out to investigate the effect of three main parameters on primary seismic response measures. The responses of typical CFFT columns to three representative earthquake records were also investigated. The study shows that only specimens with carbon FRP cracked, whereas specimens with glass or hybrid FRP did not show any visible cracks throughout the cyclic tests. Further monotonic flexural tests showed that the carbon specimens experienced both flexural cracking in tension and crumpling in compression. The glass and hybrid specimens, on the other hand, all showed local buckling of the FRP tubes. Compared with conventional RC columns, CFFT columns possess higher flexural strength and energy dissipation with an extended plastic hinge region. Among all CFFT columns, the hybrid lay-up demonstrated the highest flexural strength and initial stiffness, mainly because of its high reinforcement index and FRP/concrete stiffness ratio, respectively. Moreover, at the same drift ratio, the hybrid lay-up was also the best in terms of energy dissipation. Specimens with glass-fiber tubes, on the other hand, exhibited the highest ductility due to the better flexibility of glass FRP composites.
Furthermore, the ductility of CFFTs showed a strong correlation with the rupture strain of the FRP. The parametric study further showed that different FRP architectures and rebar types may lead to different failure modes for CFFT columns. Transient analysis under strong ground motions showed that the column with an off-axis nonlinear filament-wound glass FRP tube exhibited seismic performance superior to all other CFFTs. Moreover, higher FRP reinforcement ratios may lead to a brittle system failure, while a well-engineered FRP reinforcement configuration may significantly enhance the seismic performance of CFFT columns.

Relevance:

30.00%

Publisher:

Abstract:

The maintenance and evolution of software systems has become a highly critical task over recent years due to the diversity and high demand of features, devices and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite for preventing the deterioration of their quality during their evolution. This dissertation proposes an automated approach for analyzing variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way to reveal potential sources (commits and issues) of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation: choosing the scenarios and preparing the target releases; (ii) dynamic analysis: determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis: processing and comparing the dynamic analysis results for different releases; and (iv) repository mining: identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty.
That study analyzed 21 releases (seven of each system), totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online questionnaire. Finally, in the last study, a performance regression model was developed to indicate the properties of commits that are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
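The variation-analysis phase, comparing the dynamic-analysis results of two releases and flagging scenarios whose execution time changed, can be sketched as below. The scenario names, timings, and 10% threshold are hypothetical; the actual framework works over profiled method-level timings and uses statistical comparison across many runs.

```python
from statistics import mean

def variation_analysis(old_times, new_times, threshold=0.10):
    """Flag scenarios whose mean execution time changed by more than `threshold`.

    Keys are scenario names; values are lists of measured execution
    times (seconds) collected by the dynamic-analysis phase per release.
    """
    report = {}
    for scenario in old_times:
        old = mean(old_times[scenario])
        new = mean(new_times[scenario])
        change = (new - old) / old            # relative variation
        if abs(change) > threshold:
            report[scenario] = "degraded" if change > 0 else "optimized"
    return report

old = {"login": [0.20, 0.21, 0.19], "search": [1.00, 1.05, 0.95]}
new = {"login": [0.20, 0.22, 0.20], "search": [1.40, 1.38, 1.42]}
report = variation_analysis(old, new)
# "search" got ~40% slower across releases; "login" is within the noise.
```

The repository-mining phase would then take the flagged scenarios and trace the degraded code elements back to the commits and issues touching them between the two releases.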


Relevance:

30.00%

Publisher:

Abstract:

This thesis presents the synthesis, characterization and study of the associative behaviour in aqueous media of new responsive graft copolymers, based on carboxymethylcellulose (CMC) as the water-soluble backbone and Jeffamine® M-2070 and Jeffamine® M-600 (commercial polyetheramines) as the thermoresponsive grafts with high cloud-point temperatures in water. The synthesis was performed in aqueous medium, using 1-ethyl-3-(3-(dimethylamino)propyl)carbodiimide hydrochloride and N-hydroxysuccinimide as activators of the reaction between carboxylate groups from carboxymethylcellulose and amino groups from the polyetheramines. The grafting reaction was confirmed by infrared spectroscopy and the grafting percentage by 1H NMR. The molar mass of the polyetheramines was determined by 1H NMR, whereas the molar masses of CMC and the graft copolymers were determined by static light scattering. The salt effect on the association behaviour of the copolymers was evaluated in different aqueous media (Milli-Q water, 0.5 M NaCl, 0.5 M K2CO3 and synthetic sea water), at different temperatures, through UV-vis, rheology and dynamic light scattering. None of the copolymer solutions, at 5 g/L, turned turbid in Milli-Q water when heated from 25 to 95 °C, probably because of the increase in hydrophilicity promoted by the CMC backbone. However, they became turbid in the presence of salts, due to the salting-out effect; the lowest cloud point was observed in 0.5 M K2CO3, which was attributed to the highest ionic strength in water, combined with the ability of CO3²⁻ to decrease polymer-solvent interactions. The hydrodynamic radius and apparent viscosity of the copolymers in aqueous medium changed as a function of the salts dissolved in the medium, the temperature and the copolymer composition. Thermothickening behaviour was observed in 0.5 M K2CO3 when the temperature was raised from 25 to 60 °C.
This behaviour can be attributed to intermolecular association into a physical network, since the temperature is above the cloud point of the copolymers in this solvent.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to fundamentally characterize the laboratory performance of traditional hot mix asphalt (HMA) mixtures incorporating high RAP content and waste tire crumb rubber (CR) through their fundamental engineering properties. The nominal maximum aggregate size chosen for this research was 12 mm (considering the limitation on aggregate size for a surface layer), and both coarse and fine aggregates commonly used in Italy were examined and analyzed in this study. RAP plays an important role in reducing production costs and making pavements more environmentally sustainable by replacing virgin materials in HMA. In particular, this study aimed to use 30% RAP content (25% fine-aggregate RAP and 5% coarse-aggregate RAP) and 1% CR additive by total weight of aggregates in the mix design. The aggregates, RAP and CR were blended with different amounts of unmodified binder through a dry process. The main purposes of this study were to investigate the feasibility of using RAP and CR in dense-graded HMA and to compare the performance of a rejuvenator in RAP with that of CR. In addition, based on the engineering analyses carried out during the study, we were able to compare the fundamental Indirect Tensile Strength (ITS) values of the dense-graded HMA as well as the mechanical characteristics in terms of Indirect Tensile Stiffness Modulus (ITSM). To obtain an extended comparable data set, four groups of mixtures were investigated experimentally: a conventional mixture with only virgin aggregates (DV), a mixture with RAP (DR), a mixture with RAP and rejuvenator (DRR), and a mixture with RAP, rejuvenator and CR (DRRCr). Finally, the test results indicated that the mixtures with RAP and CR had higher stiffness and lower thermal sensitivity, while the mixture with only virgin aggregates had very low values in comparison.
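The ITS values compared across the four mixtures come from the standard splitting-tension formula ITS = 2P / (π · D · t), where P is the peak load, D the specimen diameter and t its thickness. A minimal sketch, with hypothetical specimen numbers (not taken from the study):

```python
import math

def indirect_tensile_strength(peak_load_n, diameter_mm, thickness_mm):
    """Indirect Tensile Strength of a cylindrical specimen, in MPa.

    Standard splitting-test formula: ITS = 2P / (pi * D * t).
    With load in N and dimensions in mm, the result is N/mm^2 = MPa.
    """
    return 2 * peak_load_n / (math.pi * diameter_mm * thickness_mm)

# Hypothetical specimen: 100 mm diameter, 60 mm thick, 25 kN peak load.
its = indirect_tensile_strength(25_000, 100, 60)   # ≈ 2.65 MPa
```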

Relevance:

30.00%

Publisher:

Abstract:

Purpose: This paper aims to explore the role of internal and external knowledge-based linkages across the supply chain in achieving better operational performance. It investigates how knowledge is accumulated, shared, and applied to create organization-specific knowledge resources that increase and sustain the organization's competitive advantage. Design/methodology/approach: This paper uses a single case study with multiple, embedded units of analysis, and social network analysis (SNA), to demonstrate the impact of internal and external knowledge-based linkages across multiple tiers in the supply chain on organizational operational performance. The focal company of the case study is an Italian manufacturer supplying rubber components to European automotive enterprises. Findings: With the aid of SNA, the internal knowledge-based linkages can be mapped and visualized. We found that the most central nodes, those having the most connections with other nodes in the linkages, are the most crucial members in terms of knowledge exploration and exploitation within the organization. We also revealed that the effective management of external knowledge-based linkages, such as those with the buyer company, competitors, universities, suppliers, and subcontractors, can help improve operational performance. Research limitations/implications: First, our hypothesis was tested on a single case. The analysis of multiple case studies using SNA would provide a deeper understanding of the relationship between the knowledge-based linkages at all levels of the supply chain and the integration of knowledge. Second, only the static nature of knowledge flows was studied in this research. Future research could also consider ongoing monitoring of dynamic linkages and the dynamic characteristics of knowledge flows.
Originality/value: To the best of our knowledge, the phrase 'knowledge-based linkages' has not been used in the literature, and there is a lack of investigation into the relationship between the management of internal and external knowledge-based linkages and operational performance. To bridge this knowledge gap, this paper shows the importance of understanding the composition and characteristics of knowledge-based linkages and their knowledge nodes. In addition, it shows that effective management of knowledge-based linkages leads to the creation of new knowledge and improves organizations' operational performance.
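The SNA notion behind the findings, that the nodes with the most connections are the most crucial for knowledge exploration and exploitation, corresponds to degree centrality. A minimal sketch on a hypothetical knowledge network (the roles and ties below are invented for illustration, not drawn from the case study):

```python
def degree_centrality(edges):
    """Degree centrality of each node in an undirected knowledge network.

    Centrality is the node's degree divided by the maximum possible
    degree (number of other nodes), so values lie in [0, 1].
    """
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    norm = len(nodes) - 1          # maximum possible degree
    return {n: d / norm for n, d in deg.items()}

# Hypothetical knowledge-sharing ties inside a supply-chain organization.
ties = [("quality_mgr", "buyer"), ("quality_mgr", "supplier_A"),
        ("quality_mgr", "supplier_B"), ("buyer", "supplier_A"),
        ("r_and_d", "quality_mgr")]
central = degree_centrality(ties)
most_central = max(central, key=central.get)   # -> "quality_mgr"
```

In a real SNA study the ties would come from interviews or communication logs, and richer measures (betweenness, closeness) would complement the simple degree count used here.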

Relevance:

30.00%

Publisher:

Abstract:

Backscatter communication is an emerging wireless technology that has recently gained increasing attention from both academic and industry circles. The key innovation of the technology is the ability of ultra-low power devices to utilize nearby existing radio signals to communicate. As there is no need to generate their own energetic radio signal, the devices can benefit from a simple design, are very inexpensive and are extremely energy efficient compared with traditional wireless communication. These benefits have made backscatter communication a desirable candidate for distributed wireless sensor network applications with energy constraints.

The backscatter channel presents a unique set of challenges. Unlike a conventional one-way channel (in which the information source is also the energy source), the backscatter channel experiences strong self-interference and spread-Doppler clutter that mask the information-bearing (modulated) signal scattered from the device. Both of these sources of interference arise from the scattering of the transmitted signal off objects, both stationary and moving, in the environment. Additionally, the measurement of the location of the backscatter device is negatively affected by both the clutter and the modulation of the signal return.

This work proposes a channel coding framework for the backscatter channel consisting of a bi-static transmitter/receiver pair and a quasi-cooperative transponder. It proposes to use run-length limited coding to mitigate the background self-interference and spread-Doppler clutter with only a small decrease in communication rate. The proposed method applies to both binary phase-shift keying (BPSK) and quadrature-amplitude modulation (QAM) schemes and provides an increase in rate by up to a factor of two compared with previous methods.
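As one concrete instance of run-length limited coding, the sketch below uses Modified Frequency Modulation (MFM), a classic (1,3)-RLL code, chosen here only to illustrate the run-length constraint; the code design actually used for the backscatter channel may differ. Bounding the 0-runs keeps signal transitions frequent, which roughly pushes the modulated energy away from the low-frequency region occupied by the static self-interference and clutter.

```python
def mfm_encode(bits):
    """Modified Frequency Modulation (MFM), a classic (1,3) run-length
    limited code: each data bit becomes a clock bit plus the data bit,
    where the clock bit is 1 only between two consecutive 0 data bits."""
    out, prev = [], 0
    for b in bits:
        out.append(1 if (prev == 0 and b == 0) else 0)  # clock bit
        out.append(b)                                   # data bit
        prev = b
    return out

def zero_runs_between_ones(seq):
    """Lengths of the 0-runs separating 1s (the RLL-constrained quantity)."""
    runs, count, seen_one = [], 0, False
    for b in seq:
        if b == 1:
            if seen_one:
                runs.append(count)
            count, seen_one = 0, True
        else:
            count += 1
    return runs

coded = mfm_encode([1, 0, 1, 1, 0, 0, 1, 0])
runs = zero_runs_between_ones(coded)
# Every 0-run between 1s has length between 1 and 3: the (d, k) = (1, 3)
# constraint. The original data bits sit at the odd positions of `coded`.
```

The rate cost is visible directly: two channel bits per data bit here, which is the kind of trade-off the proposed framework improves on by up to a factor of two.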

Additionally, this work analyzes the use of frequency modulation and bi-phase waveform coding for the transmitted (interrogating) waveform for high-precision range estimation of the transponder location. Compared to previous methods, optimal lower range sidelobes are achieved. Moreover, since both the transmitted (interrogating) waveform coding and the transponder communication coding result in instantaneous phase modulation of the signal, cross-interference between the localization and communication tasks exists. A phase-discrimination algorithm is proposed to make it possible to separate the waveform coding from the communication coding upon reception, and to achieve localization with an increase in signal energy of up to 3 dB compared with previously reported results.

The joint communication-localization framework also enables a low-complexity receiver design because the same radio is used both for localization and communication.

Simulations comparing the performance of different codes corroborate the theoretical results and offer possible trade-off between information rate and clutter mitigation as well as a trade-off between choice of waveform-channel coding pairs. Experimental results from a brass-board microwave system in an indoor environment are also presented and discussed.

Relevance:

30.00%

Publisher:

Abstract:

Research into the dynamicity of job performance criteria has found evidence suggesting the presence of rank-order changes to job performance scores across time as well as intraindividual trajectories in job performance scores across time. These findings have influenced a large body of research into (a) the dynamicity of the validities of individual differences predictors of job performance and (b) the relationship between individual differences predictors of job performance and intraindividual trajectories of job performance. In the present dissertation, I addressed these issues within the context of the Five Factor Model of personality. The Five Factor Model is arranged hierarchically, with five broad higher-order factors subsuming a number of more narrowly tailored personality facets. Research has debated the relative merits of broad versus narrow traits for predicting job performance, but that entire body of research has addressed the issue from a static perspective: by examining the relative magnitude of the validities of global factors versus their facets. While research along these lines has been enlightening, theoretical perspectives suggest that the validities of global factors versus their facets may differ in their stability across time. Thus, research is needed not only to compare the relative magnitude of the validities of global factors versus their facets at a single point in time, but also to compare the relative stability of those validities across time. Also necessary to advance cumulative knowledge concerning intraindividual performance trajectories is research into broad versus narrow traits for predicting such trajectories. In the present dissertation, I addressed these issues using a four-year longitudinal design. The results indicated that the validities of global conscientiousness were stable across time, while the validities of conscientiousness facets were more likely to fluctuate.
However, the validities of emotional stability and extraversion facets were no more likely to fluctuate across time than those of the factors. Finally, while some personality factors and facets predicted performance intercepts (i.e., performance at the first measurement occasion), my results failed to indicate a significant effect of any personality variable on performance growth. Implications for research and practice are discussed.


As an alternative to transverse spiral or hoop steel reinforcement, fiber reinforced polymers (FRPs) were introduced to the construction industry in the 1980s. The concept of the concrete-filled FRP tube (CFFT) has attracted great interest among researchers in the last decade. An FRP tube can act as a pour form, protective jacket, and shear and flexural reinforcement for concrete. However, the seismic performance of CFFT bridge substructures has not yet been fully investigated. Experimental work in this study included four two-column bent tests, several component tests, and coupon tests. Four 1/6-scale bridge pier frames, consisting of a control reinforced concrete frame (RCF), a glass FRP-concrete frame (GFF), a carbon FRP-concrete frame (CFF), and a hybrid glass/carbon FRP-concrete frame (HFF), were tested under reverse cyclic lateral loading with constant axial loads. Specimen GFF showed no sign of cracking at a drift ratio as high as 15% with considerable loading capacity, whereas Specimen CFF showed the lowest ductility, with a load capacity similar to that of Specimen GFF. FRP-concrete columns and pier cap beams were then cut from the pier frame specimens and tested in three-point flexure under monotonic loading with no axial load. These tests indicated that both the bonding between FRP and concrete and the yielding of steel affect the flexural strength and ductility of the components. The coupon tests were carried out to establish the tensile strength and elastic modulus of each FRP tube, and of the FRP mold for the pier cap beam, in the two principal directions of loading. A nonlinear analytical model was developed to predict the load-deflection responses of the pier frames. The model was validated against the test results. Subsequently, a parametric study was conducted with variables such as the frame height-to-span ratio, steel reinforcement ratio, FRP tube thickness, axial force, and compressive strength of concrete.
A typical bridge was also simulated under three different ground acceleration records and damping ratios. Based on the analytical damage index, the RCF bridge was the most severely damaged, whereas the GFF bridge suffered only minor, repairable damage. Damping ratio was shown to have a pronounced effect on FRP-concrete bridges, just as it does on conventional bridges. This research was part of a multi-university project funded by the National Science Foundation (NSF) Network for Earthquake Engineering Simulation Research (NEESR) program.
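The damage-index comparison can be made concrete with a standard formulation. The sketch below uses a Park-Ang style index, a common choice for quantifying seismic damage as a combination of peak displacement demand and dissipated hysteretic energy; the specific index used in the study, and every number below, are assumptions for illustration only.

```python
def park_ang_damage_index(max_disp, ult_disp, hysteretic_energy,
                          yield_force, beta=0.05):
    """Park-Ang style index: DI = d_max/d_ult + beta * E_h / (F_y * d_ult).

    DI near or above 1.0 is commonly read as severe damage or collapse;
    small values indicate minor, repairable damage.
    """
    return (max_disp / ult_disp
            + beta * hysteretic_energy / (yield_force * ult_disp))

# Hypothetical response quantities for two frames under the same record
# (displacements in m, energy in kN*m, force in kN):
di_rcf = park_ang_damage_index(0.08, 0.10, 120.0, 150.0)  # heavily damaged
di_gff = park_ang_damage_index(0.02, 0.10, 30.0, 150.0)   # minor damage
print(round(di_rcf, 2), round(di_gff, 2))  # 1.2 vs 0.3
```

With these assumed inputs the RCF-like frame lands at DI = 1.2 (severe) and the GFF-like frame at DI = 0.3 (repairable), the qualitative ordering the simulation study reports.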


Modern software applications are becoming increasingly dependent on database management systems (DBMSs), which software developers usually treat as black boxes. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches in use today. With ORM, objects in object-oriented languages are mapped to records in the database, and object manipulations are automatically translated into SQL queries. As a result of this conceptual abstraction, developers do not need deep knowledge of databases; however, all too often the abstraction leads to inefficient and incorrect database access code. This thesis therefore proposes a series of approaches to improve the performance of database-centric software applications that are implemented using ORM. Our approaches focus on troubleshooting and detecting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems by severity. We first conduct an empirical study on the maintenance of ORM code in both open source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and that there is a need for tools that can help improve or tune the performance of ORM-based applications. We thus propose approaches along two dimensions: 1) helping developers write more performant ORM code; and 2) helping developers tune ORM configurations. To provide tooling support, we first propose static analysis approaches that detect performance anti-patterns in the source code and automatically rank the detected anti-pattern instances by their performance impact. Our study finds that resolving the detected anti-patterns improves application performance by 34% on average.
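As a rough illustration of what static anti-pattern detection involves, the toy scanner below walks a Python AST and flags ORM-style query calls issued inside loops (the classic one-query-per-entity pattern that a batched query could replace). The thesis's detector is far more sophisticated and ranks findings by measured impact; the query method names here are assumptions, not the tool's actual rules.

```python
import ast

# Hypothetical set of ORM method names treated as database queries.
QUERY_METHODS = {"get", "filter", "find", "query"}

def find_queries_in_loops(source: str):
    """Return (line, method) pairs for query calls inside loops.

    A toy single-pass check: nested loops would double-count hits,
    and no attempt is made to rank findings by severity.
    """
    tree = ast.parse(source)
    hits = []
    for loop in ast.walk(tree):
        if isinstance(loop, (ast.For, ast.While)):
            for node in ast.walk(loop):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Attribute)
                        and node.func.attr in QUERY_METHODS):
                    hits.append((node.lineno, node.func.attr))
    return hits

snippet = """
for order_id in order_ids:
    order = session.query(Order).get(order_id)  # one query per id
    process(order)
"""
print(find_queries_in_loops(snippet))
```

Running the scanner on the snippet flags both the `query` and `get` calls on the line inside the loop, the kind of site a developer would then rewrite as a single batched fetch.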
We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27--138%. Through our case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
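The redundant-data-access idea can be sketched dynamically as well: record each query executed at runtime and flag exact repeats, whose results could have been fetched once and cached. The wrapper below is only illustrative; the thesis instruments real ORM frameworks and measures the actual performance impact of each detected instance.

```python
from collections import Counter

class QueryLog:
    """Tallies executed queries so exact repeats can be flagged."""

    def __init__(self):
        self.counts = Counter()

    def execute(self, sql, params=()):
        # Key on (sql, params): the same statement with different
        # parameters is not redundant, but an exact repeat is.
        self.counts[(sql, params)] += 1
        # ... the actual database call would happen here ...

    def redundant(self, threshold=2):
        return {key: n for key, n in self.counts.items() if n >= threshold}

log = QueryLog()
for user_id in [1, 2, 3]:
    # The settings lookup never changes inside the loop: redundant.
    log.execute("SELECT value FROM settings WHERE key = 'theme'")
    log.execute("SELECT * FROM users WHERE id = ?", (user_id,))

print(log.redundant())
```

Only the unparameterized settings lookup is flagged (executed three times with identical arguments); the per-user fetches differ in their parameters and pass the check, which is why keying on the full (sql, params) pair matters.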