829 results for Two Approaches


Relevance:

60.00%

Publisher:

Abstract:

Advanced age may become a limiting factor for the maintenance of rhythms in organisms, reducing the capacity for generation and synchronization of biological rhythms. In this study, the influence of aging on the expression of endogenous periodicity and on photic and social synchronization of the circadian activity rhythm (CAR) was evaluated in a diurnal primate, the marmoset (Callithrix jacchus). The study had two approaches: a longitudinal design, performed with one male marmoset in two different phases, as an adult (3 y.o.) and when aged (9 y.o.) (study 1), and a transversal approach, with 6 aged (♂: 9.7 ± 2.0 y.o.) and 11 adult animals (♂: 4.2 ± 0.8 y.o.) (study 2). The evaluation of photic synchronization involved two LD conditions (natural and artificial illumination). In study 1, the animal was subjected to the following stages: LD (12:12, ~350:~2 lx), LL (~350 lx) and LD resynchronization. In study 2, the animals were initially evaluated in natural LD and then in the same sequence of stages as in study 1. During the LL stage of study 2, the vocalizations of conspecifics kept in natural LD outside the colony were considered a temporal cue for social synchronization. Activity was recorded automatically at five-minute intervals, by infrared sensor in study 1 and by actimeters in study 2. In general, under LD conditions the aged animals showed a more fragmented activity pattern (higher IV, lower H and higher PSD; ANOVA, p < 0.05), lower activity levels (ANOVA, p < 0.05) and a shorter active phase (ANOVA, p < 0.05) than the adults. In natural LD, the aged showed a pronounced phase delay in the onset and offset of the active phase (ANOVA, p < 0.05), while in the adults the active phase was better adjusted to the light phase. Under artificial LD, the aged showed a phase advance and better adjustment of activity onset and offset relative to the LD cycle (ANOVA, p < 0.05).
In LL, there was a positive correlation between age and the endogenous period (τ) in the first 20 days (Spearman correlation, p < 0.05), with a prolonged τ maintained in two aged animals. Under this condition, most adults showed a free-running circadian activity rhythm with τ < 24 h for the first 30 days, and later relative coordination mediated by auditory cues. In study 2, cross-correlation between the activity profiles of animals in LL and control animals kept under natural LD showed weaker social synchronization in the aged. Upon resubmission to LD, the resynchronization rate was slower in the aged (t-test, p < 0.05), and only one aged animal lost the capacity to resynchronize. Taken together, the data suggest that aging in marmosets may be related to: 1) lower amplitude and greater fragmentation of activity, accompanied by phase delay and a lengthened period, caused by changes in photic input and in the generation and behavioral expression of the CAR; 2) a lower capacity for photic synchronization of the circadian activity rhythm, which can become more robust under artificial lighting, possibly because of the higher light intensities at the beginning of the active phase produced by abrupt transitions between the light and dark phases; and 3) a reduced capacity for non-photic synchronization by auditory cues from conspecifics, possibly due to reduced sensory input and responsiveness of the circadian oscillators to auditory cues, which can make the aged marmoset more vulnerable, as these social cues may act as an important supporting factor for photic synchronization.
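The age-versus-period analysis above relies on Spearman rank correlation. A minimal stdlib sketch of that statistic is below; the age and τ values are made-up illustrative numbers, not the study's data.

```python
# Hedged sketch: Spearman rank correlation between age and free-running
# period (tau). All numeric data here are hypothetical.

def ranks(xs):
    """Return 1-based average ranks, handling ties by averaging."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

age = [3.5, 4.0, 4.5, 5.0, 9.0, 10.0, 11.5]       # years (hypothetical)
tau = [23.4, 23.5, 23.6, 23.7, 24.1, 24.2, 24.4]  # hours (hypothetical)
rho = spearman(age, tau)  # rho = 1.0 for this monotone toy data
```

A positive rho, as reported in the abstract, indicates that τ lengthens with age.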


A landfill represents a complex and dynamically evolving structure that can be stochastically perturbed by exogenous factors. Both thermodynamic (equilibrium) and time-varying (non-steady-state) properties of a landfill are affected by spatially heterogeneous and nonlinear subprocesses that combine with constraining initial and boundary conditions arising from the associated surroundings. While multiple attempts have been made to model landfill statistics by incorporating spatially dependent parameters on the one hand (data-based approach) and continuum dynamical mass-balance equations on the other (equation-based modelling), practically no attempt has been made to amalgamate these two approaches while also incorporating the inherent stochastic fluctuations affecting the process overall. In this article, we implement a minimalist scheme for modelling the time evolution of a realistic three-dimensional landfill through a reaction-diffusion approach, focusing on the coupled interactions of four key variables (solid, hydrolysed, acetogenic and methanogenic mass densities) that are themselves stochastically affected by fluctuations, coupled with diffusive relaxation of the individual densities in the ambient surroundings. Our results indicate that close to the linearly stable limit, the large-time steady-state properties, arising out of a series of complex coupled interactions between the stochastically driven variables, are scarcely affected by the biochemical growth-decay statistics. Our results clearly show that an equilibrium landfill structure is primarily determined by the solid and hydrolysed mass densities alone, rendering the other variables statistically "irrelevant" in this (large-time) asymptotic limit.
The other major implication of incorporating stochasticity in the landfill evolution dynamics is the greatly reduced production times of the plants, now approximately 20-30 years instead of the 50 years and above predicted by previous deterministic models. The predictions from this stochastic model are in conformity with available experimental observations.
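The stochastic dynamics described above can be sketched with an Euler-Maruyama step for two of the coupled densities (solid mass hydrolysing into hydrolysed mass), with additive noise standing in for the fluctuations. This is not the paper's full reaction-diffusion model: diffusion is dropped, and the rate constants and noise amplitude are illustrative assumptions.

```python
import math
import random

# Hedged sketch: Euler-Maruyama integration of
#   dS = -kh*S dt + sigma dW   (solid mass consumed by hydrolysis)
#   dH = (kh*S - ka*H) dt + sigma dW   (hydrolysed mass, consumed downstream)
# All parameter values are illustrative, not the paper's.

def simulate(S0=1.0, H0=0.0, kh=0.5, ka=0.3, sigma=0.02,
             dt=0.01, steps=2000, seed=1):
    rng = random.Random(seed)
    S, H = S0, H0
    for _ in range(steps):
        dWs = rng.gauss(0.0, math.sqrt(dt))
        dWh = rng.gauss(0.0, math.sqrt(dt))
        S += -kh * S * dt + sigma * dWs
        H += (kh * S - ka * H) * dt + sigma * dWh
        S = max(S, 0.0)  # densities stay non-negative
        H = max(H, 0.0)
    return S, H

S, H = simulate()  # after t = 20 rate units, most solid mass has degraded
```

Setting `sigma=0` recovers the deterministic decay `S(t) = exp(-kh*t)`, which is a convenient sanity check on the integrator.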


The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues and organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.

This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for radiotherapy assessment, and is accordingly divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key technical factors and proposes improvements to it; the second part explores the potential value of image heterogeneity analysis and the combination of multiple PK models for therapeutic response assessment, and develops several novel DCE-MRI data analysis methods.

I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm, built on the recently developed compressed sensing (CS) theory, was studied for DCE-MRI reconstruction. By utilizing a limited k-space acquisition with a shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and the simulated accelerated k-space acquisition was generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, from the undersampled and from the fully sampled data, respectively. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of the PK maps generated from the undersampled data against the PK maps generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
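The first undersampling strategy above, a radial multi-ray grid over Cartesian k-space, can be sketched as follows. The ray count and grid size are illustrative, and the thesis's special angular distribution is replaced here by uniformly spaced angles.

```python
import math

# Hedged sketch of a radial multi-ray k-space sampling mask on an n x n
# Cartesian grid. Parameters are illustrative assumptions.

def radial_mask(n=64, rays=16):
    """Return the set of (kx, ky) grid points hit by `rays` diameters."""
    mask = set()
    c = (n - 1) / 2.0  # k-space center
    for r in range(rays):
        theta = math.pi * r / rays        # diameters cover angles 0..pi
        for t in range(-n, n + 1):        # walk along the ray in half-pixel steps
            x = int(round(c + 0.5 * t * math.cos(theta)))
            y = int(round(c + 0.5 * t * math.sin(theta)))
            if 0 <= x < n and 0 <= y < n:
                mask.add((x, y))
    return mask

mask = radial_mask()
acceleration = 64 * 64 / len(mask)  # how much of the acquisition is skipped
```

Only the marked k-space points would be acquired; the CS reconstruction then fills in the missing samples under the TGV regularizer.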

Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise effects in the data, and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolution (<1 s) and clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolution, the calculation efficiency of the new method was superior to current methods by about two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that the new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
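The idea of turning Tofts-model fitting into a linear problem can be illustrated with the well-known integral form C_t(t) = Ktrans·∫C_p − kep·∫C_t, solved by ordinary least squares over two unknowns. This is a related linearization, not the thesis's exact derivative-based method, and the KZ filtering step is omitted; the arterial input function and parameter values are synthetic toys.

```python
import math

# Hedged sketch: linear least-squares fit of the integral-form Tofts model.
dt = 0.1
t = [i * dt for i in range(600)]
cp = [5.0 * x * math.exp(-x / 8.0) for x in t]  # toy arterial input function
Ktrans_true, kep_true = 0.25, 0.60              # toy reference parameters

def cumtrapz(y, dt):
    """Cumulative trapezoid-rule integral of y."""
    out, s = [0.0], 0.0
    for i in range(1, len(y)):
        s += 0.5 * (y[i] + y[i - 1]) * dt
        out.append(s)
    return out

# Generate a tissue curve that satisfies the integral relation exactly on the
# grid (a trapezoid-rule discretization of dCt/dt = Ktrans*Cp - kep*Ct).
Icp = cumtrapz(cp, dt)
ct, Ict_prev = [0.0], 0.0
for i in range(1, len(t)):
    Ict_half = Ict_prev + 0.5 * dt * ct[-1]
    c = (Ktrans_true * Icp[i] - kep_true * Ict_half) / (1 + 0.5 * kep_true * dt)
    Ict_prev = Ict_half + 0.5 * dt * c
    ct.append(c)

# Least squares for ct ~ Ktrans*Icp - kep*Ict via 2x2 normal equations.
Ict = cumtrapz(ct, dt)
a11 = sum(u * u for u in Icp)
a12 = -sum(u * v for u, v in zip(Icp, Ict))
a22 = sum(v * v for v in Ict)
b1 = sum(u * y for u, y in zip(Icp, ct))
b2 = -sum(v * y for v, y in zip(Ict, ct))
det = a11 * a22 - a12 * a12
Ktrans_fit = (b1 * a22 - a12 * b2) / det
kep_fit = (a11 * b2 - a12 * b1) / det  # both recovered to numerical precision
```

Because the problem is linear, no iterative nonlinear optimizer is needed, which is the source of the efficiency gain the abstract reports.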

II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part pursues methodological developments along two approaches. The first is to develop model-free analysis methods for DCE-MRI functional heterogeneity evaluation. This approach is inspired by the rationale that radiotherapy-induced functional change could be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, the randomly assigned treatment/control groups received multiple fraction treatments, with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after treatment start, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than that from conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It is intended to address the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM performed better overall than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second method developed is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity on DCE images during contrast agent uptake.
In the small-animal experiment mentioned above, selected parameters from dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. Treatment/control group classification after the first treatment fraction was also more accurate with dynamic FSD parameters than with conventional PK statistics. These results suggest the promise of this novel method for capturing early therapeutic response.
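The Rényi (generalized) dimensions used in the fractal analysis above can be estimated from a box partition of a 2D parameter map: D_q = (1/(q−1))·log(Σ p_i^q)/log(ε). The sketch below uses a synthetic uniform map, not a PK rate-constant map, and a single box scale rather than a regression over scales.

```python
import math

# Hedged sketch: Renyi dimension D_q of an n x n intensity map from one box
# partition. `values` is a list of lists of non-negative intensities.

def renyi_dimension(values, n, box, q):
    total = sum(sum(row) for row in values)
    s = 0.0
    for bi in range(0, n, box):
        for bj in range(0, n, box):
            p = sum(values[i][j] for i in range(bi, bi + box)
                                 for j in range(bj, bj + box)) / total
            if p > 0:
                s += p ** q if q != 1 else p * math.log(p)
    eps = box / n           # box side as a fraction of the map
    if q == 1:              # information dimension (limit q -> 1)
        return s / math.log(eps)
    return math.log(s) / ((q - 1) * math.log(eps))

n = 64
uniform = [[1.0] * n for _ in range(n)]
d0 = renyi_dimension(uniform, n, box=8, q=0)  # box-counting dimension
d2 = renyi_dimension(uniform, n, box=8, q=2)  # correlation dimension
# A spatially uniform map gives D_q = 2 for every q; heterogeneity spreads
# the D_q spectrum, which is what separated treatment from control.
```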

The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version is widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of the PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.

In summary, this study addressed two problems of DCE-MRI application in radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.


Wireless sensor networks (WSNs) differ from conventional distributed systems in many aspects. The resource limitations of sensor nodes, the ad-hoc communication and topology of the network, and an unpredictable deployment environment are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. Thus, more research needs to be done on designing, implementing and maintaining software for WSNs. This thesis aims to contribute to research in this area by presenting an approach to WSN application development that improves the reusability, flexibility, and maintainability of the software. Firstly, we present a programming model and software architecture aimed at describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. Besides, we study different non-functional constraints of WSN applications and propose two approaches to optimizing the application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implements the programming model and the multi-layered software architecture as components. A graphical interface, code generation components and supporting tools are also included to help developers design, implement, optimize, and test WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies are provided to support the evaluation.
The first case study, a framework evaluation, is designed to assess the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability achieved by developing applications at a high level of abstraction, and the estimated overhead of using the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation and optimization of a real-world application named TempSense, in which a sensor network is used to monitor the temperature within an area.


The work presented in this paper is related to depth recovery from focus. The approach starts by calibrating the focal length of the camera using the Gaussian lens law for the thin-lens camera model. Two approaches are presented, based on the availability of the internal distance of the lens.
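The Gaussian (thin) lens law mentioned above, 1/f = 1/u + 1/v, directly yields the object depth once the focal length f and the lens-to-sensor distance v that brings a point into sharp focus are known. A minimal sketch with illustrative numbers:

```python
# Hedged sketch of depth from focus via the thin-lens law 1/f = 1/u + 1/v.
# The focal length and image distance below are illustrative values.

def depth_from_focus(f_mm, v_mm):
    """Object distance u from focal length f and image distance v."""
    if v_mm <= f_mm:
        raise ValueError("image distance must exceed focal length for a real object")
    return f_mm * v_mm / (v_mm - f_mm)

u = depth_from_focus(f_mm=50.0, v_mm=51.0)  # u = 2550 mm for this toy case
```

Note how sensitive u is to v near v = f: this is why calibrating f (and, in the paper's second approach, the internal lens distance) matters.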


Fruitful research on the durability of paving asphalts may come from two approaches: the improvement of the asphalt itself for durability, and the development of relatively rapid laboratory tests which will enable the design engineer to select or specify an asphalt based on quality and to make a correct estimate of the service life of a selected asphalt when used in a specific paving mixture. Research Project HR-124, "Development of a Laboratory Durability Test for Asphalts," sponsored by the Iowa Highway Research Board, falls in the second category and was intended to be the initial stage of an overall study on the development of a durability test for paving asphalts.


The current study analyzes the birth and development of two strategic alliances established between shrimp producers in Rio Grande do Norte: Unipesca and Coopercam. To achieve this aim, two approaches which, at first sight, could be considered contradictory were used: Transaction Cost Economics (TCE) and Embeddedness. The first approach is fundamentally based on the studies of Williamson (1985; 1991; 1996; 1999; 2000; 2002). Embeddedness, on the other hand, draws on a series of authors, such as Burt (1992), Granovetter (1973; 1985), Uzzi (1997), Gulati (1994; 1995; 1997; 1998; 1999; 2000), Nielsen (2005), Ring (2002), Ring and Van de Ven (1994) and Zafirovski (2002), among others. To analyze the birth and development of the cooperatives in this study, Gulati's work (1998) was used. It lays out the steps to be studied for a better comprehension of an alliance: the decision to start an alliance and the choice of partners, the decision about the governance structure, the evolution of the alliance, and the development of the companies which established the partnership. A case study following Yin's proposal (2001) was adopted. Semi-structured interviews with pre-defined scripts were conducted in two phases: at the beginning of 2006 and at the beginning of 2007. The subjects of the research were, in 2006, representative members of the main associations and corporations, besides the shrimp producers of the state, when the context of the activity was established. In the second phase, in 2007, representative members of the two cooperatives listed above were interviewed: the president of Coopercam and the marketing manager of Unipesca. Besides these two members, directors of two important organizations in each of these cooperatives were also interviewed, providing the information needed for the research.
Secondary data was also collected from the website of the Brazilian shrimp producers' association, as well as from news in important RN newspapers, such as Tribuna do Norte. The primary data was analyzed qualitatively, according to the documental analysis technique. Through the data collected, it can be concluded that the reasons that motivated the companies to cooperate can be explained in terms of transaction cost economics, whereas the choice of partners is more connected to aspects addressed by social embeddedness. When aspects related to development and evolution were analyzed, it could be seen that aspects from both TCE and Embeddedness were vital to explain the development of the cooperatives mentioned.


Structuring integrated social-ecological systems (SES) research remains a core challenge for achieving sustainability. Numerous concepts and frameworks exist, but there is a lack of mutual learning and orientation of knowledge between them. We focus on two approaches in particular: the ecosystem services concept and Elinor Ostrom’s diagnostic SES framework. We analyze the strengths and weaknesses of each and discuss their potential for mutual learning. We use knowledge types in sustainability research as a boundary object to compare the contributions of each approach. Sustainability research is conceptualized as a multi-step knowledge generation process that includes system, target, and transformative knowledge. A case study of the Southern California spiny lobster fishery is used to comparatively demonstrate how each approach contributes a different lens and knowledge when applied to the same case. We draw on this case example in our discussion to highlight potential interlinkages and areas for mutual learning. We intend for this analysis to facilitate a broader discussion that can further integrate SES research across its diverse communities.


Thesis (Ph.D.)--University of Washington, 2016-08


Understanding the dynamics of blood cells is crucial to discover biological mechanisms, develop new efficient drugs, design sophisticated microfluidic devices, and improve diagnostics. In this work, we focus on the dynamics of red blood cells in microvascular flow. Microvascular blood flow resistance has a strong impact on cardiovascular function and tissue perfusion. The flow resistance in microcirculation is governed by the flow behavior of blood through a complex network of vessels, where the distribution of red blood cells across vessel cross-sections may be significantly distorted at vessel bifurcations and junctions. We investigate the development of blood flow and its resistance starting from a dispersed configuration of red blood cells in simulations for different hematocrits, flow rates, vessel diameters, and aggregation interactions between red blood cells. Initially dispersed red blood cells migrate toward the vessel center, leading to the formation of a cell-free layer near the wall and to a decrease of the flow resistance. The development of the cell-free layer appears to be nearly universal when scaled with a characteristic shear rate of the flow, which allows an estimation of the length of vessel required for full flow development, $l_c \approx 25D$, with vessel diameter $D$. Thus, the potential effect of red blood cell dispersion at vessel bifurcations and junctions on the flow resistance may be significant in vessels which are shorter than or comparable to the length $l_c$. The presence of aggregation interactions between red blood cells leads in general to a reduction of blood flow resistance. The development of the cell-free layer thickness looks similar with and without aggregation interactions, although attractive interactions result in larger cell-free-layer plateau values.
However, because the aggregation forces are short-ranged, at high enough shear rates ($\bar{\dot{\gamma}} \gtrsim 50~\text{s}^{-1}$) aggregation of red blood cells does not bring a significant change to the blood flow properties. We also develop a simple theoretical model which describes the converged cell-free-layer thickness with respect to flow rate, assuming steady-state flow. The model is based on the balance between a lift force on red blood cells due to cell-wall hydrodynamic interactions and a shear-induced effective pressure due to cell-cell interactions in flow. We expect that these results can also be used to better understand the flow behavior of other suspensions of deformable particles such as vesicles, capsules, and cells. Finally, we investigate segregation phenomena in blood as a two-component suspension under Poiseuille flow, consisting of red blood cells and target cells. The spatial distribution of particles in blood flow is very important: for example, in nanoparticle drug delivery, the particles need to come close to the microvessel walls in order to adhere and bring the drug to a target position within the microvasculature. Here we consider that segregation can be described as a competition between shear-induced diffusion and the lift force that pushes every soft particle in a flow away from the wall. To investigate the segregation we use, on the one hand, 2D DPD simulations of red blood cells and target cells of different sizes and, on the other hand, the Fokker-Planck equation for the steady state. For the equation we measure the force profile, particle distribution, and diffusion constant across the channel. We compare simulation results with those from the Fokker-Planck equation and find a very good correspondence between the two approaches. Moreover, we investigate the diffusion behavior of target particles for different hematocrit values and shear rates.
Our simulation results indicate that the diffusion constant increases with increasing hematocrit and depends linearly on the shear rate. The third part of the study describes the development of a simulation model for complex vascular geometries. This development is important for reproducing the vascular systems of small pieces of tissue, which might be obtained from MRI or microscope images. The simulation model of complex vascular systems can be divided into three parts: modeling the geometry, developing in- and outflow boundary conditions, and decomposing the simulation domain for efficient computation. We have found that for the in- and outflow boundary conditions it is better to use the SDPD fluid than the DPD one, because of the density fluctuations along the channel in the latter. During flow in a straight channel, it is difficult to control the density of the DPD fluid. The SDPD fluid does not have that shortcoming, even in more complex channels with many branches and in- and outflows, because the force acting on the particles is calculated depending also on the local density of the fluid.
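The steady-state Fokker-Planck balance invoked above, where zero flux means D·dn/dy = F(y)·n(y), gives a Boltzmann-like profile n(y) ∝ exp(−U(y)/D) for a lift potential U. The sketch below uses an illustrative 1/y wall-repulsion potential and constants, not the thesis's measured force and diffusion profiles.

```python
import math

# Hedged sketch: steady-state cross-channel density from a wall-lift
# potential, n(y) ~ exp(-U(y)/D). Potential form and constants are
# illustrative assumptions.

def steady_profile(n_bins=99, k=0.02, D=1.0):
    ys = [(i + 1) / (n_bins + 1) for i in range(n_bins)]  # channel y in (0, 1)
    U = [k * (1.0 / y + 1.0 / (1.0 - y)) for y in ys]     # repulsion from both walls
    w = [math.exp(-u / D) for u in U]
    Z = sum(w)
    return ys, [x / Z for x in w]                         # normalized density

ys, n = steady_profile()
peak_y = ys[max(range(len(n)), key=lambda i: n[i])]  # cells accumulate mid-channel
```

A stronger lift (larger k) or weaker shear-induced diffusion (smaller D) sharpens the central peak, which is the qualitative margination/segregation competition described in the text.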


This paper presents a methodology to explore the impact of public spending on education on poverty. The methodology consists of two approaches: Benefit Incidence Analysis (BIA) and a behavioral approach. BIA considers the cost and use of the educational service and the distribution of the benefits among income groups. For the behavioral approach, we use a Probit model of school attendance in order to determine the influence of public spending on the probability that the poor attend school. As a complement, a measurement of targeting errors in the allocation of public spending is included in the methodology.
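The behavioral (Probit) component above models attendance as P(attend) = Φ(β0 + β1·spending), with Φ the standard normal CDF. The coefficients below are made-up placeholders, not estimates from the paper.

```python
import math

# Hedged sketch of a Probit attendance model with hypothetical coefficients.

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def attendance_prob(spending, b0=-0.8, b1=0.4):
    return phi(b0 + b1 * spending)

def marginal_effect(spending, b0=-0.8, b1=0.4):
    """dP/d(spending) = b1 * standard normal pdf at the linear index."""
    z = b0 + b1 * spending
    return b1 * math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

p_low, p_high = attendance_prob(0.0), attendance_prob(5.0)  # rises with spending
```

The marginal effect, not β1 itself, is what answers the paper's question of how much an extra unit of public spending raises the attendance probability of the poor.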


Complete root development takes at least two to three years after the tooth erupts into the oral cavity, so maintaining pulp vitality is extremely important to allow full dental and root development. Faced with a diagnosis of pulp necrosis in an immature tooth, the dentist's goal should always be to allow complete maturation of the roots and consequent apical closure. There are thus essentially two therapeutic approaches for necrotic immature teeth: apexification and revascularization. Apexification is considered the standard treatment for immature teeth, since it induces the formation of a calcified apical barrier. However, this procedure has several disadvantages, notably the risk of root fracture, since it does not thicken the root walls, leaving them thin and susceptible to fracture. In recent years a new treatment has been studied as an alternative to apexification. Revascularization consists of carrying growth factors into the canal space by stimulating bleeding of the periapical tissues, thereby naturally promoting thickening and complete development of the roots. Compared with apexification, revascularization therefore has some advantages and may, in the near future, become the treatment of choice upon a diagnosis of necrosis in an immature tooth.


Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate of the actual number of animals being killed, but they offer little information on the relation between collision rates and, for example, weather parameters, because the time of death is not precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model are at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany, using acoustic bat activity and wind speed as predictors of the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate, and it can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce it with a minimal loss of energy production.
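The core of such a model can be sketched as a Poisson rate with a log link in bat activity and wind speed, thinned by a carcass detection probability. This is a simplified stand-in for the paper's mixture model; all coefficients, the detection probability, and the data are invented.

```python
import math

# Hedged sketch: nightly collisions ~ Poisson(lambda), with
# log(lambda) = a + b*activity + c*wind, observed through carcass searches
# with detection probability `detection`. Everything numeric is hypothetical.

def collision_rate(activity, wind, a=-2.0, b=0.08, c=-0.15):
    """Expected collisions per night (log-linear in the predictors)."""
    return math.exp(a + b * activity + c * wind)

def log_likelihood(carcasses, activity, wind, detection=0.4, **coef):
    """Poisson log-likelihood of carcass counts, thinned by detection."""
    ll = 0.0
    for k, act, w in zip(carcasses, activity, wind):
        lam = detection * collision_rate(act, w, **coef)
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

activity = [10.0, 40.0, 5.0]   # bat passes per night (hypothetical)
wind = [3.0, 2.0, 8.0]         # m/s (hypothetical)
carcasses = [0, 2, 0]
ll = log_likelihood(carcasses, activity, wind)
# Once the coefficients are fitted, collision_rate() predicts collisions from
# the density index alone -- the "without carcass searches" use in the text.
```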

Relevância:

60.00%

Publicador:

Resumo:

The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods strongly depends on proper modeling of connection behavior. The primary challenge in modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches, mathematical models and informational models, are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined, followed by a discussion of their merits and deficiencies. To capitalize on the merits of both mathematical and informational representations, a new approach, a hybrid modeling framework, is developed and demonstrated through the modeling of beam-to-column connections.

Component-based modeling is a compromise between two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, five critical components of excessive deformation are identified. Constitutive relationships for angles, column panel zones, and contact between angles and column flanges are derived using only material and geometric properties and theoretical mechanics considerations. Those for slip and bolt-hole ovalization are approximated by empirically suggested mathematical representations and expert opinion. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationships of the components. Lastly, the moment-rotation curves of the mathematical models are compared with those of experimental tests. In the case of a top-and-seat angle connection with double web angles, the pinched hysteretic response is predicted quite well by complete mechanical models, which rely only on material and geometric properties. On the other hand, to capture the highly pinched behavior of a top-and-seat angle connection without web angles, a mathematical model requires the slip and bolt-hole ovalization components, which are more amenable to informational modeling.

An alternative method is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. The information is extracted from observed data and stored in neural networks. Two different training data sets, analytically generated and experimental, are tested to examine the performance of informational models. Both informational models show acceptable agreement with the moment-rotation curves of the experiments. Adding a degradation parameter improves the informational models when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore provide no insight into the underlying mechanics of the components.

In this study, a new hybrid modeling framework is proposed, in which a conventional mathematical model is complemented by informational methods. The basic premise of the proposed hybrid methodology is that not all features of the system response are amenable to mathematical modeling, hence informational alternatives are considered. This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex and therefore not suitable for modeling within building frame analysis. The role of the informational methods is to model the aspects that the mathematical model leaves out. The autoprogressive algorithm and self-learning simulation extract these missing aspects from the system response. In the hybrid framework, experimental data are an integral part of modeling, rather than being used strictly for validation. The potential of the hybrid methodology is illustrated by modeling the complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation, such as angles, flange plates, and the column panel zone, are idealized in a mathematical model using a complete mechanical approach. Although the mathematical model represents the envelope curves in terms of initial stiffness and yield strength, it is not capable of capturing the pinching effects. Pinching is caused mainly by separation between angles and column flanges, as well as slip between angles/flange plates and beam flanges. These components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated against those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections. In addition, the developed hybrid model is successfully used to predict the behavior of a newly designed connection.
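The division of labor described above, a mechanics-based backbone corrected by a data-driven term learned from experiments, can be illustrated with a toy example. Everything here (the bilinear spring, the single scalar correction, the numbers) is a hypothetical sketch, not the dissertation's actual component or neural-network models.

```python
# Toy hybrid model: mechanics-based backbone + correction fitted to data.
# All model forms and values are illustrative assumptions.

def backbone_moment(rotation, stiffness=100.0, yield_moment=50.0):
    """Mechanics-based part: bilinear elastic-perfectly-plastic spring,
    M = clamp(k * theta, -My, My)."""
    m = stiffness * rotation
    return max(-yield_moment, min(yield_moment, m))

def fit_correction(rotations, measured_moments):
    """Informational part (caricatured): learn a scalar correction factor
    from experimental residuals by a least-squares ratio of measured to
    backbone response."""
    num = sum(backbone_moment(r) * m for r, m in zip(rotations, measured_moments))
    den = sum(backbone_moment(r) ** 2 for r in rotations)
    return num / den

def hybrid_moment(rotation, correction):
    """Hybrid prediction: backbone response scaled by the learned term."""
    return correction * backbone_moment(rotation)

# Synthetic "experiment": the true response is 0.8x the backbone model.
thetas = [0.1, 0.2, 0.4, 0.6]
measured = [0.8 * backbone_moment(t) for t in thetas]
c = fit_correction(thetas, measured)
print(hybrid_moment(0.3, c))  # prints 24.0
```

In the dissertation the data-driven part is a neural network capturing pinching and slip rather than a single scalar, but the structure is the same: the experiment trains only the part of the response the mathematical model cannot express.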

Relevância:

60.00%

Publicador:

Resumo:

Information concerning the run-time behaviour of programs ("program profiling") can be of the greatest assistance in improving program efficiency. Two software devices have been developed for use on ICL 1900 Series machines to provide such information. DIDYMUS is probabilistic in approach and uses multi-tasking facilities to sample the instruction addresses used by a program at run time. It will work regardless of the source language of the program and matches the detected addresses against a loader map to produce a histogram. SCAMP is restricted to profiling Algol 68-R programs, but provides deterministic information concerning those language constructs that are monitored. Procedure calls to appropriate counting routines are inserted into the source text in a pre-pass prior to compilation. The profile information is printed out at the end of the program run. It has been found that these two approaches complement each other very effectively.
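The two complementary approaches, deterministic call counting (in the spirit of SCAMP) and probabilistic sampling into a histogram (in the spirit of DIDYMUS), can be caricatured in modern Python. The mechanisms below are illustrative stand-ins, not the original ICL 1900 implementations.

```python
# Sketch of the two profiling styles described above; names and
# mechanisms are illustrative, not the original DIDYMUS/SCAMP code.
import functools
from collections import Counter

call_counts = Counter()

def instrumented(fn):
    """Deterministic profiling: count every call, analogous to counting
    procedures inserted into the source in a pre-compilation pass."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        call_counts[fn.__name__] += 1
        return fn(*args, **kwargs)
    return wrapper

def sample_histogram(address_trace, step):
    """Probabilistic profiling: sample every `step`-th entry of an
    address trace and histogram the results, as a sampling profiler
    matching addresses against a loader map would."""
    return Counter(address_trace[::step])

@instrumented
def hot_function():
    return sum(range(10))

for _ in range(5):
    hot_function()

# Pretend run-time address log: the program spends most time in a loop.
trace = ["loop"] * 90 + ["setup"] * 10
histogram = sample_histogram(trace, step=10)
```

The deterministic counter gives exact counts for the constructs it instruments, while the sampler needs no knowledge of the source language and only approximates where time is spent, which is exactly why the two approaches complement each other.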