Abstract:
Programmed cell death is characterized by a cascade of tightly controlled events that culminate in the orchestrated death of the cell. In multicellular organisms, autophagy and apoptosis are recognized as two principal means by which these genetically determined cell deaths occur. During plant-microbe interactions, cell death programs can mediate both resistant and susceptible events. Via oxalic acid (OA), the necrotrophic phytopathogen Sclerotinia sclerotiorum hijacks host pathways and induces cell death in host plant tissue, resulting in hallmark apoptotic features in a time- and dose-dependent manner. OA-deficient mutants are non-pathogenic and trigger a restricted cell death phenotype in the host that unexpectedly exhibits markers associated with the plant hypersensitive response, including callose deposition and a pronounced oxidative burst, suggesting that the plant can recognize and, in this case, respond defensively. The details of this plant-directed restrictive cell death associated with OA-deficient mutants are the focus of this work. Using a combination of electron and fluorescence microscopy, chemical effectors and reverse genetics, we show that this restricted cell death is autophagic. Inhibition of autophagy rescued the non-pathogenic mutant phenotype. These findings indicate that autophagy is a defense response in this necrotrophic fungus/plant interaction and suggest a novel function associated with OA; namely, the suppression of autophagy. These data suggest that not all cell deaths are equivalent, and though programmed cell death occurs in both situations, the outcome is predicated on who is in control of the cell death machinery. Based on our data, we suggest that it is not cell death per se that dictates the outcome of certain plant-microbe interactions, but the manner by which cell death occurs that is crucial.
Abstract:
The secretion of cytokines by immune cells plays a significant role in determining the course of an inflammatory response. The levels and timing of each cytokine released are critical for mounting an effective but confined response, whereas excessive or dysregulated inflammation contributes to many diseases. Cytokines are both culprits and targets for effective treatments in some diseases. The multiple points and mechanisms that have evolved for cellular control of cytokine secretion highlight the potency of these mediators and the fine tuning required to manage inflammation. Cytokine production in cells is regulated by cell signaling and at the levels of mRNA and protein synthesis. Thereafter, the intracellular transport pathways and molecular trafficking machinery have intricate and essential roles in dictating the release and activity of cytokines. The trafficking machinery and secretory (exocytic) pathways are complex and highly regulated in many cells, involving specialized membranes, molecules and organelles that enable these cells to deliver cytokines to often-distinct areas of the cell surface, in a timely manner. This review provides an overview of secretory pathways - both conventional and unconventional - and key families of trafficking machinery. The prevailing knowledge about the trafficking and secretion of a number of individual cytokines is also summarized. In conclusion, we present emerging concepts about the functional plasticity of secretory pathways and their modulation for controlling cytokines and inflammation.
Abstract:
Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure and delivering Value for Money (VfM). As part of the background to this challenge, a critique is given of current practice in selecting the approach used to procure major public sector infrastructure in Australia, which is akin to the Multi-Attribute Utility Approach (MAUA). To contribute towards addressing the key weaknesses of MAUA, a new first-order procurement decision-making model is presented. The model addresses the make-or-buy decision (risk allocation), the bundling decision (property rights incentives), and the exchange relationship decision (relational to arm's-length exchange) in its novel approach to articulating a procurement strategy designed to yield superior VfM across the whole life of the asset. The aim of this paper is to report on the development of this decision-making model in terms of the procedural tasks to be followed and the method being used to test the model. The planned approach to testing the model uses a sample of 87 Australian major infrastructure projects with a combined value of AUD 32 billion and deploys a key proxy for VfM, namely expressions of interest, as an indicator of competition.
Abstract:
The giant freshwater prawn (Macrobrachium rosenbergii) or GFP is one of the most important freshwater crustacean species in the inland aquaculture sector of many tropical and subtropical countries. Since the 1990s, there has been rapid global expansion of freshwater prawn farming, especially in Asian countries, with an average annual rate of increase of 48% between 1999 and 2001 (New, 2005). In Vietnam, GFP is cultured in a variety of culture systems, typically in integrated or rotational rice-prawn culture (Phuong et al., 2006), and has become one of the most common farmed aquatic species in the country, due to its ability to grow rapidly and to attract a high market price and strong demand. Despite the potential for expanded production, the sustainability of freshwater prawn farming in the region is currently threatened by low production efficiency and the vulnerability of farmed stocks to disease. Commercial large-scale and small-scale GFP farms in Vietnam have experienced relatively low stock productivity, large size and weight variation, a low proportion of edible meat (large head-to-body ratio), and a scarcity of good quality seed stock. The current situation highlights the need for a systematic stock improvement program for GFP in Vietnam aimed at improving economically important traits in this species. This study reports on a breeding program for fast growth employing combined (between- and within-family) selection in giant freshwater prawn in Vietnam. The base population was synthesized using a complete diallel cross including 9 crosses from two local stocks (DN and MK strains) and a third exotic stock (Malaysian strain - MY). In the next three selection generations, matings were conducted between genetically unrelated brood stock to produce full-sib and (paternal) half-sib families. All families were produced and reared separately until juveniles in each family were tagged as a batch using visible implant elastomer (VIE) at a body size of approximately 2 g. After tags were verified, 60 to 120 juveniles chosen randomly from each family were released into two common earthen ponds of 3,500 m2 each for a grow-out period of 16 to 18 weeks. Selection at harvest was applied to body weight using a combined (between- and within-family) selection approach. In the F0, F1, F2 and F3 generations, 81, 89, 96 and 114 families, respectively, were produced for the Selection line. In addition to the Selection line, 17 to 42 families were produced for the Control group in each generation. Results reported here are based on a data set consisting of 18,387 body and 1,730 carcass records, as well as full pedigree information collected over four generations. Variance and covariance components were estimated by restricted maximum likelihood, fitting a multi-trait animal model. Experiments assessing the performance of VIE tags in juvenile GFP of different size classes, and in individuals tagged with different numbers of tags, showed that juvenile GFP at 2 g were of suitable size for VIE tagging, with no negative effects evident on growth or survival. Tag retention rates were above 97.8% and tag readability was 100%, with a correct assignment rate of 95% through to a mature animal size of up to 170 g. Across generations, estimates of heritability for body traits (body weight, body length, cephalothorax length, abdominal length, cephalothorax width and abdominal width) and carcass weight traits (abdominal weight, skeleton-off weight and telson-off weight) were moderate and ranged from 0.14 to 0.19 and 0.17 to 0.21, respectively.
Body trait heritabilities estimated for females were significantly higher than for males, whereas carcass weight trait heritabilities estimated for females and males were not significantly different (P > 0.05). Maternal and common environmental effects for body traits accounted for 4 to 5% of the total variance and were greater in females (7 to 10%) than in males (4 to 5%). Genetic correlations among body traits were generally high in both sexes. Genetic correlations between body and carcass weight traits were also high in the mixed sexes. The average selection response (% per generation) for body weight (transformed to square root), estimated as the difference between the Selection and the Control groups, was 7.4% calculated from least squares means (LSMs), 7.0% from estimated breeding values (EBVs) and 4.4% calculated from EBVs between two consecutive generations. Favourable correlated selection responses (estimated from LSMs) were detected for other body traits (12.1%, 14.5%, 10.4%, 15.5% and 13.3% for body length, cephalothorax length, abdominal length, cephalothorax width and abdominal width, respectively) over three selection generations. Data in the second selection generation showed positive correlated responses for carcass weight traits (8.8%, 8.6% and 8.8% for abdominal weight, skeleton-off weight and telson-off weight, respectively). Data in the third selection generation showed that heritabilities for body traits were moderate and ranged from 0.06 to 0.11 and 0.11 to 0.22 at weeks 10 and 18, respectively. Body trait heritabilities estimated at week 10 were not significantly lower than at week 18. Genetic correlations between body traits within age, and for body traits between ages, were generally high. Overall, our results suggest that growth rate responds well to the application of family selection and that carcass weight traits can also be improved in parallel using this approach. Moreover, selection for high growth rate in GFP can be undertaken successfully before full market size has been reached. The outcome of this study was the production of an improved culture strain of GFP for the Vietnamese culture industry that will be trialed in real farm production environments to confirm the genetic gains identified in the experimental stock improvement program.
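The heritabilities and selection responses reported above come from REML variance components and least squares means; as a rough illustration of those two calculations only, here is a minimal Python sketch with hypothetical variance components and group means (the study's model, software and actual estimates are not reproduced here).

```python
# Hypothetical REML variance components for sqrt-transformed body weight
# (illustrative values only; the study's estimates are not reproduced here).
var_additive = 0.45   # additive genetic variance (sigma^2_A)
var_common   = 0.12   # maternal / common full-sib environment (sigma^2_C)
var_residual = 2.10   # residual variance (sigma^2_E)

var_phenotypic = var_additive + var_common + var_residual
h2 = var_additive / var_phenotypic        # narrow-sense heritability
c2 = var_common / var_phenotypic          # common environmental proportion
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}")

# Selection response per generation as a percentage, computed as the relative
# difference between Selection-line and Control-line least squares means (LSMs)
# of sqrt(body weight); both values below are hypothetical.
lsm_selection, lsm_control = 6.45, 6.01
response_pct = 100.0 * (lsm_selection - lsm_control) / lsm_control
print(f"Selection response: {response_pct:.1f}% per generation")
```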
Abstract:
Reliability analysis is crucial for reducing the unexpected downtime and severe failures of engineering assets under ever-tightening maintenance budgets. Hazard-based reliability methods are of particular interest as hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: one is that the baseline failure distribution accurately represents the population concerned, and the other concerns the assumed form of the effects of covariates on hazards. These two assumptions may be difficult to satisfy and therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations that these two assumptions impose on statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis. Involving condition data or covariates is a natural solution to this challenge. A critical issue for involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality due to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research thus investigates the incomplete-covariates problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and effects on the reliability analysis results. Since these existing approaches could underestimate the variance in regressions and introduce extra uncertainties to reliability analysis, the developed NNHMs are extended to include handling incomplete covariates as an integral part. The extended versions of NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical approaches to handling incomplete covariates. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. Commonly used covariate extrapolation methods are thus unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, this research projects covariate states. The estimated covariate states and the unknown covariate values in future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill has been conducted, and it demonstrates that this new multi-step reliability analysis procedure is able to generate more accurate analysis results.
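The abstract does not spell out the NNHM architecture; as a minimal sketch of the general idea - a neural network mapping asset age and condition covariates directly to a non-negative hazard estimate, without assuming a baseline failure distribution or a fixed form for covariate effects - the following Python example uses an assumed two-layer network with illustrative, untrained weights and placeholder inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # Smooth, strictly positive transform so the predicted hazard is >= 0.
    return np.log1p(np.exp(x))

class SimpleHazardNet:
    """Tiny feed-forward net: [age, condition covariates...] -> hazard."""

    def __init__(self, n_inputs, n_hidden=8):
        self.W1 = rng.normal(scale=0.3, size=(n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.3, size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def hazard(self, x):
        h = np.tanh(x @ self.W1 + self.b1)        # hidden layer
        return softplus(h @ self.W2 + self.b2)    # non-negative hazard estimate

# Hypothetical, already-scaled inputs: asset age plus two condition covariates
# (e.g. vibration level and temperature). Weights here are untrained; in
# practice they would be fitted to failure and suspension histories.
net = SimpleHazardNet(n_inputs=3)
x = np.array([0.8, 0.3, 0.5])
print("Predicted hazard:", float(net.hazard(x)[0]))
```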
Abstract:
The preferential invasion of particular red blood cell (RBC) age classes may offer a mechanism by which certain species of Plasmodia regulate their population growth. Asexual reproduction of the parasite within RBCs exponentially increases the number of circulating parasites; limiting this explosion in parasite density may be key to providing sufficient time for the parasite to reproduce, and for the host to develop a specific immune response. It is critical that the role of preferential invasion in infection be properly understood in order to model the within-host dynamics of different Plasmodia species. We develop a simulation model to show that limiting the range of RBC age classes available for invasion is a credible mechanism for restricting parasite density, one which is equally as important as the maximum parasite replication rate and the duration of the erythrocytic cycle. Different species of Plasmodia that regularly infect humans exhibit different preferences for RBC invasion, with all species except P. falciparum appearing to exhibit a combination of characteristics which are able to self-regulate parasite density.
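As a toy illustration of the mechanism described above - restricting invasion to a window of young RBC age classes caps the pool of susceptible cells and hence the parasite density - here is a minimal discrete-time sketch in Python; all parameter values and the update scheme are illustrative assumptions, not the published model.

```python
import numpy as np

# All parameter values below are illustrative placeholders.
RBC_LIFESPAN    = 120      # days an uninfected RBC circulates
DAILY_RBC_PROD  = 2.0e9    # new RBCs entering circulation per day
CYCLE_DAYS      = 2        # length of the erythrocytic cycle
BURST_SIZE      = 16       # merozoites released when an infected cell ruptures
INVASION_PROB   = 1e-10    # per merozoite-susceptible-RBC encounter
AGE_WINDOW_DAYS = 14       # only RBCs younger than this can be invaded

rbc = np.full(RBC_LIFESPAN, DAILY_RBC_PROD)  # age-structured uninfected RBCs
infected = np.zeros(CYCLE_DAYS)              # infected cells by cycle age
infected[0] = 1e4                            # initial inoculum

for day in range(30):
    # Infected cells completing the cycle rupture and release merozoites.
    merozoites = infected[-1] * BURST_SIZE
    infected = np.roll(infected, 1)
    infected[0] = 0.0

    # Invasion is restricted to the permitted (young) RBC age classes.
    susceptible = rbc[:AGE_WINDOW_DAYS].sum()
    new_infections = min(susceptible,
                         merozoites * susceptible * INVASION_PROB)
    rbc[:AGE_WINDOW_DAYS] *= 1.0 - new_infections / susceptible
    infected[0] = new_infections

    # Uninfected RBCs age one day; senescent cells are cleared, new ones added.
    rbc = np.roll(rbc, 1)
    rbc[0] = DAILY_RBC_PROD

    print(f"day {day:2d}: parasitized cells = {infected.sum():.3e}")
```

Shrinking AGE_WINDOW_DAYS lowers the ceiling on new infections per cycle, which is the self-regulation effect discussed above.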
Abstract:
This paper raises questions about the ethical issues that arise for academics and universities when undergraduate students enrol in classes outside of their discipline - classes that are not designed to be multi-disciplinary or introductory. We term these students 'accidental tourists'. Differences between disciplines in terms of pedagogy, norms, language and understanding may pose challenges for accidental tourists in achieving desired learning outcomes. This paper begins a discussion about whether lecturers and universities have any ethical obligations towards supporting the learning of these students. Recognising that engaging with only one ethical theory leads to a fragmented moral vision, this paper employs a variety of ethical theories to examine any possible moral obligations that may fall upon lecturers and/or universities. In regard to lecturers, the paper critically engages with the ethical theories of utilitarianism, Kantianism and virtue ethics (Aristotle) to determine the extent of any academic duty to accidental tourists. In relation to universities, this paper employs the emerging ethical theory of organisational ethics as a lens through which to critically examine any possible obligations. Organisational ethics stems from the recognition that moral demands also exist for organisations, so organisations must be reconceptualised as ethical actors and their policies and practices subject to ethical scrutiny. The analysis in this paper illustrates the challenges faced by lecturers, some of whom, we theorise, may experience a form of moral distress when facing a conflict between personal beliefs and organisational requirements. It also critically examines the role and responsibilities of universities towards students and towards their staff, and the inherent moral tensions between a market model and demands for 'good' learning experiences. This paper highlights the tensions for academics, between academics and universities, and within university policy, and indicates the need for greater reflection about this issue, especially given the many constraints facing lecturers and universities.
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions from real world systems that could otherwise be expensive or impractical. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and generate information at a higher level, where emerging patterns can be observed. This technique is data-intensive, as explicit data at a fine level of detail are used, and computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are, however, fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Using such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities' physical characteristics, and b) agents, which describe their behaviour according to their goals and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond – e.g. weather for solar panels – or to describe the assets and their relation to one another – e.g. the network assets.
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains, such as transport, which is part of future work with the addition of electric vehicles.
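MODAM itself is implemented on Java-based OSGi and Eclipse plugins; the following Python sketch is only meant to illustrate the asset/agent separation described above - physical characteristics in one reusable class, goal-driven behaviour in another - using a hypothetical battery example whose class names, parameters and charging logic are illustrative, not MODAM's API.

```python
from dataclasses import dataclass

@dataclass
class BatteryAsset:
    """Physical characteristics only - reusable across simulations."""
    capacity_kwh: float
    max_depth_of_discharge: float   # fraction of capacity that is usable
    lifetime_cycles: int

class BatteryAgent:
    """Behaviour layered on top of an asset; the goal decides how it is used."""

    def __init__(self, asset: BatteryAsset, goal: str):
        self.asset = asset
        self.goal = goal                 # e.g. "peak_shaving" or "backup"
        self.state_of_charge = 0.5 * asset.capacity_kwh

    def step(self, net_demand_kw: float, hours: float = 1.0) -> float:
        """Return the power still drawn from (+) or fed into the grid."""
        floor = self.asset.capacity_kwh * (1.0 - self.asset.max_depth_of_discharge)
        if self.goal == "peak_shaving" and net_demand_kw > 0:
            discharge = min(net_demand_kw * hours, self.state_of_charge - floor)
            self.state_of_charge -= discharge
            return net_demand_kw - discharge / hours
        if self.goal == "backup":
            # Keep the battery topped up; charge at up to 2 kW whenever not full.
            charge = min(self.asset.capacity_kwh - self.state_of_charge, 2.0 * hours)
            self.state_of_charge += charge
            return net_demand_kw + charge / hours
        return net_demand_kw

# The same physical asset description behaves differently under different goals.
shaver = BatteryAgent(BatteryAsset(10.0, 0.8, 5000), goal="peak_shaving")
backup = BatteryAgent(BatteryAsset(10.0, 0.8, 5000), goal="backup")
print(shaver.step(net_demand_kw=3.0), backup.step(net_demand_kw=3.0))
```

Pairing one asset description with agents pursuing different goals is the reusability and composability benefit the abstract points to.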
Abstract:
Proton-bound dimers consisting of two glycerophospholipids with different headgroups were prepared using negative ion electrospray ionization and dissociated in a triple quadrupole mass spectrometer. Analysis of the tandem mass spectra of the dimers using the kinetic method provides, for the first time, an order of acidity for the phospholipid classes in the gas phase of PE < PA << PG < PS < PI. Hybrid density functional calculations on model phospholipids were used to predict the absolute deprotonation enthalpies of the phospholipid classes from isodesmic proton transfer reactions with phosphoric acid. The computational data largely support the experimental acidity trend, with the exception of the relative acidity ranking of the two most acidic phospholipid species. Possible causes of the discrepancy between experiment and theory are discussed and the experimental trend is recommended. The sequence of gas phase acidities for the phospholipid headgroups is found to (1) have little correlation with the relative ionization efficiencies of the phospholipid classes observed in the negative ion electrospray process, and (2) correlate well with fragmentation trends observed upon collisional activation of phospholipid \[M - H](-) anions. (c) 2005 American Society for Mass Spectrometry.
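For readers unfamiliar with the kinetic method mentioned above: in its simplest form (entropy differences neglected), the relative abundances of the two deprotonated phospholipids produced by dissociation of the proton-bound dimer are related to their deprotonation enthalpies by the generic expression below, where I denotes fragment ion abundance, ΔH_acid the gas-phase deprotonation enthalpy and T_eff the effective temperature of the activated dimer. This is the standard textbook relation, not an equation quoted from the paper.

```latex
\ln\!\left(\frac{I(\mathrm{A_1^{-}})}{I(\mathrm{A_2^{-}})}\right)
  \;\approx\;
  \frac{\Delta H_{\mathrm{acid}}(\mathrm{A_2H}) - \Delta H_{\mathrm{acid}}(\mathrm{A_1H})}{R\,T_{\mathrm{eff}}}
```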
Abstract:
The position(s) of carbon-carbon double bonds within lipids can dramatically affect their structure and reactivity and thus has a direct bearing on biological function. Commonly employed mass spectrometric approaches to the characterization of complex lipids, however, fail to localize sites of unsaturation within the molecular structure and thus cannot distinguish naturally occurring regioisomers. In a recent communication [Thomas, M. C.; Mitchell, T. W.; Blanksby, S. J. J. Am. Chem. Soc. 2006, 128, 58-59], we have presented a new technique for the elucidation of double bond position in glycerophospholipids using ozone-induced fragmentation within the source of a conventional electrospray ionization mass spectrometer. Here we report the on-line analysis, using ozone electrospray mass spectrometry (OzESI-MS), of a broad range of common unsaturated lipids including acidic and neutral glycerophospholipids, sphingomyelins, and triacylglycerols. All lipids analyzed are found to form a pair of chemically induced fragment ions diagnostic of the position of each double bond(s) regardless of the polarity, the number of charges, or the adduction (e.g., [M - H](-), [M - 2H](2-), [M + H](+), [M + Na](+), [M + NH4](+)). The ability of OzESI-MS to distinguish lipids that differ only in the position of the double bonds is demonstrated using the glycerophosphocholine standards, GPCho(9Z-18:1/9Z-18:1) and GPCho(6Z-18:1/6Z-18:1). While these regioisomers cannot be differentiated by their conventional tandem mass spectra, the OzESI-MS spectra reveal abundant fragment ions of distinctive mass-to-charge ratio (m/z). The approach is found to be sufficiently robust to be used in conjunction with the m/z 184 precursor ion scans commonly employed for the identification of phosphocholine-containing lipids in shotgun lipidomic analyses. This tandem OzESI-MS approach was used, in conjunction with conventional tandem mass spectral analysis, for the structural characterization of an unknown sphingolipid in a crude lipid extract obtained from a human lens. The OzESI-MS data confirm the presence of two regioisomers, namely, SM(d18:0/15Z-24:1) and SM(d18:0/17Z-24:1), and suggest the possible presence of a third isomer, SM(d18:0/19Z-24:1), in lower abundance. The data presented herein demonstrate that OzESI-MS is a broadly applicable, on-line approach for structure determination and, when used in conjunction with established tandem mass spectrometric methods, can provide near complete structural characterization of a range of important lipid classes. As such, OzESI-MS may provide important new insight into the molecular diversity of naturally occurring lipids. (c) 2005 American Society for Mass Spectrometry.
Abstract:
In 2001, 45% (2.7 billion) of the world's population of approximately 6.1 billion lived in 'moderate poverty' on less than US$2 per person per day (World Population Summary, 2012). In the last 60 years there have been many theories attempting to explain development - why some countries have experienced the fastest growth in history while others stagnate - and so far no way has been found to explain the differences. Traditional views imply that development is the aggregation of successes from multiple individual business enterprises, but this ignores the interactions between and among institutions, organisations and individuals in the economy, which can often have unpredictable effects. Complexity Development Theory (CDT) proposes that by viewing development as an emergent property of society, we can help create better development programs at the organisational, institutional and national levels. This paper asks how the principles of complex adaptive systems (CAS) can be used to develop CDT principles for developing and operating development programs at the bottom of the pyramid in developing economies. To investigate this research question we conduct a literature review to define and describe CDT and create propositions for testing. We illustrate these propositions using a case study of an Asset Based Community Development (ABCD) Program for existing and nascent entrepreneurs in the Democratic Republic of the Congo (DRC). We found evidence that all the principles of CDT were related to the characteristics of CAS. If this is the case, development programs will be able to select the CAS characteristics needed to test these propositions.
Abstract:
The coffee components kahweol and cafestol (K/C) have been reported to protect the colon and other organs of the rat against the formation of DNA adducts by 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) and aflatoxin B1. PhIP is a cooked-food mutagen to which significant human exposure and a role in colon cancer etiology are attributed, and, interestingly, such cancers appear to develop at a lower rate in consumers of coffees with high amounts of K/C. Earlier studies in rodent liver have shown that a key role in the chemopreventive effect of K/C is likely to be due to the potential of these compounds to induce the detoxification of xenobiotics by glutathione transferase (GST) and to enhance the synthesis of the corresponding co-factor glutathione. However, mutagens like PhIP may also be detoxified by UDP-glucuronosyl transferase (UDPGT), for which data are lacking regarding a potential effect of K/C. Therefore, in the present study, we investigated the effect of K/C on UDPGT and, concomitantly, we studied overall GST and the pattern of individual GST classes, particularly GST-θ, which was not included in earlier experiments. In addition, we analyzed the organ-dependence of these potentially chemopreventive effects. K/C was fed to male F344 rats at 0.122% in the chow for 10 days. Enzyme activities in liver, kidney, lung, colon, salivary gland, pancreas, testis, heart and spleen were quantified using five characteristic substrates, and the hepatic protein pattern of GST classes α, μ, and π was studied with affinity chromatography/HPLC. Our study showed that K/C is not only capable of increasing overall GST and GST classes α, μ, and π but also of enhancing UDPGT and GST-θ. All investigated K/C effects were strongest in liver and kidney, and some response was seen in lung and colon but none in the other organs. In summary, our results show that K/C treatment leads to a wide spectrum of increases in phase II detoxification enzymes. Notably, these effects occurred preferentially in the well perfused organs liver and kidney, which may thus not only contribute to local protection but also to anti-carcinogenesis in distant, less stimulated organs such as the colon.
Abstract:
Polymorphisms of glutathione transferases (GST) are important genetic determinants of susceptibility to environmental carcinogens (Rebbeck, 1997). The GSTs are a multigene family of dimeric enzymes involved in detoxification, and, in a few cases, the bioactivation of a variety of xenobiotics (Hayes et al., 1995). The cytosolic GST enzyme family consists of four major classes of enzymes, referred to as alpha, mu, pi and theta. Several members of this family (for example, GSTM1, GSTT1 and GSTP1) are polymorphic in human populations (Wormhoudt et al., 1999). Molecular epidemiology studies have examined the role of GST polymorphisms as susceptibility factors for environmentally and/or occupationally induced cancers (Wormhoudt et al., 1999). In particular, case-control studies showed a relationship between the GSTM1 null genotype and the development of cancer in association with smoking habits, which has been demonstrated for cancers of the respiratory and gastrointestinal tracts as well as other cancer types (Miller et al., 1997). Only a few molecular epidemiological studies have addressed the role of GSTT1 and GSTP1 polymorphisms in cancer susceptibility. Since GSTP1 is a key player in the biotransformation/bioactivation of benzo(a)pyrene, GSTP1 may be even more important than GSTM1 in the prevention of tobacco-induced cancers (Harries et al., 1997; Harris et al., 1998). To date, this relationship has not been sufficiently addressed in humans. Comprehensive molecular epidemiological studies may add to the current knowledge of the role of GST polymorphisms in cancer susceptibility and extend the knowledge gained from approaches that used phenotyping, such as GSTM1 activity as it relates to trans-stilbene oxide, or polymerase chain reaction (PCR) based genotyping of polymorphic isoenzymes (Bell et al., 1993; Pemble et al., 1994; Harries et al., 1997).
Abstract:
The purpose of this paper is to review existing knowledge management (KM) practices within the field of asset management, identify gaps, and propose a new approach to managing knowledge for asset management. Existing approaches to KM in the field of asset management are incomplete, with the focus primarily on the application of data and information systems, for example the use of an asset register. It is contended that these approaches provide access to explicit knowledge and overlook the importance of tacit knowledge acquisition, sharing and application. In doing so, current KM approaches within asset management tend to neglect the significance of relational factors, whereas studies in the knowledge management field have shown that relational modes such as social capital are imperative for effective KM outcomes. In this paper, we argue that incorporating a relational approach to KM is more likely to contribute to the exchange of ideas and the development of creative responses necessary to improve decision-making in asset management. This conceptual paper uses extant literature to explain knowledge management antecedents and explore its outcomes in the context of asset management. KM is a component of the new Integrated Strategic Asset Management (ISAM) framework, developed in conjunction with asset management industry associations (AAMCoG, 2012), that improves asset management performance. In this paper we use Nahapiet and Ghoshal's (1998) model to explain the antecedents of a relational approach to knowledge management. Further, we develop an argument that relational knowledge management is likely to contribute to the improvement of ISAM framework components, such as Organisational Strategic Management, Service Planning and Delivery. The main contribution of the paper is a novel and robust approach to managing knowledge that leads to the improvement of asset management outcomes.
Abstract:
Spatial variation of seismic ground motions is caused by the incoherence effect, wave passage, and local site conditions. This study focuses on the effects of spatial variation of earthquake ground motion on the responses of adjacent reinforced concrete (RC) frame structures. The adjacent buildings are modeled considering soil-structure interaction (SSI) so that the buildings can interact with each other under uniform and non-uniform ground motions. Three different site classes are used to model the soil layers of the SSI system. Based on the fast Fourier transform (FFT), spatially correlated non-uniform ground motions are generated compatible with a known power spectral density function (PSDF) at different locations. Numerical analyses are carried out to investigate the displacement responses and the absolute maximum base shear forces of adjacent structures subjected to spatially varying ground motions. The results are presented in terms of the related parameters affecting the structural response for three different soil site classes. The responses of adjacent structures change markedly due to the spatial variation of ground motions, and the effect can be more significant for rock sites than for clay sites.
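The generation step described above (spatially correlated motions compatible with a target PSDF) can be sketched for two stations using the spectral-representation approach, as in the Python example below; it evaluates the cosine superposition directly (an FFT is simply a faster way to compute the same sum), and the target spectrum, coherency model and all parameter values are generic placeholders rather than those used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frequency discretisation (placeholder values).
n_freq, d_omega = 512, 0.1                  # number of frequencies, rad/s step
omega = np.arange(1, n_freq + 1) * d_omega

def target_psd(w):
    # Placeholder one-sided target PSDF (Kanai-Tajimi-like shape with
    # arbitrary parameters), standing in for the study's PSDF.
    wg, zg, s0 = 15.0, 0.6, 1.0
    return s0 * (wg**4 + 4 * zg**2 * wg**2 * w**2) / \
           ((wg**2 - w**2)**2 + 4 * zg**2 * wg**2 * w**2)

def coherency(w, dist, v_app=500.0, alpha=2e-4):
    # Assumed lagged coherency: exponential incoherence decay combined with a
    # wave-passage phase shift for a separation dist (m).
    return np.exp(-alpha * dist * w**2) * np.exp(-1j * w * dist / v_app)

dist = 50.0                                  # station separation (m)
t = np.linspace(0.0, 2 * np.pi / d_omega, 2 * n_freq, endpoint=False)
a = np.zeros((2, t.size))                    # acceleration histories, 2 stations
phi = rng.uniform(0, 2 * np.pi, size=(2, n_freq))   # independent random phases

for k, w in enumerate(omega):
    S, g = target_psd(w), coherency(w, dist)
    # Cholesky factor of the 2x2 cross-spectral matrix at this frequency.
    H = np.linalg.cholesky(np.array([[S, g * S], [np.conj(g) * S, S]]))
    amp = np.sqrt(2.0 * d_omega)
    a[0] += amp * np.abs(H[0, 0]) * np.cos(w * t + phi[0, k])
    a[1] += amp * (np.abs(H[1, 0]) * np.cos(w * t - np.angle(H[1, 0]) + phi[0, k])
                   + np.abs(H[1, 1]) * np.cos(w * t + phi[1, k]))

print("peak accelerations at the two stations:", a[0].max(), a[1].max())
```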