886 results for Direct Analysis Method


Relevance:

100.00%

Publisher:

Abstract:

The assessment of the direct and indirect requirements for energy is known as embodied energy analysis. For buildings, the direct energy includes that used primarily on site, while the indirect energy includes primarily the energy required for the manufacture of building materials. This thesis is concerned with the completeness and reliability of embodied energy analysis methods. Previous methods tend to address either one of these issues, but not both at the same time. Industry-based methods are incomplete. National statistical methods, while comprehensive, are a ‘black box’ and are subject to errors. A new hybrid embodied energy analysis method is derived to optimise the benefits of previous methods while minimising their flaws. In industry-based studies, known as ‘process analyses’, the energy embodied in a product is traced laboriously upstream by examining the inputs to each preceding process towards raw materials. Process analyses can be significantly incomplete, due to increasing complexity. The other major embodied energy analysis method, ‘input-output analysis’, comprises the use of national statistics. While the input-output framework is comprehensive, many inherent assumptions make the results unreliable. Hybrid analysis methods involve the combination of the two major embodied energy analysis methods discussed above, either based on process analysis or input-output analysis. The intention in both hybrid analysis methods is to reduce errors associated with the two major methods on which they are based. However, the problems inherent to each of the original methods tend to remain, to some degree, in the associated hybrid versions. Process-based hybrid analyses tend to be incomplete, due to the exclusions associated with the process analysis framework. However, input-output-based hybrid analyses tend to be unreliable because the substitution of process analysis data into the input-output framework causes unwanted indirect effects. A key deficiency in previous input-output-based hybrid analysis methods is that the input-output model is a ‘black box’, since important flows of goods and services with respect to the embodied energy of a sector cannot be readily identified. A new input-output-based hybrid analysis method was therefore developed, requiring the decomposition of the input-output model into mutually exclusive components (ie, ‘direct energy paths’). A direct energy path represents a discrete energy requirement, possibly occurring one or more transactions upstream from the process under consideration. For example, the energy required directly to manufacture the steel used in the construction of a building would represent a direct energy path of one non-energy transaction in length. A direct energy path comprises a ‘product quantity’ (for example, the total tonnes of cement used) and a ‘direct energy intensity’ (for example, the energy required directly for cement manufacture, per tonne). The input-output model was decomposed into direct energy paths for the ‘residential building construction’ sector. It was shown that 592 direct energy paths were required to describe 90% of the overall total energy intensity for ‘residential building construction’. By extracting direct energy paths using yet smaller threshold values, they were shown to be mutually exclusive. Consequently, the modification of direct energy paths using process analysis data does not cause unwanted indirect effects. 
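To make the hybrid calculation implied by this decomposition concrete, the following minimal Python sketch sums direct energy paths (each a product quantity multiplied by a direct energy intensity), substitutes case-specific process analysis data where it is available, and keeps the input-output remainder for everything below the extraction threshold. The class, field names and figures are illustrative assumptions, not the thesis software.

```python
from dataclasses import dataclass

@dataclass
class DirectEnergyPath:
    label: str                       # e.g. "cement -> concrete -> residential building construction"
    product_quantity: float          # e.g. tonnes of cement per functional unit (hypothetical)
    direct_energy_intensity: float   # e.g. GJ directly required per tonne (hypothetical)

    def energy(self) -> float:
        return self.product_quantity * self.direct_energy_intensity

def hybrid_embodied_energy(paths, io_total, process_data):
    """Substitute case-specific process analysis data into individual direct energy
    paths where available; keep input-output values for everything else, including
    the remainder of paths below the extraction threshold."""
    io_paths_total = sum(p.energy() for p in paths)
    modified = 0.0
    for p in paths:
        if p.label in process_data:
            quantity, intensity = process_data[p.label]   # measured for the building
            modified += quantity * intensity
        else:
            modified += p.energy()
    remainder = io_total - io_paths_total                 # unextracted paths (input-output only)
    return modified + remainder
```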
A non-standard individual residential building was then selected to demonstrate the benefits of the new input-output-based hybrid analysis method in cases where the products of a sector may not be similar. Particular direct energy paths were modified with case-specific process analysis data. Product quantities and direct energy intensities were derived and used to modify some of the direct energy paths. The intention of this demonstration was to determine whether 90% of the total embodied energy calculated for the building could comprise the process analysis data normally collected for the building. However, it was found that only 51% of the total comprised normally collected process analysis data. The integration of process analysis data with 90% of the direct energy paths by value was unsuccessful because:
• typically only one of the direct energy path components was modified using process analysis data (ie, either the product quantity or the direct energy intensity);
• of the complexity of the paths derived for ‘residential building construction’; and
• of the lack of reliable and consistent process analysis data from industry, for both product quantities and direct energy intensities.
While the input-output model used was the best available for Australia, many errors were likely to be carried through to the direct energy paths for ‘residential building construction’. Consequently, both the value and relative importance of the direct energy paths for ‘residential building construction’ were generally found to be a poor model for the demonstration building. This was expected. Nevertheless, in the absence of better data from industry, the input-output data is likely to remain the most appropriate for completing the framework of embodied energy analyses of many types of products, even in non-standard cases. ‘Residential building construction’ was one of the 22 most complex Australian economic sectors (ie, comprising those requiring between 592 and 3215 direct energy paths to describe 90% of their total energy intensities). Consequently, for the other 87 non-energy sectors of the Australian economy, the input-output-based hybrid analysis method is likely to produce more reliable results than those calculated for the demonstration building using the direct energy paths for ‘residential building construction’. For sectors more complex than ‘residential building construction’, the new input-output-based hybrid analysis method derived here allows available process analysis data to be integrated with the input-output data in a comprehensive framework. The proportion of the result comprising the more reliable process analysis data can be calculated and used as a measure of the reliability of the result for the product, or part of the product, being analysed (for example, a building material or component). To ensure that future applications of the new input-output-based hybrid analysis method produce reliable results, new sources of process analysis data are required, including for processes such as services (for example, ‘banking’) and processes involving the transformation of basic materials into complex products (for example, steel and copper into an electric motor).
Even considering the limitations of the demonstration described above, the new input-output-based hybrid analysis method developed achieved the aim of the thesis: to develop a new embodied energy analysis method that allows reliable process analysis data to be integrated into the comprehensive, yet unreliable, input-output framework.

Plain language summary

Embodied energy analysis comprises the assessment of the direct and indirect energy requirements associated with a process. For example, the construction of a building requires the manufacture of steel structural members, and thus indirectly requires the energy used directly and indirectly in their manufacture. Embodied energy is an important measure of ecological sustainability because energy is used in virtually every human activity and many of these activities are interrelated. This thesis is concerned with the relationship between the completeness of embodied energy analysis methods and their reliability. Previous industry-based methods, while reliable, are incomplete. Previous national statistical methods, while comprehensive, are a ‘black box’ subject to errors. A new method is derived, involving the decomposition of the comprehensive national statistical model into components that can be modified discretely using the more reliable industry data, and is demonstrated for an individual building. The demonstration failed to integrate enough industry data into the national statistical model, due to the unexpected complexity of the national statistical data and the lack of available industry data regarding energy and non-energy product requirements. These unique findings highlight the flaws in previous methods. Reliable process analysis and input-output data are required, particularly for those processes that could not be examined in the demonstration of the new embodied energy analysis method. This includes the energy requirements of services sectors, such as banking, and of processes involving the transformation of basic materials into complex products, such as refrigerators. The application of the new method to less complex products, such as individual building materials or components, is likely to be more successful than the residential building demonstration.

Relevance:

100.00%

Publisher:

Abstract:

Construction delay disputes now often end up in arbitration, where delay experts appointed by the parties advise the tribunal on their extension-of-time entitlements. For this purpose, the identification and quantification of concurrent and pacing delays are integral to resolving these disputes with a proper delay analysis methodology. The aim of the study is therefore threefold. Firstly, the available literature on concurrent and pacing delays is analyzed in detail to establish principles for their evaluation. Secondly, a robust delay analysis methodology called the ‘windows impact/update method’, often used by experts for the effective quantification of concurrent and pacing delays, is explained. This methodology is an improved version of time impact analysis and conventional windows analysis. For better demonstration, the explanation of the methodology is supported by a typical case study analysis. Finally, the principles of concurrency and pacing established in the literature review are applied to the case study results to show that the analysis method is applicable to any type of delay dispute. The study demonstrates the effectiveness of the windows impact/update method for quantifying concurrent and pacing delays.
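As a rough illustration of the window-by-window bookkeeping behind such an analysis (not the authors' implementation, and with invented dates), the sketch below compares the projected completion of the updated programme before and after impacting each window's owner and contractor delay events, and flags windows where both parties cause slippage as potentially concurrent.

```python
# Simplified illustration of a window-by-window delay review. All dates are
# hypothetical placeholders, not values from the case study.
from datetime import date

windows = [
    # (window end, completion before impacting, completion after owner events,
    #  completion after owner + contractor events)
    (date(2023, 3, 31), date(2024, 1, 10), date(2024, 1, 24), date(2024, 1, 24)),
    (date(2023, 6, 30), date(2024, 1, 24), date(2024, 1, 24), date(2024, 2, 14)),
]

for end, before, after_owner, after_all in windows:
    owner_delay = (after_owner - before).days
    contractor_delay = (after_all - after_owner).days
    concurrent = owner_delay > 0 and contractor_delay > 0
    print(f"window to {end}: owner {owner_delay} d, "
          f"contractor {contractor_delay} d, potentially concurrent: {concurrent}")
```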

Relevance:

100.00%

Publisher:

Abstract:

In this work, the crystallization rates and spherulitic growth rates of miscible blends of poly(vinylidene fluoride) (PVDF) and acrylic rubber (ACM) were determined using differential scanning calorimetry (DSC), real-time FTIR, and optical microscopy. FTIR results suggest that blending does not induce polymorphic crystalline forms of PVDF. SAXS data demonstrate the formation of an interlamellar structure after blending. The fold surface free energy (σe) was analyzed and compared using different thermal analysis techniques. The isothermal crystallization curves obtained using real-time FTIR and DSC were evaluated with two different methods: the crystallization half-time (t1/2) or the Avrami equation. While the Avrami equation is more widespread and precise, both analytical methods gave similar fold surface free energy values. However, the direct optical method of measuring spherulitic growth rate yields σe values 30-50% lower than those obtained from the overall crystallization rate data. In all cases, the σe values were found to increase with increasing amorphous ACM phase content, regardless of the analytical method.
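For context, the Avrami analysis mentioned above describes the relative crystallinity as X(t) = 1 - exp(-k t^n), from which the half-time follows as t1/2 = (ln 2 / k)^(1/n). The sketch below fits the linearized form to illustrative data, not the paper's measurements.

```python
# Sketch of an Avrami analysis of isothermal crystallization data.
# Linearizing X(t) = 1 - exp(-k t^n) gives ln(-ln(1 - X)) = ln k + n ln t,
# so n and k follow from a straight-line fit; t_1/2 = (ln 2 / k)**(1/n).
import numpy as np

t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])        # min, hypothetical
X = np.array([0.05, 0.18, 0.52, 0.88, 0.99])     # relative crystallinity, hypothetical

y = np.log(-np.log(1.0 - X))
n, ln_k = np.polyfit(np.log(t), y, 1)
k = np.exp(ln_k)
t_half = (np.log(2.0) / k) ** (1.0 / n)
print(f"Avrami exponent n = {n:.2f}, rate constant k = {k:.3g}, t1/2 = {t_half:.2f} min")
```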

Relevance:

100.00%

Publisher:

Abstract:

When an accurate hydraulic network model is available, direct modeling techniques are very straightforward and reliable for on-line leakage detection and localization in a large class of water distribution networks. In general, this type of technique, based on analytical models, can be seen as an application of the well-known fault detection and isolation theory for complex industrial systems. Nonetheless, the assumption of single-leak scenarios is usually made, considering a certain leak size pattern, which may not hold in real applications. Upgrading a leak detection and localization method based on a direct modeling approach to handle multiple-leak scenarios can be, on one hand, quite straightforward but, on the other hand, computationally demanding for a large class of water distribution networks, given the huge number of potential water loss hotspots. This paper presents a leakage detection and localization method suitable for multiple-leak scenarios and a large class of water distribution networks. The method can be seen as an upgrade of the direct modeling approach mentioned above, in which a global search method based on genetic algorithms has been integrated in order to estimate the network water loss hotspots and the size of the leaks. This is a combined inverse/direct modeling method which tries to benefit from both approaches: on one hand, the exploration capability of genetic algorithms to estimate network water loss hotspots and leak sizes and, on the other hand, the straightforwardness and reliability offered by an accurate hydraulic model to assess the network areas around the estimated hotspots. The application of the resulting method to a DMA of the Barcelona water distribution network is presented and discussed. The results show that leakage detection and localization under multiple-leak scenarios can be performed efficiently following a simple procedure.
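A generic sketch of the inverse (genetic algorithm) step described above is given below; simulate_pressures() is a toy stand-in for the hydraulic network model, and all parameter values are placeholders rather than the paper's settings. Each candidate encodes a fixed number of leak locations and sizes, and its fitness is the mismatch between measured and model-predicted sensor pressures.

```python
import random

N_NODES, N_LEAKS, POP, GENERATIONS = 200, 2, 40, 60
SENSORS = [10, 60, 120, 180]          # indices of pressure sensors (placeholder layout)

def simulate_pressures(leaks):
    """Toy stand-in for the hydraulic network model (e.g. an EPANET run): each leak
    depresses pressure at a sensor in proportion to its size and proximity."""
    return [50.0 - sum(size / (1.0 + abs(node - s)) for node, size in leaks)
            for s in SENSORS]

def fitness(candidate, measured):
    predicted = simulate_pressures(candidate)
    return sum((m - p) ** 2 for m, p in zip(measured, predicted))

def random_candidate():
    return [(random.randrange(N_NODES), random.uniform(0.1, 5.0)) for _ in range(N_LEAKS)]

def mutate(candidate):
    i = random.randrange(N_LEAKS)
    node, size = candidate[i]
    new = list(candidate)
    new[i] = (random.randrange(N_NODES), max(0.1, size + random.gauss(0.0, 0.5)))
    return new

def localize(measured):
    population = [random_candidate() for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=lambda c: fitness(c, measured))
        survivors = population[:POP // 2]                            # elitist selection
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(population, key=lambda c: fitness(c, measured))

# 'measured' pressures generated from two hypothetical leaks at nodes 45 and 150
measured = simulate_pressures([(45, 2.0), (150, 1.0)])
print(localize(measured))   # estimated leak nodes and sizes
```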

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

A tungsten carbide coating on the integrated platform of a transversely heated graphite atomizer (THGA®), used together with Pd(NO3)2 + Mg(NO3)2 as modifier, is proposed for the direct determination of lead in vinegar by graphite furnace atomic absorption spectrometry. The optimized heating program (temperature, ramp time, hold time) of the atomizer involved a drying stage (110 °C, 5 s, 30 s; 130 °C, 5 s, 30 s), pyrolysis stage (1000 °C, 15 s, 30 s), atomization stage (1800 °C, 0 s, 5 s) and clean-out stage (2450 °C, 1 s, 3 s). For 10 µL of vinegar delivered into the atomizer and calibration with working standard solutions (2.5-20.0 µg L-1 Pb) in 0.2% (v/v) HNO3, an analytical curve with good linear correlation (r = 0.9992) was established. The characteristic mass was 40 pg Pb and the lifetime of the tube was around 730 firings. The limit of detection (LOD) was 0.4 µg L-1 and the relative standard deviations (n = 12) were typically <8% for a sample containing 25 µg L-1 Pb. Accuracy of the proposed method was checked by direct analysis of 23 vinegar samples. A paired t-test showed that the results were in agreement, at the 95% confidence level, with those obtained for acid-digested vinegar samples. The Pb levels varied from 2.8 to 32.4 µg L-1. Accuracy was also checked by means of addition/recovery tests; recovered values varied from 90% to 110%. Additionally, two certified reference materials were analyzed and the results were in agreement with the certified values at the 95% confidence level.
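The accuracy check described above rests on a paired t-test between the direct results and those for acid-digested samples. A minimal sketch with hypothetical values, not the paper's measurements:

```python
# Paired comparison of Pb found by direct analysis vs. after acid digestion for
# the same vinegar samples, at the 95% confidence level. Values are invented.
from scipy import stats

direct   = [3.1, 12.4, 7.8, 25.6, 18.9]   # µg L-1 Pb, direct method (hypothetical)
digested = [3.0, 12.9, 7.5, 26.1, 18.4]   # µg L-1 Pb, after acid digestion (hypothetical)

t_stat, p_value = stats.ttest_rel(direct, digested)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No significant difference at the 95% confidence level.")
```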

Relevance:

100.00%

Publisher:

Abstract:

A method has been developed for the direct determination of As in sugar by graphite furnace atomic absorption spectrometry with a transversely heated graphite atomizer (end-capped THGA) and longitudinal Zeeman-effect background correction. The thermal behavior of As during the pyrolysis and atomization steps was investigated in sugar solutions containing 0.2% (v/v) HNO3 using Pd, Ni, and a mixture of Pd + Mg as the chemical modifiers. A 60-µL aliquot of 8% (m/v) sugar solution in 0.2% (v/v) HNO3 was dispensed into a graphite tube pre-heated at 70 °C. Linear analytical curves were obtained in the 0.25-1.50 µg L-1 As range. Using 5 µg Pd and a first pyrolysis step at 600 °C assisted by air for 40 s, the formation of a large amount of carbonaceous residue inside the atomizer was avoided. The characteristic mass was calculated as 24 pg As and the lifetime of the graphite tube was around 280 firings. The limit of detection (LOD) based on integrated absorbance was 0.08 µg L-1 (4.8 pg As) and the typical relative standard deviation (n = 12) was 7% for a sugar solution containing 0.5 µg L-1. Recoveries of As added to sugar samples varied from 86 to 98%. The accuracy was checked by the direct analysis of eight sugar samples. A paired t-test showed that the results were in agreement at the 95% confidence level with those obtained for acid-digested sugar samples by GFAAS.
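For reference, the two figures of merit quoted above follow common GFAAS conventions: the characteristic mass is the analyte mass producing 0.0044 s of integrated absorbance, and a detection limit can be taken as three times the blank standard deviation divided by the calibration slope. The sketch below reproduces the reported values from hypothetical raw numbers chosen only for illustration.

```python
# Characteristic mass and LOD under common GFAAS conventions; all raw values
# are hypothetical and chosen to be consistent with the abstract's figures.
sample_volume_uL = 60.0

# characteristic mass from a hypothetical 100 pg As injection
injected_mass_pg = 100.0
integrated_absorbance = 0.0183           # A*s, hypothetical
m0_pg = injected_mass_pg * 0.0044 / integrated_absorbance
print(f"characteristic mass ~ {m0_pg:.0f} pg As")

# detection limit from calibration slope and blank noise (hypothetical)
slope = 0.060                             # A*s per (µg L-1)
sd_blank = 0.0016                         # A*s
lod_ug_per_L = 3 * sd_blank / slope
print(f"LOD ~ {lod_ug_per_L:.2f} µg L-1 "
      f"({lod_ug_per_L * sample_volume_uL:.1f} pg As in {sample_volume_uL:.0f} µL)")
```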

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

AgSIE was used for the direct analysis of folic acid (FA), with a detection limit of 6.8 × 10-10 mol L-1 and a lower level of quantitation of 2.3 × 10-8 mol L-1. The analysis of fresh and processed fruits was performed without any sample pretreatment. In strawberry and acerola juices, FA concentrations were below the method detection limit. FA was detectable in peach (77.7 ± 0.4 mg L-1 and 64.4 ± 0.5 mg L-1), Persian lime (45.4 ± 0.7 mg L-1), Hawaii pineapple (66.2 ± 0.4 mg L-1), pear pineapple (35.3 ± 0.6 mg L-1), cashew (54.4 ± 0.5 mg L-1), passion fruit (73.2 ± 0.3 mg L-1), and apple (84.4 ± 0.5 mg L-1).

Relevance:

100.00%

Publisher:

Abstract:

This thesis contributes to the analysis and design of printed reflectarray antennas. The main part of the work focuses on the analysis of dual offset antennas comprising two reflectarray surfaces, one acting as sub-reflector and the other as main reflector. These configurations introduce additional complexity in several respects compared to conventional dual offset reflectors; however, they offer many degrees of freedom that can be used to improve the electrical performance of the antenna. The thesis is organized in four parts: the development of an analysis technique for dual-reflectarray antennas, a preliminary validation of that methodology using equivalent reflector systems as reference antennas, a more rigorous validation of the software tool by manufacturing and testing a dual-reflectarray antenna demonstrator, and the practical design of dual-reflectarray systems for applications that show the potential of this kind of configuration to scan the beam and to generate contoured beams.

In the first part, a general tool has been implemented to analyze high-gain antennas constructed from two flat reflectarray structures. The classic reflectarray analysis, based on MoM under the local periodicity assumption, is used for both the sub- and main reflectarrays, taking into account the angle of incidence on each reflectarray element. The incident field on the main reflectarray is computed taking into account the field radiated by all the elements on the sub-reflectarray. Two approaches have been developed: one employs a simple approximation to reduce the computer run time, and the other does not but, in many cases, offers improved accuracy. The approximation is based on computing the reflected field on each element of the main reflectarray only once for all the fields radiated by the sub-reflectarray elements, assuming that the response will be the same because the only difference is a small variation in the angle of incidence. This approximation is very accurate when the elements of the main reflectarray show a relatively small sensitivity to the angle of incidence. An extension of the analysis technique has been implemented to study dual-reflectarray antennas comprising a main reflectarray printed on a parabolic, or generally curved, surface. In many dual-reflectarray configurations, the reflectarray elements are in the near field of the feed horn. To account for the near field radiated by the horn, the incident field on each reflectarray element is computed using a spherical mode expansion. In this region the angles of incidence are moderately wide, and they are considered in the analysis of the reflectarray to better calculate the actual incident field on the sub-reflectarray elements. This technique improves the accuracy of the predicted co- and cross-polar patterns and antenna gain with respect to ideal feed models.

In the second part, as a preliminary validation, the proposed analysis method has been used to design a dual-reflectarray antenna that emulates previous dual-reflector antennas in Ku- and W-bands including a reflectarray as sub-reflector. The results for the dual-reflectarray antenna compare very well with those of the parabolic reflector and reflectarray sub-reflector; radiation patterns, antenna gain and efficiency are practically the same when the main parabolic reflector is substituted by a flat reflectarray.
The results show that the gain is reduced by only a few tenths of a dB as a result of the ohmic losses in the reflectarray. The phase adjustment on two surfaces provided by the dual-reflectarray configuration can be used to improve the antenna performance in applications requiring multiple beams, beam scanning or shaped beams.

In the third part, a very challenging dual-reflectarray antenna demonstrator has been designed, manufactured and tested for a more rigorous validation of the analysis technique presented. The proposed antenna configuration has the feed, the sub-reflectarray and the main reflectarray in the near field of one another, so that conventional far-field approximations are not suitable for the analysis of such an antenna. This geometry is used to benchmark the proposed analysis tool under very stringent conditions. Some aspects of the proposed analysis technique that improve the accuracy of the analysis are also discussed. These improvements include a novel method to reduce the inherent cross-polarization introduced mainly by grounded patch arrays. It is shown that cross-polarization in offset reflectarrays can be significantly reduced by properly adjusting the patch dimensions in the reflectarray so as to produce an overall cancellation of the cross-polarization. The dimensions of the patches are adjusted not only to provide the phase distribution required to shape the beam, but also to exploit the zero crossings of the cross-polarization components.

The last part of the thesis deals with direct applications of the technique described. It is directly applicable to the design of contoured-beam antennas for DBS applications, where the cross-polarisation requirements are very stringent. The beam shaping is achieved by synthesising the phase distribution on the main reflectarray while the sub-reflectarray emulates an equivalent hyperbolic sub-reflector. Dual-reflectarray antennas also offer the ability to scan the beam over small angles about boresight. Two possible architectures for a Ku-band antenna are described, based on a dual planar reflectarray configuration that provides electronic beam scanning in a limited angular range. In the first architecture, the beam scanning is achieved by introducing phase control in the elements of the sub-reflectarray while the main reflectarray is passive. A second alternative is also studied, in which the beam scanning is produced using 1-bit control on the main reflectarray, while a passive sub-reflectarray is designed to provide a large focal distance within a compact configuration. The system aims to provide a solution for bi-directional satellite links for emergency communications. In both proposed architectures, the objective is a compact optical arrangement that is simple to fold and deploy.
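A heavily simplified sketch of the incident-field bookkeeping and of the run-time approximation described in the first part of the abstract is given below; the geometry, element response and frequency are placeholders, not the thesis code. The field incident on each main-reflectarray element is the superposition of the fields radiated by all sub-reflectarray elements, and the element's complex reflection coefficient is evaluated once, for a nominal incidence angle, instead of once per contributing sub-reflectarray element.

```python
import numpy as np

rng = np.random.default_rng(0)
k0 = 2 * np.pi / 0.0107                         # free-space wavenumber, ~28 GHz assumed

sub_pos  = rng.uniform(0.0, 0.1, (50, 3))       # placeholder sub-reflectarray element positions (m)
main_pos = rng.uniform(0.0, 0.3, (80, 3)) + np.array([0.0, 0.0, 0.5])
sub_fields = np.exp(1j * 2 * np.pi * rng.uniform(size=50))   # fields re-radiated by the sub elements

def reflection_coefficient(theta_inc):
    """Placeholder for the MoM / local-periodicity element response."""
    return np.exp(-1j * (2.0 + 0.3 * theta_inc))

reflected = np.zeros(len(main_pos), dtype=complex)
for m, r_m in enumerate(main_pos):
    vectors = r_m - sub_pos                                   # sub element -> main element
    dists = np.linalg.norm(vectors, axis=1)
    incident = np.sum(sub_fields * np.exp(-1j * k0 * dists) / dists)   # spherical-wave superposition
    theta_nominal = np.arccos(np.mean(vectors[:, 2] / dists))          # one nominal incidence angle
    reflected[m] = reflection_coefficient(theta_nominal) * incident    # element response applied once
```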

Relevance:

100.00%

Publisher:

Abstract:

Reversed-phase high-performance liquid chromatographic methods for the analysis of haloacetic acids have been developed and compared to conventional direct detection methods. Haloacetic acids commonly found in drinking water, including monochloro-, dichloro-, bromo-, iodo- and trichloroacetic acids, have been studied. The ion-pairing agent benzyltributylammonium ion was studied in detail using indirect UV and indirect fluorescence detection. Five different competing ions were evaluated to decrease analysis times and lower the detection limit of this new method. The direct detection method utilized an ammonium sulfate buffer and UV detection, yielding a detection limit of 100 ppb. The indirect method developed has the advantage of being able to simultaneously analyze UV- and non-UV-absorbing ions and molecules, but requires long equilibration times and demonstrated lower sensitivity than the direct method.

Relevance:

100.00%

Publisher:

Abstract:

In this work, desorption/ionization mass spectrometry was employed for the analysis of sugars and small platform chemicals that are common intermediates in biomass transformation reactions. Specifically, matrix-assisted laser desorption/ionization (MALDI) and desorption electrospray ionization (DESI) mass spectrometric techniques were employed as alternatives to traditional chromatographic methods. Ionic liquid matrices (ILMs) were designed based on traditional solid MALDI matrices (2,5-dihydroxybenzoic acid (DHB) and α-cyano-4-hydroxycinnamic acid (CHCA)) and 1,3-dialkylimidazolium ionic liquids ([BMIM]Cl, [EMIM]Cl, and [EMIM]OAc) that have been employed as reaction media for biomass transformation reactions such as the conversion of carbohydrates to valuable platform chemicals. Although two new ILMs were synthesized ([EMIM][DHB] and [EMIM][CHCA] from [EMIM]OAc), chloride-containing ILs did not react with matrices and resulted in mixtures of IL and matrix in solution. Compared to the parent solid matrices, much less matrix interference was observed in the low mass region of the mass spectrum (< 500 Da) using each of the IL-matrices. Furthermore, the formation of a true ILM (i.e. a new ion pair) does not appear to be necessary for analyte ionization. MALDI sample preparation techniques were optimized based on the compatibility with analyte, IL and matrix. ILMs and IL-matrix mixtures of DHB allowed for qualitative analysis of glucose, fructose, sucrose and N-acetyl-D-glucosamine. Analogous CHCA-containing ILMs did not result in appreciable analyte signals under similar conditions. Small platform compounds such as 5-hydroxymethylfurfural (HMF) and levulinic acid were not detected by direct analysis using MALDI-MS. Furthermore, sugar analyte signals were only detected at relatively high matrix:IL:analyte ratios (1:1:1) due to significant matrix and analyte suppression by the IL ions. Therefore, chemical modification of analytes with glycidyltrimethylammonium chloride (GTMA) was employed to extend this method to quantitative applications. Derivatization was accomplished in aqueous IL solutions with fair reaction efficiencies (36.9 – 48.4 % glucose conversion). Calibration curves of derivatized glucose-GTMA yielded good linearity in all solvent systems tested, with decreased % RSDs of analyte ion signals in IL solutions as compared to purely aqueous systems (1.2 – 7.2 % and 4.2 – 8.7 %, respectively). Derivatization resulted in a substantial increase in sensitivity for MALDI-MS analyses: glucose was reliably detected at IL:analyte ratios of 100:1 (as compared to 1:1 prior to derivatization). Screening of all test analytes resulted in appreciable analyte signals in MALDI-MS spectra, including both HMF and levulinic acid. Using appropriate internal standards, calibration curves were constructed and this method was employed for monitoring a model dehydration reaction of fructose to HMF in [BMIM]Cl. Calibration curves showed wide dynamic ranges (LOD – 100 ng fructose/μg [BMIM]Cl, LOD – 75 ng HMF/μg [BMIM]Cl) with correlation coefficients of 0.9973 (fructose) and 0.9931 (HMF). LODs were estimated from the calibration data to be 7.2 ng fructose/μg [BMIM]Cl and 7.5 ng HMF/μg [BMIM]Cl, however relatively high S/N ratios at these concentrations indicate that these values are likely overestimated. Application of this method allowed for the rapid acquisition of quantitative data without the need for prior separation of analyte and IL. 
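A minimal sketch of the internal-standard calibration and detection-limit estimate described above, using invented numbers rather than the thesis data: the analyte-to-internal-standard signal ratio is regressed against the amount of analyte per µg of [BMIM]Cl, and an LOD is estimated from the calibration residuals.

```python
import numpy as np

amount = np.array([10, 25, 50, 75, 100], dtype=float)   # ng fructose per µg [BMIM]Cl (hypothetical)
ratio  = np.array([0.21, 0.50, 1.02, 1.49, 2.03])        # I(analyte) / I(internal standard) (hypothetical)

slope, intercept = np.polyfit(amount, ratio, 1)
predicted = slope * amount + intercept
r = np.corrcoef(amount, ratio)[0, 1]
s_res = np.sqrt(np.sum((ratio - predicted) ** 2) / (len(amount) - 2))
lod = 3.3 * s_res / slope                                # common 3.3*s/slope convention

print(f"slope = {slope:.4f}, r = {r:.4f}, estimated LOD ~ {lod:.1f} ng/µg [BMIM]Cl")
```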
Finally, small molecule platform chemicals HMF and levulinic acid were qualitatively analyzed by DESI-MS. Both HMF and levulinic acid were easily ionized and the corresponding molecular ions were easily detected in the presence of 10 – 100 times IL, without the need for chemical modification prior to analysis. DESI-MS analysis of ILs in positive and negative ion modes resulted in few ions in the low mass region, showing great potential for the analysis of small molecules in IL media.

Relevance:

100.00%

Publisher:

Abstract:

The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art of the methods used to evaluate the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for the direct analysis of soil-structure interaction.

This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
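A highly simplified numpy sketch of the processing chain described above (Hankel matrices of sparse floor-acceleration measurements, SVD, interpolation to unmeasured floors, multiplication by the mass matrix) follows; matrix sizes, the linear interpolation and all numerical values are placeholder assumptions, not the dissertation's implementation.

```python
import numpy as np

def hankel(signal, rows):
    cols = len(signal) - rows + 1
    return np.array([signal[i:i + cols] for i in range(rows)])

n_floors, n_samples, rows, n_comp = 6, 2000, 64, 4
measured = [0, 2, 5]                                     # instrumented floor indices (placeholder)
acc = np.random.randn(len(measured), n_samples)          # measured floor accelerations (placeholder)

# stack Hankel blocks of the measured responses row-wise and take the SVD
H = np.vstack([hankel(a, rows) for a in acc])
_, _, Vt = np.linalg.svd(H, full_matrices=False)
comps = Vt[:n_comp]                                      # dominant temporal response components
cols = comps.shape[1]

# least-squares amplitude of each component at the measured floors, then
# interpolation to every floor (linear here; curvature-minimizing in the dissertation)
amps_measured = acc[:, :cols] @ np.linalg.pinv(comps)
amps_all = np.array([np.interp(range(n_floors), measured, amps_measured[:, j])
                     for j in range(n_comp)]).T

acc_all = amps_all @ comps                               # acceleration components at every floor
mass = np.diag(np.full(n_floors, 2.5e5))                 # floor masses in kg (placeholder)
inertial_forces = mass @ acc_all                         # recovered floor inertial forces
```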

Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating shear wave velocity profiles from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model of the hospital is checked by comparing the peak floor responses and the force-displacement relations within the isolation system obtained from OpenSees simulations to the recorded measurements. General explanations and implications, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations, are presented to address the effects of soil-structure interaction.

Relevance:

100.00%

Publisher:

Abstract:

Ancient starch analysis is a microbotanical method in which starch granules are extracted from archaeological residues and the botanical source is identified. The method is an important addition to established palaeoethnobotanical research, as it can reveal ancient microremains of starchy staples such as cereal grains and seeds. In addition, starch analysis can detect starch originating from underground storage organs, which are rarely discovered using other methods. Because starch is tolerant of acidic soils, unlike most organic matter, starch analysis can be successful in northern boreal regions. Starch analysis has potential in the study of cultivation, plant domestication, wild plant usage and tool function, as well as in locating activity areas at sites and discovering human impact on the environment. The aim of this study was to experiment with the starch analysis method in Finnish and Estonian archaeology by building a starch reference collection from cultivated and native plant species, by developing sampling, measuring and analysis protocols, by extracting starch residues from archaeological artefacts and soils, and by identifying their origin. The purpose of this experiment was to evaluate the suitability of the method for the study of subsistence strategies in prehistoric Finland and Estonia. A total of 64 archaeological samples were analysed from four Late Neolithic sites in Finland and Estonia, with radiocarbon dates ranging between 2904 calBC and 1770 calBC. The samples yielded starch granules, which were compared with the starch reference collection and descriptions in the literature. Cereal-type starch was identified from the Finnish Kiukainen culture site and from the Estonian Corded Ware site. The samples from the Finnish Corded Ware site yielded underground storage organ starch, which may be the first evidence of the use of rhizomes as food in Finland. No cereal-type starch was observed. Although the sample sets were limited, the experiment confirmed that starch granules have been preserved well in the archaeological material of Finland and Estonia, and that differences between subsistence patterns, as well as evidence of cultivation and wild plant gathering, can be discovered using starch analysis. By collecting large sample sets and addressing the three most important issues – preventing contamination, collecting adequate references and understanding taphonomic processes – starch analysis can substantially contribute to research on ancient subsistence in Finland and Estonia.

Relevance:

90.00%

Publisher:

Abstract:

Reliable budget/cost estimates for road maintenance and rehabilitation are subject to uncertainty and variability in road asset condition and in the characteristics of road users. The CRC CI research project 2003-029-C ‘Maintenance Cost Prediction for Road’ developed a method for assessing variation and reliability in budget/cost estimates for road maintenance and rehabilitation. The method is based on probability-based reliability theory and statistical methods. The next stage of the current project is to apply the developed method to predict maintenance/rehabilitation budgets/costs of large networks for strategic investment. The first task is to assess the variability of road data. This report presents initial results of the analysis in assessing the variability of road data. A case study of the analysis for dry non-reactive soil is presented to demonstrate the concept of analysing the variability of road data for large road networks. In assessing the variability of road data, large road networks were categorised into categories with common characteristics according to soil and climatic conditions, pavement conditions, pavement types, surface types and annual average daily traffic. The probability distributions, statistical means, and standard deviation values of asset conditions and annual average daily traffic for each category were quantified. The probability distributions and the statistical information obtained in this analysis will be used to assess the variation and reliability in budget/cost estimates at a later stage. Typically, mean values of the asset data for each category are used as input values for investment analysis, and the variability of the asset data within each category is not taken into account. The analysis demonstrated that the method can be used in practical applications, taking the variability of road data into account when analysing large road networks for maintenance/rehabilitation investment.
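A minimal sketch of the categorisation-and-variability step described above; the column names, illustrative records and the choice of a lognormal fit are assumptions for illustration, not the project's implementation.

```python
import pandas as pd
from scipy import stats

# hypothetical extract of the road asset database (column names are assumptions)
segments = pd.DataFrame({
    "soil_climate_zone": ["dry_non_reactive"] * 5 + ["wet_reactive"] * 5,
    "pavement_type":     ["granular"] * 10,
    "surface_type":      ["sprayed_seal"] * 10,
    "aadt_band":         ["<1000"] * 5 + ["1000-5000"] * 5,
    "roughness":         [2.1, 2.4, 2.2, 3.0, 2.7, 2.9, 3.4, 3.1, 3.8, 3.3],   # IRI, m/km
})

summary = []
for key, g in segments.groupby(["soil_climate_zone", "pavement_type",
                                "surface_type", "aadt_band"]):
    mu, sigma = g["roughness"].mean(), g["roughness"].std()
    shape, loc, scale = stats.lognorm.fit(g["roughness"])       # candidate distribution fit
    summary.append({"category": key, "n": len(g), "mean": round(mu, 2),
                    "std": round(sigma, 2), "lognorm": (shape, loc, scale)})

print(pd.DataFrame(summary))
```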