903 results for operational modal analysis
Abstract:
The purpose of this study was to synthesize an operational definition of education through an exploratory analysis of John Dewey's writings. Dewey's definition of education changed from 1896 to 1938. Findings suggest that schools promote more social and emotional learning through instructional activities such as service-learning.
Abstract:
This research focuses on the design and verification of inter-organizational controls. Instead of looking at a documentary procedure, which is the flow of documents and data among the parties, the research examines the underlying deontic purpose of the procedure, the so-called deontic process, and identifies control requirements to secure this purpose. The vision of the research is a formal theory for streamlining bureaucracy in business and government procedures. Underpinning most inter-organizational procedures are deontic relations, which concern the rights and obligations of the parties. When all parties trust each other, they are willing to fulfill their obligations and honor the counterparties' rights; thus controls may not be needed. The challenge lies in cases where trust cannot be assumed. In these cases, the parties need to rely on explicit controls to reduce their exposure to the risk of opportunism. However, at present there is no analytic approach or technique to determine which controls are needed for a given contracting or governance situation. The research proposes a formal method for deriving inter-organizational control requirements based on static analysis of deontic relations and dynamic analysis of deontic changes. The formal method takes a deontic process model of an inter-organizational transaction and certain domain knowledge as inputs to automatically generate the control requirements that a documentary procedure needs to satisfy in order to limit fraud potentials. The deliverables of the research include a formal representation, namely Deontic Petri Nets, which combines multiple modal logics and Petri nets for modeling deontic processes; a set of control principles that represents an initial formal theory on the relationships between deontic processes and documentary procedures; and a working prototype that uses a model-checking technique to identify fraud potentials in a deontic process and generate control requirements to limit them. Fourteen scenarios from two well-known international payment procedures, cash in advance and documentary credit, were used to test the prototype. The results showed that all control requirements stipulated in these procedures could be derived automatically.
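The pairing of Petri-net markings with deontic annotations can be pictured with a toy model. The sketch below is a minimal, hypothetical Python illustration, not the dissertation's Deontic Petri Net formalism: places labelled O(...) stand for obligations, and firing a transition discharges one obligation and may create another, as in a cash-in-advance scenario.

```python
# Minimal Petri-net sketch with deontic labels on places.
# This is an illustrative toy, not the Deontic Petri Net formalism
# from the abstract; all names here are hypothetical.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Cash-in-advance toy: the buyer is obliged to pay before the seller ships.
net = PetriNet({"O(buyer_pays)": 1})          # O(...) marks an obligation
net.add_transition("pay",  ["O(buyer_pays)"], ["paid", "O(seller_ships)"])
net.add_transition("ship", ["O(seller_ships)"], ["shipped"])

net.fire("pay")
net.fire("ship")
print(net.marking)   # all obligations discharged; goods paid and shipped
```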
Abstract:
Subtitle D of the Resource Conservation and Recovery Act (RCRA) requires a post-closure period of 30 years for non-hazardous wastes in landfills. Post-closure care (PCC) activities under Subtitle D include leachate collection and treatment, groundwater monitoring, inspection and maintenance of the final cover, and monitoring to ensure that landfill gas does not migrate off-site or into on-site buildings. The decision to reduce PCC duration requires the application of a performance-based methodology to Florida landfills. PCC should be based on whether the landfill poses a threat to human health or the environment. Historically, no risk-based procedure has been available to establish an early end to PCC. Landfill stability depends on a number of factors, including variables related to operations both before and after the closure of a landfill cell. Therefore, PCC decisions should be based on location-specific factors, operational factors, design factors, post-closure performance, end use, and risk analysis. The question of an appropriate PCC period for Florida's landfills requires in-depth case studies focusing on the analysis of performance data from closed landfills in Florida. Based on data availability, Davie Landfill was identified as the case study site for a case-by-case analysis of landfill stability. The performance-based PCC decision system developed by Geosyntec Consultants was used to assess site conditions and project PCC needs. The available data on leachate and gas quantity and quality, groundwater quality, and cap conditions were evaluated. The quality and quantity data for leachate and gas were analyzed to project the levels of pollutants in leachate and groundwater relative to the maximum contaminant level (MCL). In addition, the gas quantity was projected. A set of contaminants (including metals and organics) detected in groundwater was identified for health risk assessment. These contaminants were selected based on their detection frequency and levels in leachate and groundwater, and on their historical and projected trends. During the evaluations, a range of discrepancies and problems related to data collection and documentation were encountered and possible solutions proposed. Based on the results of the PCC performance evaluation integrated with the risk assessment, future PCC monitoring needs and sustainable waste management options were identified. According to these results, landfill gas monitoring can be terminated, while leachate and groundwater monitoring for parameters above the MCL and surveys of cap integrity should be continued. The parameters that cause longer monitoring periods can be eliminated in future sustainable landfills. In conclusion, the 30-year PCC period can be reduced for some landfill components based on their potential impacts on human health and the environment (HH&E).
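As a rough illustration of projecting leachate pollutant levels against an MCL, the sketch below fits a first-order decay to hypothetical monitoring data; the data, the decay model, and the MCL value are all assumptions for illustration, not the study's actual procedure.

```python
# Illustrative sketch: project a leachate contaminant concentration
# against its MCL using a first-order decay fit. The data and the
# decay model are hypothetical, not the study's actual procedure.

import numpy as np

years = np.array([1.0, 3.0, 5.0, 8.0, 12.0])   # years since closure
conc  = np.array([4.2, 3.1, 2.4, 1.6, 1.0])    # mg/L, monitoring data
mcl   = 0.5                                    # mg/L (hypothetical MCL)

# Fit ln(C) = ln(C0) - k*t by least squares.
k, ln_c0 = np.polyfit(years, np.log(conc), 1) * np.array([-1.0, 1.0])
c0 = np.exp(ln_c0)

# Year at which the projected concentration drops below the MCL.
t_mcl = np.log(c0 / mcl) / k
print(f"fitted C0={c0:.2f} mg/L, k={k:.3f}/yr; below MCL after ~{t_mcl:.0f} yr")
```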
Abstract:
Purpose: This paper aims to explore the role of internal and external knowledge-based linkages across the supply chain in achieving better operational performance. It investigates how knowledge is accumulated, shared, and applied to create organization-specific knowledge resources that increase and sustain the organization's competitive advantage. Design/methodology/approach: This paper uses a single case study with multiple, embedded units of analysis, and social network analysis (SNA), to demonstrate the impact of internal and external knowledge-based linkages across multiple tiers in the supply chain on organizational operational performance. The focal company of the case study is an Italian manufacturer supplying rubber components to European automotive enterprises. Findings: With the aid of SNA, the internal knowledge-based linkages can be mapped and visualized. We found that the most central nodes, those with the most connections to other nodes in the linkages, are the most crucial members in terms of knowledge exploration and exploitation within the organization. We also found that effective management of external knowledge-based linkages, such as those with the buyer company, competitors, universities, suppliers, and subcontractors, can help improve operational performance. Research limitations/implications: First, our hypothesis was tested on a single case. The analysis of multiple case studies using SNA would provide a deeper understanding of the relationship between knowledge-based linkages at all levels of the supply chain and the integration of knowledge. Second, the static nature of knowledge flows was studied in this research. Future research could also consider ongoing monitoring of dynamic linkages and the dynamic characteristics of knowledge flows. Originality/value: To the best of our knowledge, the phrase 'knowledge-based linkages' has not been used in the literature, and there has been little investigation of the relationship between the management of internal and external knowledge-based linkages and operational performance. To bridge this knowledge gap, this paper shows the importance of understanding the composition and characteristics of knowledge-based linkages and their knowledge nodes. In addition, this paper shows that effective management of knowledge-based linkages leads to the creation of new knowledge and improves organizations' operational performance.
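As a minimal illustration of the SNA step, the sketch below uses the networkx library to rank nodes by degree centrality, the kind of measure that identifies the "most central nodes" referred to above; the edge list is invented for illustration.

```python
# Hedged sketch: degree centrality flags the most connected actors in a
# knowledge-sharing network, in the spirit of the SNA the paper
# describes. The edge list below is invented for illustration.

import networkx as nx

edges = [
    ("R&D", "Production"), ("R&D", "Quality"), ("R&D", "Supplier_A"),
    ("Production", "Quality"), ("Production", "Subcontractor_B"),
    ("Sales", "R&D"), ("Buyer", "Sales"), ("University", "R&D"),
]
G = nx.Graph(edges)

# Nodes with the highest degree centrality are the candidate knowledge hubs.
centrality = nx.degree_centrality(G)
for node, c in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{node}: {c:.2f}")
```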
Abstract:
The full-scale base-isolated structure studied in this dissertation is the only base-isolated building on the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data processing methodologies that overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.
This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise, and extracts principal components from the singular value decomposition of this large matrix of linearly dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix, or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil-structure interaction model.
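A minimal sketch of the core matrix step, under simplifying assumptions (synthetic two-floor signals, and with the interpolation and force-recovery stages omitted), might look like this:

```python
# Sketch of one step of the method described above: build Hankel
# matrices from measured floor accelerations, stack them row-wise, and
# extract principal components via the SVD. Signals here are synthetic.

import numpy as np

def hankel(signal, rows):
    """Hankel matrix whose rows are shifted copies of the signal."""
    n = len(signal) - rows + 1
    return np.array([signal[i:i + n] for i in range(rows)])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
# Two measured floors: mixtures of two "modal" responses plus noise.
m1 = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 3.5 * t)
m2 = 0.5 * np.sin(2 * np.pi * 1.2 * t) - 0.8 * np.sin(2 * np.pi * 3.5 * t)
measurements = [m + 0.05 * rng.standard_normal(t.size) for m in (m1, m2)]

# Stack the Hankel matrices of all measured floors row-wise.
H = np.vstack([hankel(m, rows=50) for m in measurements])

# Principal components: leading right singular vectors scaled by the
# singular values; two dominate, matching the two underlying responses.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
print("leading singular values:", np.round(s[:4], 1))
components = s[:2, None] * Vt[:2]
```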
Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating the shear wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model of the hospital is checked by comparing the peak floor responses and force-displacement relations within the isolation system obtained from the OpenSees simulations against the recorded measurements. General explanations and implications, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations, are presented to address the effects of soil-structure interaction.
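For readers unfamiliar with the bearing model, the sketch below integrates a generic Bouc-Wen hysteresis under an imposed sinusoidal displacement; all parameter values are illustrative assumptions, not the calibrated OpenSees elements used in the dissertation.

```python
# Illustrative Bouc-Wen hysteresis loop for a lead rubber bearing under
# a sinusoidal displacement history. Parameters are invented for this
# sketch; the dissertation uses OpenSees' calibrated elastomeric
# bearing (Bouc-Wen) elements instead.

import numpy as np

k0, alpha = 2000.0, 0.10      # initial stiffness (kN/m), post-yield ratio
uy = 0.02                     # yield displacement (m)
beta, gamma, n = 0.5, 0.5, 1  # Bouc-Wen shape parameters

dt = 1e-3
t = np.arange(0.0, 4.0, dt)
u = 0.15 * np.sin(2 * np.pi * 0.5 * t)       # imposed displacement (m)
du = np.gradient(u, dt)

z = np.zeros_like(t)                         # hysteretic variable
for i in range(len(t) - 1):
    dz = du[i] / uy * (1 - (beta * np.sign(du[i] * z[i]) + gamma)
                       * abs(z[i]) ** n)
    z[i + 1] = z[i] + dz * dt

force = alpha * k0 * u + (1 - alpha) * k0 * uy * z   # bearing shear (kN)
print(f"peak force ~ {force.max():.0f} kN at peak displacement {u.max():.2f} m")
```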
Abstract:
Many studies have shown the considerable potential of remote-sensing-based methods for deriving estimates of lake water quality. However, the reliable application of these methods across time and space is complicated by the diversity of lake types, sensor configurations, and the multitude of different algorithms proposed. This study tested one operational and 46 empirical algorithms sourced from the peer-reviewed literature, each of which had individually shown potential for estimating lake water quality in the form of chlorophyll-a (algal biomass) and Secchi disc depth (SDD, water transparency) in independent studies. Nearly half (19) of the algorithms were unsuitable for use with the remote-sensing data available for this study. The remaining 28 were assessed using the Terra/Aqua satellite archive to identify the best-performing algorithms in terms of accuracy and transferability within the period 2001–2004 in four test lakes, namely Vänern, Vättern, Geneva, and Balaton. These lakes represent the broad continuum of large European lake types, varying in terms of eco-region (latitude/longitude and altitude), morphology, mixing regime, and trophic status. All algorithms were tested for each lake separately and for all lakes combined, to assess the degree of their applicability in ecologically different sites. None of the algorithms assessed in this study exhibited promise when all four lakes were combined into a single data set, and most algorithms performed poorly even for specific lake types. A chlorophyll-a retrieval algorithm originally developed for eutrophic lakes showed the most promising results (R² = 0.59) in oligotrophic lakes. Two SDD retrieval algorithms, one originally developed for turbid lakes and the other for lakes with various characteristics, exhibited promising results in relatively less turbid lakes (R² = 0.62 and 0.76, respectively). The results presented here highlight the complexity associated with remotely sensed lake water quality estimates and the high degree of uncertainty due to various limitations, including lake optical properties and the choice of methods.
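Many empirical algorithms of this kind are simple regressions on band ratios. A generic, hypothetical example (not one of the 47 algorithms tested in the study) might look like this:

```python
# Hedged sketch of a generic empirical band-ratio retrieval of the kind
# the study evaluates: chlorophyll-a regressed on a NIR/red reflectance
# ratio. Coefficients and reflectances are invented for illustration.

import numpy as np

# Surface reflectance in a red and a near-infrared band (hypothetical).
r_red = np.array([0.030, 0.041, 0.055, 0.062])
r_nir = np.array([0.010, 0.018, 0.033, 0.045])

a, b = 35.0, -8.0               # hypothetical calibration coefficients
chl = a * (r_nir / r_red) + b   # estimated chlorophyll-a, mg/m^3

print(np.round(chl, 1))
```

Because coefficients like a and b are calibrated to particular lake types, the poor transferability across lakes found in the study is unsurprising.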
Abstract:
Two years of harmonized aerosol number size distribution data from 24 European field monitoring sites have been analysed. The results give a comprehensive overview of European near-surface aerosol particle number concentrations and number size distributions between 30 and 500 nm dry particle diameter. Spatial and temporal distributions of aerosols in the particle sizes most important for climate applications are presented. We also analyse the annual, weekly, and diurnal cycles of the aerosol number concentrations, provide log-normal fitting parameters for median number size distributions, and give guidance notes for data users. Emphasis is placed on the usability of the results within the aerosol modelling community. We also show that the aerosol number concentrations of Aitken and accumulation mode particles (with 100 nm dry diameter as the cut-off between modes) are related, although there is significant variation in the ratios of the modal number concentrations. Different aerosol and station types are distinguished from these data, and the methodology has potential for further categorization of stations by aerosol number size distribution type. The European submicron aerosol was divided into characteristic types: Central European aerosol, characterized by single-mode median size distributions, unimodal number concentration histograms, and low variability in CCN-sized aerosol number concentrations; Nordic aerosol, with low number concentrations but pronounced seasonal variation, especially of Aitken mode particles; and mountain sites (altitude over 1000 m a.s.l.), with a strong seasonal cycle in aerosol number concentrations, high variability, and very low median number concentrations. Southern and Western European regions had fewer stations, which decreases the regional coverage of these results. Aerosol number concentrations over Britain and Ireland had very high variance, and there are indications of mixed air masses from several source regions; the Mediterranean aerosol exhibits high seasonality and a strong accumulation mode in the summer. The greatest concentrations were observed at the Ispra station in Northern Italy, with high accumulation mode number concentrations in the winter. The aerosol number concentrations at the Arctic station Zeppelin in Ny-Ålesund, Svalbard, also have a strong seasonal cycle, with greater concentrations of accumulation mode particles in winter and a dominant Aitken mode in summer, indicating more recently formed particles. The observed number concentrations did not show any statistically significant regional work-week or weekday-related variation. Analysis products are made openly available to the research community on a freely accessible internet site. The results give the modelling community a reliable, easy-to-use, and freely available comparison dataset of aerosol size distributions.
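As an illustration of the log-normal fitting parameters mentioned above, the sketch below fits a single log-normal mode to a synthetic median size distribution; the distribution and starting values are assumptions for illustration, not the paper's fitted results.

```python
# Sketch: fit a single log-normal mode to a median number size
# distribution dN/dlogDp, in the spirit of the fitting parameters the
# paper provides. The "observed" distribution below is synthetic.

import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(dp, n_tot, dp_g, sigma_g):
    """dN/dlogDp of one log-normal mode (dp in nm)."""
    return (n_tot / (np.sqrt(2 * np.pi) * np.log10(sigma_g))
            * np.exp(-(np.log10(dp / dp_g)) ** 2
                     / (2 * np.log10(sigma_g) ** 2)))

dp = np.logspace(np.log10(30), np.log10(500), 40)        # nm
truth = lognormal_mode(dp, 1800.0, 90.0, 1.8)            # cm^-3
rng = np.random.default_rng(1)
observed = truth * rng.lognormal(0.0, 0.05, dp.size)     # add scatter

popt, _ = curve_fit(lognormal_mode, dp, observed, p0=[1000.0, 100.0, 2.0])
print("N_tot=%.0f cm^-3, Dp_g=%.0f nm, sigma_g=%.2f" % tuple(popt))
```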
Abstract:
The first objective of this research was to develop closed-form and numerical probabilistic methods of analysis that can be applied to otherwise conventional analyses of unreinforced and geosynthetic-reinforced slopes and walls. These probabilistic methods explicitly include the influence of random variability of soil and reinforcement properties, spatial variability of the soil, and cross-correlation between soil input parameters on the probability of failure. The quantitative impact of simultaneously considering random and/or spatial variability in soil properties in combination with cross-correlation in soil properties is investigated for the first time in the research literature. Depending on the magnitude of these statistical descriptors, margins of safety based on conventional notions of safety may be very different from margins of safety expressed in terms of probability of failure (or reliability index). The thesis work also shows that intuitive notions of margin of safety using the conventional factor of safety and the probability of failure can be brought into alignment when cross-correlation between soil properties is considered in a rigorous manner. The second objective of this thesis work was to develop a general closed-form solution to compute the true probability of failure (or reliability index) of a simple linear limit state function with one load term and one resistance term, expressed first in general probabilistic terms and then migrated to an LRFD format for the purpose of LRFD calibration. The formulation considers contributions to the probability of failure due to model type, uncertainty in bias values, bias dependencies, uncertainty in estimates of nominal values for correlated and uncorrelated load and resistance terms, and the average margin of safety expressed as the operational factor of safety (OFS). Bias is defined as the ratio of measured to predicted value. Parametric analyses were carried out to show that ignoring possible correlations between random variables can lead to conservative (safe) values of the resistance factor in some cases, and in other cases to non-conservative (unsafe) values. Example LRFD calibrations were carried out using different load and resistance models for the pullout internal stability limit state of steel strip and geosynthetic reinforced soil walls, together with matching bias data reported in the literature.
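The linear limit state described above lends itself to a simple Monte Carlo check. The sketch below estimates the probability of failure of g = R - Q with lognormal bias on both terms; all statistics are hypothetical, not the thesis's calibrated values.

```python
# Hedged Monte Carlo sketch of a linear limit state g = R - Q with
# lognormal bias on both terms. Statistics are illustrative only.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000

rn, qn = 1.0, 0.5        # nominal resistance and load (OFS = rn/qn = 2)
# Bias = measured/predicted, modeled as lognormal (hypothetical stats).
bias_r = rng.lognormal(mean=np.log(1.1), sigma=0.25, size=n)
bias_q = rng.lognormal(mean=np.log(1.0), sigma=0.20, size=n)

g = bias_r * rn - bias_q * qn            # limit state function
pf = np.mean(g < 0.0)                    # probability of failure
beta = -norm.ppf(pf) if pf > 0 else np.inf
print(f"Pf ~ {pf:.2e}, reliability index beta ~ {beta:.2f}")
```

In this lognormal case the answer can also be checked in closed form, since ln(bias_r) - ln(bias_q) is normal; the Monte Carlo estimate converges to that value.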
Abstract:
A small-scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV) with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed Thallium-doped Caesium Iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single-shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam in a single shot, demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high-density nuclear material. With recent developments of high-power laser systems towards 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned.
Abstract:
Mathematical models are increasingly used in environmental science, which increases the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model, the Operational Street Pollution Model (OSPM®). To assess the predictive validity of the model, the data are split into an estimation and a prediction data set using two data-splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, as part of the identifiability analysis, showed that some model parameters were significantly more sensitive than others. Applying the determined optimal parameter values was shown to successfully equilibrate the model biases among the individual streets and species. It was also shown that the frequentist approach applied for the uncertainty calculations underestimated the parameter uncertainties. The model parameter uncertainty was qualitatively assessed to be significant, and reduction strategies were identified.
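A one-at-a-time local sensitivity analysis of the kind used to rank parameter influence in an identifiability analysis can be sketched as follows; the toy model below is a stand-in, not OSPM.

```python
# Sketch of a simple one-at-a-time local sensitivity analysis. The
# model here is a hypothetical toy, not the Operational Street
# Pollution Model.

import numpy as np

def model(params, x):
    """Toy street-concentration model: emission * dispersion decay."""
    q, k = params
    return q * np.exp(-k * x)

x = np.linspace(0.0, 5.0, 50)
p0 = np.array([10.0, 0.8])             # nominal parameter values

# Relative sensitivity of the output to each parameter, via central
# differences: mean of |dy/dp| * p / y over the domain.
sens = []
for j in range(len(p0)):
    dp = 0.01 * p0[j]
    hi, lo = p0.copy(), p0.copy()
    hi[j] += dp
    lo[j] -= dp
    dy = model(hi, x) - model(lo, x)
    sens.append(np.mean(np.abs(dy / model(p0, x))) * p0[j] / (2 * dp))

print({name: round(s, 2) for name, s in zip(["q", "k"], sens)})
```

Parameters with relative sensitivities near zero are candidates for fixing, since the data cannot identify them well.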
Abstract:
Despite its huge potential in risk analysis, the Dempster–Shafer Theory of Evidence (DST) has not received enough attention in construction management. This paper presents a DST-based approach for structuring personal experience and professional judgment when assessing construction project risk. DST was innovatively used to tackle the problem of insufficient information by enabling analysts to provide incomplete assessments. Risk cost is used as a common scale for measuring risk impact on the various project objectives, and the Evidential Reasoning algorithm is suggested as a novel alternative for aggregating individual assessments. A spreadsheet-based decision support system (DSS) was devised to facilitate the proposed approach. Four case studies were conducted to examine the approach's viability. Senior managers in four British construction companies tried the DSS and gave very promising feedback. The paper concludes that the proposed methodology may contribute to bridging the gap between the theory and practice of construction risk assessment.
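The core DST operation behind combining incomplete assessments is Dempster's rule of combination (the paper itself proposes the related Evidential Reasoning algorithm for aggregation). A minimal sketch over an invented risk-impact frame:

```python
# Minimal sketch of Dempster's rule of combination for two mass
# functions over a small frame of discernment. Masses are invented.

from itertools import product

FRAME = frozenset({"low", "medium", "high"})   # risk-impact frame

# Basic probability assignments; mass on FRAME expresses ignorance,
# which is how DST admits incomplete assessments.
m1 = {frozenset({"high"}): 0.6, FRAME: 0.4}
m2 = {frozenset({"medium", "high"}): 0.7, FRAME: 0.3}

def combine(m1, m2):
    """Dempster's rule: intersect focal sets, renormalize by conflict."""
    raw, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

for focal, mass in combine(m1, m2).items():
    print(set(focal), round(mass, 3))
```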
Abstract:
Failure analysis has been, throughout the years, a fundamental tool in the aerospace sector, supporting assessments by sustainment and design engineers, mainly related to failure modes and material suitability. The predicted service life of aircraft often exceeds 40 years, and the design assured life rarely accounts for all the in-service loads and environmental menaces that aging aircraft must deal with throughout their service lives. From the most conservative safe-life conceptual design approaches to the most recent on-condition-based design approaches, assessing the condition and predicting the failure modes of components and materials are essential for developing adequate preventive and corrective maintenance actions, as well as for the accomplishment and optimization of scheduled aircraft maintenance programs. Moreover, as the operational conditions of aircraft may vary significantly from operator to operator (especially for military aircraft), it is necessary to assess whether the defined maintenance programs are adequate to guarantee the continuous reliability and safe usage of the aircraft, preventing catastrophic failures that bear significant maintenance and repair costs and may lead to the loss of human lives. Thus, failure analysis and material investigations performed as part of aircraft accident and incident investigations are powerful tools of the utmost importance for safety assurance and cost reduction within the aeronautical and aerospace sectors. The Portuguese Air Force (PRTAF) has operated different aircraft throughout its long existence, in some cases operating a particular type of aircraft for more than 30 years, gathering a great amount of expertise in: assessing the failure modes of aircraft materials; conducting aircraft accident and incident investigations (sometimes with the participation of the aircraft manufacturers and/or other operators); and developing design and repair solutions for in-service problems. This paper addresses several studies to support the thesis that failure analysis plays a key role in flight safety improvement within the PRTAF. It presents a short summary of the studies developed.
Abstract:
Ecological network analysis was applied to the Seine estuary ecosystem, northern France, integrating ecological data from the years 1996 to 2002. The Ecopath with Ecosim (EwE) approach was used to model the trophic flows in 6 spatial compartments, leading to 6 distinct EwE models: the navigation channel and the two channel flanks in the estuary proper, and 3 marine habitats in the eastern Seine Bay. Each model included 12 consumer groups, 2 primary producers, and one detritus group. Ecological network analysis was performed, including a set of indices, keystoneness, and trophic spectrum analysis, to describe the contribution of the 6 habitats to the functioning of the Seine estuary ecosystem. Results showed that the two habitats whose functioning was most related to a stressed state were the northern and central navigation channels, where building works and constant maritime traffic are considered major anthropogenic stressors. The strong top-down control highlighted in the other 4 habitats was not present in the central channel, which showed instead (i) a change in keystone roles in the ecosystem towards sediment-based, lower trophic levels, and (ii) a higher system omnivory. The southern channel showed the highest system activity (total system throughput), the highest trophic specialisation (low system omnivory), and the lowest indication of stress (low cycling and relative redundancy). Marine habitats showed higher fish biomass proportions and higher transfer efficiencies per trophic level than the estuarine habitats, with a transition area between the two presenting intermediate ecosystem structure. Modelling the habitats separately made it possible to disclose each one's response to the different pressures, based on a priori knowledge of them. Network indices responded to these differences, although non-monotonically, and seem a promising operational tool for defining the ecological status of transitional water ecosystems.
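Two of the network indices referred to above can be illustrated on a toy flow matrix; the compartments and flow values below are invented, not the Seine estuary models.

```python
# Sketch: two simple ecological network indices computed from a toy
# flow matrix, where flows[i, j] is the flow from compartment i to j.
# Values are invented for illustration.

import numpy as np

comps = ["producers", "detritus", "invertebrates", "fish"]
flows = np.array([
    [0.0, 5.0, 12.0, 0.0],
    [0.0, 0.0,  8.0, 1.0],
    [0.0, 6.0,  0.0, 4.0],
    [0.0, 2.0,  0.0, 0.0],
])  # e.g. gC m^-2 yr^-1

# Total system throughput: the sum of all flows in the network.
tst = flows.sum()

# A simplified transfer efficiency: flow passed on to fish divided by
# the total flow entering the invertebrate compartment.
te_fish = flows[2, 3] / flows[:, 2].sum()

print(f"TST = {tst:.1f}, invertebrate->fish transfer efficiency = {te_fish:.2f}")
```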
Abstract:
The value of integrating heat storage into a geothermal district heating system has been investigated. The behaviour of the system under a novel operational strategy has been simulated, focusing on the energetic, economic, and environmental effects of incorporating the heat storage within the system. A typical geothermal district heating system consists of several production wells, a system of pipelines for transporting the hot water to end-users, one or more re-injection wells, and peak-up devices (usually fossil-fuel boilers). Traditionally in these systems, the production wells change their production rate throughout the day according to heat demand, and if their maximum capacity is exceeded the peak-up devices are used to meet the balance of the heat demand. In this study, it is proposed to maintain constant geothermal production and add heat storage to the network. Hot water will then be stored when heat demand is lower than the production, and the stored hot water will be released into the system to cover the peak demands (or part of them). The intention is not to totally phase out the peak-up devices but to decrease their use, as these will often be installed anyway for back-up purposes. Both the integration of heat storage into such a system and the novel operational strategy are the main novelties of this thesis.

A robust algorithm for the sizing of these systems has been developed. The main inputs are the geothermal production data, the heat demand data throughout one year or more, and the topology of the installation. The outputs are the sizing of the whole system, including the necessary number of production wells, the size of the heat storage, and the dimensions of the pipelines, among others. The results provide several useful insights into the initial design considerations for these systems, emphasizing particularly the importance of heat losses. Simulations are carried out for three different cases of sizing of the installation (small, medium, and large) to examine the influence of system scale.

In the second phase of work, two algorithms are developed which study in detail the operation of the installation throughout an arbitrary day and a whole year, respectively. The first algorithm can be a potentially powerful tool for the operators of the installation, who can know a priori how to operate the installation on any given day, given the heat demand. The second algorithm is used to obtain the amount of electricity used by the pumps as well as the amount of fuel used by the peak-up boilers over a whole year. These comprise the main operational costs of the installation and are among the main inputs of the third part of the study.

In the third part of the study, an integrated energetic, economic, and environmental analysis of the studied installation is carried out, together with a comparison with the traditional case. The results show that by implementing heat storage under the novel operational strategy, heat is generated more cheaply, as all the financial indices improve, more geothermal energy is utilised, and less fuel is used in the peak-up boilers, with subsequent environmental benefits compared to the traditional case. Furthermore, it is shown that the most attractive case of sizing is the large one, although the addition of the heat storage has the greatest impact on the medium sizing case. In other words, the geothermal component of the installation should be sized as large as possible.
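The proposed dispatch rule, constant geothermal production with the store absorbing surplus and the peak-up boilers covering any remaining deficit, can be sketched as follows; all numbers are illustrative assumptions, not the thesis's sizing results.

```python
# Hedged sketch of the operational strategy described above: constant
# geothermal production, a heat store that absorbs surplus and covers
# peaks, and peak-up boilers for whatever remains. Numbers are invented.

geothermal = 8.0          # constant production, MW
store_cap = 20.0          # storage capacity, MWh
demand = [5, 4, 4, 6, 9, 12, 11, 8, 7, 6, 14, 16]   # hourly demand, MW

store = 0.0
boiler_energy = 0.0
for load in demand:
    surplus = geothermal - load                  # MW over one hour = MWh
    if surplus >= 0:
        store = min(store_cap, store + surplus)  # charge, spill if full
    else:
        deficit = -surplus
        from_store = min(store, deficit)         # discharge first
        store -= from_store
        boiler_energy += deficit - from_store    # boilers cover the rest

print(f"peak-up boiler energy: {boiler_energy:.1f} MWh; "
      f"final storage level: {store:.1f} MWh")
```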
This analysis indicates that the proposed solution is beneficial from energetic, economic, and environmental perspectives. Therefore, it can be stated that the aim of this study has been fully achieved. Furthermore, the new models for the sizing, operation, and economic/energetic/environmental analyses of this kind of system can be used with few adaptations for real cases, making the practical applicability of this study evident. With this study as a starting point, further work could include the integration of these systems with end-user demands, further analysis of component parts of the installation (such as the heat exchangers), and the integration of a heat pump to maximise the utilisation of geothermal energy.