987 results for Threshold Limit Values
Abstract:
Multi-agent algorithms inspired by the division of labour in social insects are applied to a problem of distributed mail retrieval in which agents must visit mail-producing cities and choose between mail types under certain constraints. The efficiency (i.e., the average amount of mail retrieved per time step) and the flexibility (i.e., the capability of the agents to react to changes in the environment) are investigated in both static and dynamic environments. New rules for mail selection and specialisation are introduced and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ a genetic algorithm that allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation. From a more theoretical point of view, in order to avoid finite size effects, most results are obtained for large population sizes. However, we do analyse the influence of population size on performance. Furthermore, we critically analyse the causes of efficiency loss, derive the exact dynamics of the model in the large system limit under certain conditions, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
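As a rough illustration of the evolutionary component described above, the sketch below evolves two generic rule parameters with a simple genetic algorithm; the toy fitness function stands in for running the mail-retrieval simulation, and all parameter names, ranges, and operators are assumptions rather than the authors' implementation.

```python
import random

# Hypothetical genome: two rule parameters (say, a response threshold and a
# learning rate). evaluate() is a toy stand-in for "average mail retrieved
# per time step" obtained from the simulation.
def evaluate(genome):
    threshold, learning_rate = genome
    return -(threshold - 0.4) ** 2 - (learning_rate - 0.1) ** 2

def evolve(pop_size=50, generations=100, mutation_sd=0.05):
    population = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # blend crossover plus Gaussian mutation on each parameter
            children.append(tuple(random.gauss((x + y) / 2, mutation_sd)
                                  for x, y in zip(a, b)))
        population = parents + children
    return max(population, key=evaluate)

print(evolve())   # best (threshold, learning_rate) found for the toy fitness
```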
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task selection in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without a priori knowledge of the available mail at the cities or inter-agent communication. In order to process a mail type different from the previous one, an agent must undergo a change-over during which it remains inactive. We propose a threshold-based algorithm to maximise the overall efficiency (the average amount of mail collected). We show that memory, i.e. the possibility for agents to develop preferences for certain cities, not only leads to emergent cooperation between agents, but also to a significant increase in efficiency (above the theoretical upper limit for any memoryless algorithm), and we systematically investigate the influence of the various model parameters. Finally, we demonstrate the flexibility of the algorithm to changes in circumstances, and its excellent scalability.
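A minimal sketch of a threshold-based choice rule in the spirit of such division-of-labour models, assuming the standard response-threshold form P = s^n / (s^n + θ^n); the dictionary interface, the exponent, and the idle fallback are illustrative and not the paper's exact rule.

```python
import random

def selection_probability(stimulus, threshold, n=2):
    """Standard response-threshold rule: P = s^n / (s^n + theta^n)."""
    return stimulus ** n / (stimulus ** n + threshold ** n)

def choose_mail_type(stimuli, thresholds):
    """Stochastically pick a mail type; a lower threshold means a stronger
    preference (specialisation). Both arguments are dicts keyed by mail type."""
    for mail_type in random.sample(list(stimuli), len(stimuli)):
        if random.random() < selection_probability(stimuli[mail_type],
                                                   thresholds[mail_type]):
            return mail_type
    return None  # no type accepted: the agent stays idle this time step

# Example: an agent specialised on type "A" (low threshold) facing equal
# amounts of waiting mail of both types almost always picks "A".
print(choose_mail_type({"A": 10.0, "B": 10.0}, {"A": 1.0, "B": 20.0}))
```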
Abstract:
In the temperature range 200-400°C, the Ni-base superalloy N901 develops marked dynamic strain ageing effects in its tensile behavior. These include inverse strain rate sensitivity, especially in UTS values, strongly serrated stress-strain curves, and a heavily sheared failure mode at the higher test temperatures. As for steels, these effects seem to be due to interactions between the dislocations and the interstitial carbon atoms present. The results of tensile and fatigue threshold tests carried out between 20°C and 420°C are reported, and the fatigue behavior is discussed in terms of the effects of surface-roughness-induced closure, temperature and strain ageing interactions.
Abstract:
The fatigue crack propagation behaviour of a low alloy, boron-containing steel has been examined after austenitizing at 900°C or 1250°C and tempering at a range of temperatures up to 400°C. Fatigue threshold values were found to vary with austenitizing and tempering treatment, in the range 3.3 to 6 MPa√m when tested at a stress ratio (R) of 0.2. Crack propagation rates in the Paris regime were insensitive to heat treatment variations. The crack propagation path was essentially transgranular in all conditions, with small regions of intergranular facets appearing at growth rates around the knee of the da/dN vs ΔK curve. The crack front shape showed marked retardation in the centre of the specimen at low tempering temperatures. Experimental determinations and computer predictions of residual stress levels in the specimens indicated that this was due to a central residual compressive stress resulting from differential cooling rates and the volume change associated with the martensite transformation. The results are discussed in terms of microstructural and residual stress effects on fatigue behaviour.
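For context, growth in the Paris regime mentioned above follows da/dN = C (ΔK)^m; the sketch below integrates this numerically between an initial and a final crack length. The constants C and m, the geometry factor Y, and the stress range are placeholders, not values from this study.

```python
import math

def delta_K(delta_sigma, a, Y=1.12):
    """Stress intensity range dK = Y * dSigma * sqrt(pi * a), in MPa*sqrt(m)."""
    return Y * delta_sigma * math.sqrt(math.pi * a)

def cycles_to_grow(a0, af, delta_sigma, C=1e-11, m=3.0, steps=10_000):
    """Integrate dN = da / (C * dK^m) from crack length a0 to af (metres)."""
    da = (af - a0) / steps
    cycles, a = 0.0, a0
    for _ in range(steps):
        cycles += da / (C * delta_K(delta_sigma, a) ** m)
        a += da
    return cycles

# Example: 1 mm -> 10 mm crack under a 100 MPa stress range.
print(f"{cycles_to_grow(1e-3, 1e-2, 100.0):.2e} cycles")
```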
Abstract:
Fatigue crack growth rate tests have been performed on Nimonic AP1, a powder-formed Ni-base superalloy, in air and vacuum at room temperature. These show that threshold values are higher, and near-threshold (faceted) crack growth rates are lower, in vacuum than in air, although at high growth rates, in the “structure-insensitive” regime, R-ratio and a dilute environment have little effect. Changing the R-ratio from 0.1 to 0.5 in vacuum does not alter near-threshold crack growth rates very much, despite more extensive secondary cracking being noticeable at R = 0.5. In vacuum, rewelding occurs at contact points across the crack as ΔK falls. This leads to the production of extensive fracture surface damage and bulky fretting debris, and is thought to be a significant contributory factor to the observed increase in threshold values.
Abstract:
Fatigue crack propagation and threshold data for two Ni-base alloys, Astroloy and Nimonic 901, are reported. At room temperature the effect that altering the load ratio (R-ratio) has on fatigue behaviour is strongly dependent on grain size. In the coarse-grained microstructures crack growth rates increase and threshold values decrease markedly as R rises from 0.1 to 0.8, whereas only small changes in behaviour occur in fine-grained material. In Astroloy, when strength level and γ grain size are kept constant, there is very little effect of processing route and γ′ distribution on room temperature threshold and crack propagation results. The dominant microstructural effect on this type of fatigue behaviour is the matrix (γ) grain size itself.
Abstract:
2000 Mathematics Subject Classification: 60G70, 60F05.
Abstract:
In this study, we developed a DEA-based performance measurement methodology that is consistent with performance assessment frameworks such as the Balanced Scorecard. The methodology developed in this paper takes into account the direct or inverse relationships that may exist among the dimensions of performance in order to construct appropriate production frontiers. The production frontiers we obtained are deemed appropriate because they consist solely of firms with desirable levels for all dimensions of performance; these levels must be at least equal to the critical values set by decision makers. The properties and advantages of our methodology over competing methodologies are presented through an application to a real-world case study of retail firms operating in the US. A comparative analysis between the new methodology and existing methodologies explains why the existing approaches fail to define appropriate production frontiers when directly or inversely related dimensions of performance are present, and fail to express the interrelationships between those dimensions.
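A rough sketch of the frontier-construction idea, assuming a standard input-oriented CCR DEA model solved with scipy and a simple pre-filter that admits only firms meeting decision-maker critical values on every performance dimension; the data, critical values, and the filter itself are illustrative, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o. X: inputs (n x p), Y: outputs (n x q)."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    A_in = np.c_[-X[[o]].T, X.T]                     # sum_j lam_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]   # sum_j lam_j y_rj >= y_ro
    b = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

X = np.array([[2.0], [4.0], [3.0]])                  # one illustrative input
Y = np.array([[4.0, 0.8], [8.0, 0.4], [5.0, 0.9]])   # two performance dimensions
critical = np.array([3.0, 0.5])                      # decision-maker critical values
keep = np.all(Y >= critical, axis=1)                 # only these firms may form the frontier
Xf, Yf = X[keep], Y[keep]
print([round(ccr_efficiency(Xf, Yf, o), 3) for o in range(Xf.shape[0])])
```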
Abstract:
A hydrodynamic threshold between Darcian and non-Darcian flow conditions was found to occur in cubes of Key Largo Limestone from Florida, USA (one cube measuring 0.2 m on each side, the other 0.3 m) at an effective porosity of 33% and a hydraulic conductivity of 10 m/day. Below these values, flow was laminar and could be described as Darcian. Above these values, hydraulic conductivity increased greatly and flow was non-laminar. Reynolds numbers (Re) for these experiments ranged from
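A back-of-the-envelope sketch of the classification involved, assuming Darcy's law q = K i and the common porous-media Reynolds number Re = ρ q d / μ, with Darcian flow expected roughly for Re below 1-10; the hydraulic gradient, characteristic grain size, and the Re cut-off are illustrative assumptions, not values from the study.

```python
# Classify flow as Darcian or potentially non-Darcian from a Reynolds-number check.
RHO = 1000.0    # water density, kg/m^3
MU = 1.0e-3     # dynamic viscosity, Pa*s

def specific_discharge(K_m_per_day, gradient):
    """Darcy's law q = K * i, converted to m/s."""
    return (K_m_per_day / 86400.0) * gradient

def reynolds_number(q, d_char):
    """Re = rho * q * d / mu, with d a characteristic pore/grain length in metres."""
    return RHO * q * d_char / MU

q = specific_discharge(K_m_per_day=10.0, gradient=0.01)   # K at the reported threshold
Re = reynolds_number(q, d_char=5e-3)
print(f"q = {q:.2e} m/s, Re = {Re:.3f} ->",
      "Darcian (laminar)" if Re < 10 else "possibly non-Darcian")
```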
Abstract:
Variable Speed Limit (VSL) strategies identify and disseminate dynamic speed limits that are determined to be appropriate based on prevailing traffic conditions, road surface conditions, and weather conditions. This dissertation develops and evaluates a shockwave-based VSL system that uses a heuristic switching logic-based controller with specified thresholds of prevailing traffic flow conditions. The system aims to improve operations and mobility at critical bottlenecks. Before a traffic breakdown occurs, the proposed VSL’s goal is to prevent or postpone breakdown by decreasing the inflow and achieving a uniform distribution in speed and flow. After breakdown occurs, the VSL system aims to dampen traffic congestion by reducing the inflow traffic to the congested area and increasing the bottleneck capacity by deactivating the VSL at the head of the congested area. The shockwave-based VSL system pushes the VSL location upstream as the congested area propagates upstream. In addition to testing the system using infrastructure detector-based data, this dissertation investigates the use of Connected Vehicle trajectory data as input to the shockwave-based VSL system. Since field Connected Vehicle data are not available, as part of this research Vehicle-to-Infrastructure communication is modeled in the microscopic simulation to obtain individual vehicle trajectories. In this system, a wavelet transform is used to analyze aggregated individual vehicle speed data to determine the locations of congestion. The currently recommended calibration procedures for simulation models are generally based on capacity, volume and system-performance values and do not specifically examine traffic breakdown characteristics. However, since the proposed VSL strategies are countermeasures to the impacts of breakdown conditions, considering breakdown characteristics in the calibration procedure is important for a reliable assessment. Several enhancements were proposed in this study to account for the breakdown characteristics at bottleneck locations in the calibration process. In this dissertation, the performance of the shockwave-based VSL is compared to VSL systems with different fixed VSL message sign locations, utilizing the calibrated microscopic model. The results show that the shockwave-based VSL outperforms fixed-location VSL systems and can considerably decrease the maximum back of queue and duration of breakdown while increasing the average speed during breakdown.
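A minimal sketch of a heuristic, threshold-switching VSL controller of the general kind described, with hysteresis so the reduced limit is held until conditions clearly recover; the speed and occupancy thresholds and limit values are illustrative placeholders rather than the dissertation's calibrated logic.

```python
def vsl_decision(mean_speed_kmh, occupancy_pct, current_limit_kmh,
                 speed_on=70.0, speed_off=85.0, occ_on=18.0,
                 reduced_limit=60.0, default_limit=100.0):
    """Return the speed limit to post at a VSL sign for one control interval."""
    if mean_speed_kmh < speed_on or occupancy_pct > occ_on:
        return reduced_limit                  # breakdown imminent or already present
    if current_limit_kmh < default_limit and mean_speed_kmh < speed_off:
        return current_limit_kmh              # hysteresis: hold until clear recovery
    return default_limit

# Example: a detector reports 65 km/h and 20% occupancy under the default limit.
print(vsl_decision(65.0, 20.0, 100.0))        # -> 60.0
```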
Abstract:
The first objective of this research was to develop closed-form and numerical probabilistic methods of analysis that can be applied to otherwise conventional analyses of unreinforced and geosynthetic reinforced slopes and walls. These probabilistic methods explicitly include the influence of random variability of soil and reinforcement properties, spatial variability of the soil, and cross-correlation between soil input parameters on the probability of failure. The quantitative impact of simultaneously considering random and/or spatial variability in soil properties in combination with cross-correlation in soil properties is investigated for the first time in the research literature. Depending on the magnitude of these statistical descriptors, margins of safety based on conventional notions of safety may be very different from margins of safety expressed in terms of probability of failure (or reliability index). The thesis work also shows that intuitive notions of margin of safety using the conventional factor of safety and the probability of failure can be brought into alignment when cross-correlation between soil properties is considered in a rigorous manner. The second objective of this thesis work was to develop a general closed-form solution to compute the true probability of failure (or reliability index) of a simple linear limit state function with one load term and one resistance term, expressed first in general probabilistic terms and then migrated to an LRFD format for the purpose of LRFD calibration. The formulation considers contributions to the probability of failure due to model type, uncertainty in bias values, bias dependencies, uncertainty in estimates of nominal values for correlated and uncorrelated load and resistance terms, and the average margin of safety expressed as the operational factor of safety (OFS). Bias is defined as the ratio of measured to predicted value. Parametric analyses were carried out to show that ignoring possible correlations between random variables can lead to conservative (safe) values of resistance factor in some cases and to non-conservative (unsafe) values in others. Example LRFD calibrations were carried out using different load and resistance models for the pullout internal stability limit state of steel strip and geosynthetic reinforced soil walls, together with matching bias data reported in the literature.
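As a hedged illustration of the simplest case underlying such calibrations, the sketch below evaluates the reliability index and probability of failure for a linear limit state g = R − Q with (optionally correlated) normal load and resistance; bias terms and LRFD-format factors are omitted, and the numerical values are invented for illustration.

```python
import math
from statistics import NormalDist

def reliability_index(mu_R, sd_R, mu_Q, sd_Q, rho=0.0):
    """beta = (mu_R - mu_Q) / sqrt(sd_R^2 + sd_Q^2 - 2*rho*sd_R*sd_Q)
    for the linear limit state g = R - Q with normal R and Q."""
    sd_g = math.sqrt(sd_R ** 2 + sd_Q ** 2 - 2.0 * rho * sd_R * sd_Q)
    return (mu_R - mu_Q) / sd_g

def probability_of_failure(beta):
    """Pf = Phi(-beta) for a normally distributed limit state function."""
    return NormalDist().cdf(-beta)

# Illustrative numbers: ignoring a positive R-Q correlation changes beta noticeably.
for rho in (0.0, 0.3):
    beta = reliability_index(mu_R=150.0, sd_R=20.0, mu_Q=100.0, sd_Q=15.0, rho=rho)
    print(f"rho = {rho}: beta = {beta:.2f}, Pf = {probability_of_failure(beta):.2e}")
```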
Abstract:
Radon (222Rn) has been detected in 28 groundwater samples from the northeast of Gran Canaria (Canary Islands, Spain) using a closed-loop system consisting of an AlphaGUARD monitor, which measures radon activity concentration in air by means of an ionization chamber, and an AquaKIT set, which transfers radon dissolved in the water samples to the air within the circuit. Radon concentration in the water samples studied varies between 0.3 and 76.9 Bq/L. Spanish radiological protection regulations limit the concentration of 222Rn in drinking water to 100 Bq/L; the values obtained for all the analyzed samples are therefore below this threshold. The hydrogeological study reveals a significant correspondence between the radon activity concentration and the material characteristics of the aquifer.
Abstract:
We present a new method for ecologically sustainable land use planning within multiple land use schemes. Our aims were (1) to develop a method that can be used to locate important areas based on their ecological values; (2) to evaluate the quality, quantity, availability, and usability of existing ecological data sets; and (3) to demonstrate the use of the method in Eastern Finland, where there are requirements for the simultaneous development of nature conservation, tourism, and recreation. We compiled all available ecological data sets from the study area, complemented the missing data using habitat suitability modeling, calculated the total ecological score (TES) for each 1 ha grid cell in the study area, and finally demonstrated the use of TES in assessing how well nature conservation covers ecologically valuable areas and in locating ecologically sustainable areas for tourism and recreational infrastructure. The method operated quite well at the level required for regional and local scale planning. The quality, quantity, availability, and usability of existing data sets were generally high, and they could be further complemented by modeling. There are still constraints that limit the use of the method in practical land use planning. However, as more data become openly available and modeling tools improve, the usability and applicability of the method will increase.
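A minimal sketch of how a total ecological score could be assembled per grid cell from rasterised layers, assuming each layer has already been scored on a common 0-1 scale; the layer names, weights, and the low-TES cut-off are illustrative assumptions rather than the study's scoring scheme.

```python
import numpy as np

# Each array holds an ecological score per 1-ha grid cell (toy 2 x 2 study area).
layers = {
    "species_observations": np.array([[0.2, 0.9], [0.0, 0.4]]),
    "habitat_suitability":  np.array([[0.5, 0.8], [0.1, 0.3]]),  # from modelling
    "old_growth_forest":    np.array([[0.0, 1.0], [0.0, 0.2]]),
}
weights = {"species_observations": 2.0, "habitat_suitability": 1.0, "old_growth_forest": 1.5}

# Total ecological score (TES) per cell: weighted sum over layers.
tes = sum(weights[name] * grid for name, grid in layers.items())
print(tes)
# Cells with low TES are candidate locations for tourism/recreation infrastructure.
print("low-TES cells:", np.argwhere(tes < 0.5).tolist())
```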
Abstract:
A growing interest in mapping the social value of ecosystem services (ES) is not yet methodologically aligned with what is actually being mapped. We critically examine aspects of the social value mapping process that might influence map outcomes and limit their practical use in decision making. We rely on an empirical case of participatory mapping, for a single ES (recreation opportunities), which involves diverse stakeholders such as planners, researchers, and community representatives. Value elicitation relied on an individual open-ended interview and a mapping exercise. Interpretation of the narratives and GIS calculations of proximity, centrality, and dispersion helped in exploring the factors driving participants’ answers. The narratives reveal diverse value types. Whereas planners highlighted utilitarian and aesthetic values, the answers from researchers revealed naturalistic values as well, and community representatives in turn acknowledged symbolic values. When transferred to the map, these values were constrained to statements about a much narrower set of features of the physical (e.g., volcanoes) and built (e.g., roads) landscape. The results suggest that mapping, as an instrumental approach toward social valuation, may capture only a subset of the relevant assigned values. This outcome is the interplay between participants’ characteristics, including their acquaintance with the territory and their ability with maps, and the mapping procedure itself, including the proxies used to represent the ES, the value typology chosen, the elicitation question, the cartographic features displayed on the base map, and the spatial scale.
Abstract:
BACKGROUND: Pretransplant anti-HLA donor-specific antibodies (DSA) are recognized as a risk factor for acute antibody-mediated rejection (AMR) in kidney transplantation. The predictive value of the C4d-fixing capability of DSA, and of IgG DSA subclasses, for acute AMR in the pretransplant setting has recently been studied. In addition, DSA strength assessed by mean fluorescence intensity (MFI) may improve risk stratification. We aimed to analyze the relevance of preformed DSA and of DSA MFI values. METHODS: 280 consecutive patients with negative complement-dependent cytotoxicity crossmatches received a kidney transplant between 01/2008 and 03/2014. Sera were screened for the presence of DSA with a solid-phase assay on a Luminex flow analyzer, and the results were correlated with biopsy-proven acute AMR in the first year and with graft survival. RESULTS: Pretransplant anti-HLA antibodies were present in 72 patients (25.7%) and 24 (8.6%) had DSA. There were 46 (16.4%) acute rejection episodes, 32 (11.4%) cellular and 14 (5.0%) antibody-mediated. The incidence of acute AMR was higher in patients with pretransplant DSA (41.7%) than in those without (1.6%) (p < 0.001). The median cumulative MFI (cMFI) of the DSA+/AMR+ group was 5680 vs 2208 in the DSA+/AMR- group (p = 0.058). With univariate logistic regression, a threshold value of 5280 cMFI was predictive of acute AMR. The ability of DSA cMFI to predict AMR was also explored by ROC analysis: the AUC was 0.728 and the best threshold was a cMFI of 4340. Importantly, pretransplant DSA above 5280 cMFI had a detrimental effect on 5-year graft survival. CONCLUSIONS: Preformed DSA cMFI values were clinically relevant for the prediction of acute AMR and graft survival in kidney transplantation. A threshold of 4300-5300 cMFI was a significant outcome predictor.
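A hedged sketch of deriving a cMFI cut-off from a ROC curve using Youden's J statistic, assuming scikit-learn is available; the toy data below are invented for illustration, and the study's reported 4340 and 5280 cMFI thresholds come from its own cohort, not from this example.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Toy cohort: cumulative MFI of preformed DSA and whether acute AMR occurred.
cmfi = np.array([300, 900, 1500, 2200, 3100, 4200, 5100, 5600, 6400, 8000])
amr  = np.array([0,   0,   0,    0,    0,    1,    0,    1,    1,    1])

fpr, tpr, thresholds = roc_curve(amr, cmfi)
best = np.argmax(tpr - fpr)            # Youden's J = sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(amr, cmfi):.3f}, "
      f"best cMFI cut-off = {thresholds[best]:.0f}")
```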