872 results for Hydrologic Modeling Catchment and Runoff Computations
Abstract:
Poor compliance with speed limits is a serious safety concern in work zones. Most studies of work-zone speeds have focused on descriptive analyses and statistical testing without systematically capturing the effects of vehicle and traffic characteristics. Consequently, little is known about how the characteristics of surrounding traffic and platoons influence speeds. This paper develops a Tobit regression technique for jointly modeling the probability and the magnitude of non-compliance with speed limits at various locations in work zones. Speed data are split into two groups, continuous for non-compliant drivers and left-censored for compliant drivers, and modeled in a Tobit framework. The technique is illustrated using speed data from three long-term highway work zones in Queensland, Australia. Consistent and plausible model estimates across the three work zones support the appropriateness and validity of the technique. The results show that the probability and magnitude of speeding were higher for leaders of platoons with larger front gaps, during late afternoon and early morning, when traffic volumes were higher, and when higher proportions of surrounding vehicles were non-compliant. Light vehicles and their followers were also more likely to speed than other vehicles. Speeding was more common and greater in magnitude upstream than in the activity area, with higher compliance rates close to the end of the activity area and close to stop/slow traffic controllers. The modeling technique and results can assist in deploying appropriate countermeasures by better identifying the traffic characteristics associated with speeding and the locations with lower compliance.
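As a rough illustration of the censoring scheme described above, the sketch below fits a Tobit model by maximum likelihood, treating the speeding magnitude (speed minus the posted limit) as left-censored at zero for compliant drivers. This is a minimal sketch, not the authors' implementation; the predictors, data, and starting values are hypothetical placeholders.

```python
import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, X, y):
    """Negative log-likelihood of a Tobit model left-censored at zero.

    y is the speeding magnitude (speed - limit): zero for compliant
    (censored) drivers, positive for non-compliant drivers.
    """
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                 # keeps sigma positive
    xb = X @ beta
    ll = np.where(
        y <= 0,
        stats.norm.logcdf(-xb / sigma),                    # censored part
        stats.norm.logpdf((y - xb) / sigma) - log_sigma,   # observed part
    )
    return -ll.sum()

# Hypothetical data: in the paper's setting the columns of X might encode
# front gap, time of day, traffic volume, vehicle type, etc.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
latent = X @ np.array([-1.0, 2.0, 0.5]) + rng.normal(scale=2.0, size=500)
y = np.maximum(latent, 0.0)

res = optimize.minimize(tobit_negloglik, np.zeros(X.shape[1] + 1),
                        args=(X, y), method="BFGS")
print(res.x)   # estimated coefficients followed by log(sigma)
```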
Abstract:
Building information models are increasingly being utilised for facility management of large facilities such as critical infrastructures. In such environments, it is valuable to exploit the vast amount of data contained within building information models to improve access control administration. Using building information models in access control scenarios provides 3D visualisation of buildings as well as other advantages, such as automation of essential tasks including path finding, consistency detection, and accessibility verification. However, there is no mathematical model of building information models that can be used to describe and compute these functions. In this paper, we show how graph theory can be utilised as a representation language for building information models and the proposed security-related functions. This graph-theoretic representation allows building information models to be represented mathematically and the proposed functions to be computed.
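As a minimal sketch of the graph-theoretic representation this abstract proposes (not the paper's actual formalism), spaces can be modeled as nodes and doorways as edges carrying access-control attributes; path finding and accessibility verification then reduce to standard graph queries. The building layout, role names, and `networkx` usage here are illustrative assumptions.

```python
import networkx as nx

# Hypothetical building: nodes are spaces, edges are doorways, and each
# doorway carries an access-control attribute listing the permitted roles.
G = nx.Graph()
G.add_edge("lobby", "corridor", allowed={"staff", "visitor"})
G.add_edge("corridor", "meeting_room", allowed={"staff", "visitor"})
G.add_edge("corridor", "server_room", allowed={"staff"})

def accessible_subgraph(G, role):
    """Subgraph containing only the doorways the given role may use."""
    edges = [(u, v) for u, v, d in G.edges(data=True) if role in d["allowed"]]
    return G.edge_subgraph(edges)

# Accessibility verification: can a visitor reach the server room?
H = accessible_subgraph(G, "visitor")
print("server_room" in H and nx.has_path(H, "lobby", "server_room"))  # False

# Path finding for a role that is allowed through:
print(nx.shortest_path(accessible_subgraph(G, "staff"), "lobby", "server_room"))
```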
Abstract:
In a tag-based recommender system, the multi-dimensional
Abstract:
The ambiguity acceptance test is an important quality control procedure in high-precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, the method for determining the test threshold is still not well understood. Currently, the threshold is determined with either an empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key to the threshold determination problem is how to determine the threshold efficiently and in a reasonable way. In this study, a new threshold determination method, named the threshold function method, is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modeling procedure and an approximation procedure. The modeling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modeling error and approximation error are analysed with simulated data so as to avoid nuisance biases and the impact of an unrealistic stochastic model. The results indicate that the proposed method greatly simplifies the FF-approach without introducing significant modeling error, making fixed failure rate threshold determination feasible for real-time applications.
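For context, the IB success rate mentioned above has a well-known closed form (due to Teunissen): P_IB is the product over the ambiguities of 2Φ(1/(2σ_i|I)) − 1, where the σ_i|I are conditional standard deviations from a triangular factorisation of the ambiguity variance matrix. The sketch below evaluates it; the example matrix is hypothetical, and in practice the ambiguities would first be decorrelated (e.g. with the LAMBDA Z-transformation) before applying the formula.

```python
import numpy as np
from scipy.linalg import ldl
from scipy.stats import norm

def bootstrap_success_rate(Q):
    """Integer bootstrapping success rate for ambiguity variance matrix Q:

        P_IB = prod_i ( 2 * Phi(1 / (2 * sigma_i|I)) - 1 )

    where the sigma_i|I are the conditional standard deviations taken
    from an LDL' factorisation of Q (i.e. for one bootstrapping order).
    """
    _, D, _ = ldl(Q)                 # D is diagonal for positive-definite Q
    sigma_cond = np.sqrt(np.diag(D))
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * sigma_cond)) - 1.0))

# Hypothetical 3-ambiguity variance matrix (cycles^2)
Q = np.array([[0.090, 0.045, 0.030],
              [0.045, 0.110, 0.040],
              [0.030, 0.040, 0.120]])
print(bootstrap_success_rate(Q))
```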
Abstract:
Nanotubes and nanosheets are low-dimensional nanomaterials with unique properties that can be exploited for numerous applications. This book offers a complete overview of their structure, properties, development, modeling approaches, and practical use. It focuses on boron nitride (BN) nanotubes, which have attracted major interest owing to their distinctive high-temperature properties, as well as on graphene nanosheets, BN nanosheets, and metal oxide nanosheets. Key topics include surface functionalization of nanotubes for composite applications, wetting property changes for biocompatible environments, and graphene for energy storage applications.
Abstract:
The increasing threat of terrorism highlights the importance of enhancing the resilience of underground tunnels to all hazards. This paper develops, applies and compares the Arbitrary Lagrangian-Eulerian (ALE) and Smoothed Particle Hydrodynamics (SPH) techniques for modeling the response of buried tunnels to surface explosions. The results of the two techniques were compared with each other and with existing test data. The comparison shows that the ALE technique better describes the tunnel response to an above-ground explosion in terms of modeling accuracy and computational efficiency. The ALE technique was then applied to the blast response of different types of segmented bored tunnels buried in dry sand. Results indicate that the most commonly used modern ring-type segmented tunnels were more flexible in their in-plane response; however, they suffered permanent drifts between the rings. Hexagonal segmented tunnels responded with negligible drifts in the longitudinal direction, but the magnitudes of the in-plane drifts were large and hence hazardous for the tunnel. Interlocking segmented tunnels suffered permanent drifts in both the longitudinal and transverse directions. Multi-surface radial joints in both the hexagonal and interlocking segments affected the flexibility of the tunnel in the transverse direction. The findings offer significant new information on the behavior of segmented bored tunnels to guide their future implementation in civil engineering applications.
Abstract:
In the early stages of design and modeling, computers and computer applications are often considered an obstacle rather than a facilitator of the process. Most notably, brainstorming, process modeling with business experts, and development planning are often performed by a team in front of a whiteboard. While "whiteboarding" is recognized as an effective tool, low-tech solutions that allow remote participants to contribute are still not generally available. This is a striking observation, considering that the vast majority of teams in large organizations are distributed. This was also one of the key triggers behind the project described in this article, in which a team of corporate researchers set out to identify state-of-the-art technologies that could facilitate the scenario mentioned above. This paper is an account of a research project in the area of enterprise collaboration, with a strong focus on human-computer interaction in mixed-mode environments, especially in areas of collaboration where computers still play a secondary role. It describes a corporate research project that is currently under way.
Abstract:
Since their inception in 1962, Petri nets have been used in a wide variety of application domains. Although Petri nets are graphical and easy to understand, they have formal semantics and support analysis techniques ranging from model checking and structural analysis to process mining and performance analysis. Over time, Petri nets have emerged as a solid foundation for Business Process Management (BPM) research. The BPM discipline develops methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. Mainstream business process modeling notations and workflow management systems use token-based semantics borrowed from Petri nets. Moreover, state-of-the-art BPM analysis techniques use Petri nets as an internal representation. Users of BPM methods and tools are often unaware of this. This paper aims to unveil the seminal role of Petri nets in BPM.
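As a reminder of the token-based semantics referred to above, here is a minimal sketch of a Petri net interpreter: a transition is enabled if and only if each of its input places holds a token, and firing it consumes one token per input arc and produces one per output arc. The class and the toy process are illustrative, not any particular BPM tool's internals.

```python
# Minimal Petri net with token-based firing semantics
# (all arc weights are 1 for brevity).

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1          # consume one token per input arc
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1  # produce tokens

# Hypothetical order-handling process: register -> check
net = PetriNet({"start": 1})
net.add_transition("register", ["start"], ["registered"])
net.add_transition("check", ["registered"], ["checked"])
net.fire("register")
net.fire("check")
print(net.marking)   # {'start': 0, 'registered': 0, 'checked': 1}
```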
Abstract:
A fuzzy-logic-based centralized control algorithm for irrigation canals is presented. The purpose of the algorithm is to control the downstream discharge and the water level of pools in the canal by adjusting the discharge released from the upstream end and the gate settings. The algorithm is based on inverting the dynamic wave model (the Saint-Venant equations) in space, wherein the momentum equation is replaced by a fuzzy rule based model while the continuity equation is retained in its complete form. The fuzzy rule based model is developed by fuzzifying a new mathematical model for wave velocity, the derivational details of which are given. The advantages of the fuzzy control algorithm over other conventional control algorithms are described: it is transparent and intuitive, and no linearization of the governing equations is involved. The timing of the algorithm and the method of computation are explained. It is shown that tuning is easy and the computations are straightforward. The algorithm provides stable, realistic and robust outputs. Its disadvantage is reduced precision in the outputs due to the approximation inherent in fuzzy logic. Feedback control logic is adopted to eliminate error caused by system disturbances as well as error caused by the reduced precision of the outputs. The algorithm is tested by applying it to a water-level control problem in a hypothetical canal with a single pool and in a real canal with a series of pools. The results obtained from the algorithm are comparable to those obtained from conventional control algorithms.
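For reference, the dynamic wave model mentioned above consists of the standard Saint-Venant equations; in the algorithm the continuity equation (first line) is retained in full, while the momentum equation (second line) is the one replaced by the fuzzy rule based wave-velocity model:

```latex
% Continuity (retained in full by the algorithm):
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = q
% Momentum (replaced by the fuzzy rule based wave-velocity model):
\frac{\partial Q}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
  + g A \frac{\partial y}{\partial x} = g A \,(S_0 - S_f)
```

Here A is the flow area, Q the discharge, q the lateral inflow per unit length, y the flow depth, g gravitational acceleration, and S_0 and S_f the bed and friction slopes.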
Abstract:
Grazing for Healthy Coastal Wetlands has been developed to provide graziers, landowners and extension officers with information on managing grazing in and around Queensland's coastal wetlands, so as to maintain healthy coastal wetlands and productive grazing enterprises. It provides practical advice on how grazing and associated land management practices can be implemented to support the long-term health of coastal wetlands whilst maintaining production. The guidelines have been compiled from published literature, the knowledge of graziers and wetland managers, and the experience of extension and natural resource management professionals. They reflect the current knowledge of suitable management practices for coastal wetlands. They are designed to complement, and be considered in conjunction with, existing information resources, including the EDGEnetwork Grazing Land Management series and best management practice guidelines from regional Natural Resource Management (NRM) groups. While the recommendations apply broadly to Queensland's coastal wetlands, regional, catchment and landscape-scale variations in wetland characteristics, and the objectives of the individual grazing enterprise, should be taken into account when planning and deciding management actions for wetlands. An individual grazing property may even contain a range of wetland types with different management needs and objectives, which should be identified during whole-of-property planning. Specific land and wetland management advice should also be sought from local grazing extension officers and NRM professionals.
Abstract:
The present work provides a regional-scale assessment of the changes in acidifying deposition in Finland over the past 30 years and of the current recovery of acid-sensitive lakes from acidification in relation to changes in sulphate deposition. This information is needed to document the ecosystem benefits of costly emission reduction policies and to support further actions in air pollution policy. The development of sulphate deposition in Finland reflects that of European SO2 emissions. Before the 1990s, reductions in European sulphur emissions were relatively small and sulphate deposition showed no consistent trends. Owing to the emission reduction measures then taken, sulphate deposition began to decline clearly from the late 1980s. The bulk deposition of sulphate declined by 40-60% in most parts of the country during 1990-2003. The decline in sulphate deposition exceeded the decline in base cation deposition, which resulted in a decrease in the acidity and acidifying potential of deposition over the 1990s. Nitrogen deposition has also decreased since the late 1980s, but less than sulphate deposition, and it levelled off during the 1990s. Sulphate concentrations in all types of small lakes throughout Finland have declined since the early 1990s. The relative decrease in lake sulphate concentrations (on average 40-50%) during 1990-2003 was rather similar to the decline in sulphate deposition, indicating a direct response to the reduction in deposition. There are presently no indications of elevated nitrate concentrations in forested headwater lakes. Base cation concentrations are still declining in many lakes, especially in southern Finland, but to a lesser extent than sulphate, allowing buffering capacity (alkalinity) to increase; the increase is significant in 60% of the study lakes. Chemical recovery is leading to biological recovery, with populations of acid-sensitive fish species increasing. Recovery has been strongest in lakes in which sulphate has been the major acidifying agent, and it has been strongest and most consistent in lakes in southern Finland. The recovery of lakes in central and northern Finland is not as widespread or as strong as that observed in the south. Many catchments, particularly in central Finland, have a high proportion of peatlands and therefore high total organic carbon (TOC) concentrations in their lakes, and runoff-induced surges of organic acids have been an important confounding factor suppressing the recovery of pH and alkalinity in these lakes. Chemical recovery is progressing even in the most acidified lakes, but the buffering capacity of many lakes is still low, and they remain sensitive to acidic inputs. Further reductions in sulphur emissions are needed for alkalinity to increase in the acidified lakes. Increasing TOC concentrations have been observed in small forest lakes in Finland. These trends appear to be related to decreasing sulphate deposition and the improved acid-base status of the soil, and the rise in TOC is integral to recovery from acidification. A new challenge is climate change, with potential trends in temperature, precipitation and runoff that are expected to affect future chemical and biological recovery from acidification. The potential impact on the mobilization and leaching of organic acids may become particularly important under Finnish conditions. Long-term environmental monitoring has clearly demonstrated the success of international emission abatement strategies.
The importance and value of an integrated monitoring approach covering physical, chemical and biological variables is clearly indicated, and continuous environmental monitoring is needed as a scientific basis for further actions in air pollution policy.
Abstract:
The climate is warming, and this is especially evident in arctic areas, where the warming trend is expected to be greatest. Arctic freshwater ecosystems, a very characteristic feature of the arctic landscape, are especially sensitive to climate change. They could be used as early warning systems, but more information about ecosystem functioning and responses is needed for proper interpretation of the observations. Phytoplankton species and assemblages are especially suitable for climate-related studies, since they have short generation times and react rapidly to changes in the environment. In addition, phytoplankton provides a good tool for lake classification, since different species have different requirements and tolerance ranges for various environmental factors. The use of biological indicators is especially valuable in arctic areas, where many of the chemical factors commonly fall below the detection limit and therefore do not provide much information about the environment. This work provides new information about the species distribution and dynamics of arctic freshwater phytoplankton in relation to environmental factors. The phytoplankton of lakes in Finnish Lapland and other European high-altitude or high-latitude areas were compared. Most lakes were oligotrophic and dominated by flagellated species belonging to the chrysophytes, cryptophytes and dinoflagellates. In Finnish Lapland cryptophytes were of less importance, whereas desmids had high species richness in many of the lakes. At the pan-European scale, geographical and catchment-related factors explained most of the differences in species distributions between districts, whereas lake water chemistry (especially conductivity, SiO2 and pH) was most important regionally. Seasonal and interannual variation of phytoplankton was studied in subarctic Lake Saanajärvi. The characteristic phytoplankton species in this oligotrophic, dimictic lake belonged mainly to the chrysophytes and diatoms. The maximum phytoplankton biomass in Lake Saanajärvi occurs during autumn, while spring biomass is very low. During years with heavy snow cover the lake suffers a pH drop caused by melt waters, but the effects of this acid pulse are restricted to the surface layers and last for a relatively short period. In addition to some chemical parameters (mainly Ca and nutrients), the length of the mixing cycle and physical factors such as lake water temperature and the thermal stability of the water column had a major impact on phytoplankton dynamics. During a year with long and strong thermal stability, the phytoplankton community developed towards an equilibrium state, with heavy dominance by only a few taxa for a longer period of time. During a year with more wind and less thermal stability, the species composition was more diverse and species with different functional strategies were able to occur simultaneously. The results of this work indicate that although arctic lakes in general share many common features in their catchments and water chemistry, large differences in biological features can be found even within a relatively small area. Lakes with very different algal floras most likely do not respond in the same way to differences in environmental factors, and more information about specific arctic lake types is needed. The results also show considerable year-to-year differences in phytoplankton species distribution and dynamics, and these changes are most likely linked to climatic factors.
Abstract:
Electrical conduction in insulating materials is a complex process, and several theories have been suggested in the literature. Many phenomenological empirical models are in use in the DC cable literature. However, the impact of using different models for cable insulation has not been investigated until now, apart from claims of relative accuracy. The steady-state electric field in DC cable insulation is known to be a strong function of the DC conductivity. The DC conductivity, in turn, is a complex function of electric field and temperature. As a result, under certain conditions, the stress at the cable screen is higher than that at the conductor boundary. This paper presents detailed investigations of different empirical conductivity models suggested in the literature for HVDC cable applications. It is expressly shown that certain models give rise to erroneous results in electric field and temperature computations, and it is pointed out that using these models in the design or evaluation of cables will lead to errors.
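As an illustration of why the choice of empirical model matters, the sketch below combines one commonly used functional form, sigma(T, E) = sigma0 * exp(a*T + b*|E|), with current continuity and a fixed-point iteration to obtain the radial steady-state field. All constants and the assumed logarithmic temperature profile are illustrative placeholders, not values from the paper; with a strongly temperature-dependent conductivity, the computed stress at the screen can exceed that at the conductor, as the abstract notes.

```python
import numpy as np

# One commonly used empirical form: sigma(T, E) = sigma0 * exp(a*T + b*|E|).
# The constants below are illustrative placeholders.
SIGMA0, A, B = 1e-16, 0.10, 8e-8      # S/m, 1/K, m/V

def conductivity(T, E):
    return SIGMA0 * np.exp(A * T + B * np.abs(E))

def steady_state_field(r_in, r_out, V, T_in, T_out, n=200, iters=100):
    """Radial steady-state DC field E(r) in cylindrical insulation.

    Uses current continuity, J(r) = I / (2*pi*r) per unit length, and a
    fixed-point iteration E = J / sigma(T, E); the leakage current I is
    rescaled each pass so that the integral of E over the wall equals V.
    """
    r = np.linspace(r_in, r_out, n)
    # Assumed logarithmic steady-state temperature profile across the wall.
    T = T_in + (T_out - T_in) * np.log(r / r_in) / np.log(r_out / r_in)
    E = np.full(n, V / (r_out - r_in))          # uniform initial guess
    I = 1e-6
    for _ in range(iters):
        E = I / (2.0 * np.pi * r * conductivity(T, E))
        voltage = np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(r))
        I *= V / voltage                        # enforce the applied voltage
    return r, E

r, E = steady_state_field(0.02, 0.04, 150e3, 70.0, 40.0)
print(E[0], E[-1])   # compare stress at the conductor vs. at the screen
```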
Abstract:
A new fault-tolerant multi-transputer architecture capable of tolerating the failure of any one component in the system is described. In the proposed architecture, the processing nodes are automatically reconfigured in the event of a fault, and the computations continue from the stage at which the fault occurred. The process of reconfiguration is transparent to the user, and the identity of the failed component is communicated to the user along with the results of the computations. The parallel solution of a typical engineering problem, involving the solution of Laplace's equation by the boundary element method, has been implemented, and the performance of the architecture in the event of faults has been investigated.
Abstract:
Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study; instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA, and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.
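To make the workings of a Holland-style GA concrete, here is a minimal sketch with the three canonical operators: fitness-proportionate (roulette-wheel) selection, one-point crossover, and bit-flip mutation, applied to the standard "onemax" toy fitness. The parameters and fitness function are illustrative only, not from the survey.

```python
import random

# Minimal Holland-style GA on bit strings: fitness-proportionate
# selection, one-point crossover, bit-flip mutation.
# The toy "onemax" fitness simply counts the 1-bits.

def onemax(bits):
    return sum(bits)

def select(pop, fits):
    # small epsilon guards against a population with zero total fitness
    return random.choices(pop, weights=[f + 1e-9 for f in fits], k=1)[0]

def crossover(a, b):
    point = random.randrange(1, len(a))       # one-point crossover
    return a[:point] + b[point:]

def mutate(bits, rate=0.01):
    return [1 - bit if random.random() < rate else bit for bit in bits]

def ga(n_bits=32, pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [onemax(ind) for ind in pop]
        pop = [mutate(crossover(select(pop, fits), select(pop, fits)))
               for _ in range(pop_size)]
    return max(pop, key=onemax)

best = ga()
print(onemax(best), "of", len(best), "bits set")
```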