142 results for Hierarchical sampling
Abstract:
Navigational collisions are one of the major safety concerns in many seaports. Despite the extent of recent work on port navigational safety, little is known about harbor pilots' perception of collision risks in port fairways. This paper uses a hierarchical ordered probit model to investigate associations between perceived risks and the geometric and traffic characteristics of fairways, as well as pilot attributes. Perceived risk data, collected through a risk perception survey conducted among the Singapore port pilots, are used to calibrate the model. The intra-class correlation coefficient justifies the use of the hierarchical model over an ordinary model. Results show higher perceived risks in fairways attached to anchorages and in those featuring sharper bends and higher traffic operating speeds. Lower risks are perceived in fairways attached to shorelines and confined waters, and in those with one-way traffic, a traffic separation scheme, cardinal marks and isolated danger marks. Risk is also perceived to be higher at night.
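The hierarchical ordered probit described above can be sketched in a probabilistic programming language. The snippet below is a minimal illustration under assumed data and priors, not the authors' code: it uses PyMC's OrderedProbit distribution, invents placeholder ratings and covariates, and computes the latent-scale intra-class correlation from the pilot-level variance (the ordered probit fixes the within-pilot latent error variance at 1).

```python
# Minimal sketch (not the authors' code): two-level ordered probit with
# pilot-level random intercepts, fit with PyMC. Data are placeholders.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_pilots, n_obs, n_cov = 30, 300, 3
pilot_idx = rng.integers(n_pilots, size=n_obs)   # rating -> pilot
X = rng.normal(size=(n_obs, n_cov))              # fairway/traffic covariates
y = rng.integers(5, size=n_obs)                  # placeholder risk ratings 0-4

with pm.Model() as model:
    beta = pm.Normal("beta", 0.0, 1.0, shape=n_cov)
    sigma_u = pm.HalfNormal("sigma_u", 1.0)            # between-pilot s.d.
    u = pm.Normal("u", 0.0, sigma_u, shape=n_pilots)   # pilot random effect
    cuts = pm.Normal("cuts", 0.0, 5.0, shape=4,
                     transform=pm.distributions.transforms.ordered,
                     initval=np.array([-1.0, 0.0, 1.0, 2.0]))
    eta = pm.math.dot(X, beta) + u[pilot_idx]
    pm.OrderedProbit("y", eta=eta, cutpoints=cuts, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2)

# Latent-scale ICC: sigma_u^2 / (sigma_u^2 + 1), since the ordered probit
# fixes the within-pilot latent error variance at 1.
su2 = idata.posterior["sigma_u"] ** 2
print("ICC:", float((su2 / (su2 + 1.0)).mean()))
```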
Abstract:
This thesis develops a detailed conceptual design method and a system software architecture, defined with a parametric and generative evolutionary design system, to support an integrated interdisciplinary building design approach. The research recognises the need to shift design effort toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication for the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements across a wide range of environmental and social circumstances. A rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and their ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research proposes a design method and system that promote a collaborative relationship between human creativity and computer capability. The tectonic design approach is adopted as a process-oriented design that values the process of design as much as the product. The aim is to connect the evolutionary system to performance assessment applications, which are used as prioritised fitness functions. This will produce design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach to design will produce solutions through a design process that considers and balances the requirements of all aspects of the design. Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined, and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, where key aspects of the system that had not previously been proven in the literature were implemented to test the feasibility of the system. As a result of combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the basis for a future software development project. The evaluation stage, which includes building the prototype system to test and evaluate the system performance based on the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through an initial illustrative paper-based simulation. The HEAD system consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components.
The design schema provides constraints on the generation of designs, thus enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of designers' creativity within a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms. The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation, using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as an input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided as a higher-level description of the building that supports geometrical constraints. The Synthesis Algorithms component analyses designs at four levels: 'Room', 'Layout', 'Building' and 'Optimisation'. At each level, multiple fitness functions are embedded in the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem, so that the design requirements of each level are dealt with separately and then reassembled in a bottom-up approach, reduces the generation of non-viable solutions by constraining the options available at the next higher level. The iterative approach of exploring the range of design solutions through modification of the design schema, as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows multiple fitness functions, each relevant to a specific level, to be embedded in the genetic algorithm. This supports an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity. By focusing on finding solutions for the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists in the design decision-making process.
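The idea of a genetic algorithm driven by multiple, prioritised fitness functions can be illustrated with a toy example. The sketch below is an illustrative assumption, not the HEAD system itself: it encodes a candidate design as a flat parameter vector of room dimensions and evolves a population against two weighted fitness terms standing in for 'Room'- and 'Layout'-level criteria.

```python
# Toy GA with multiple weighted fitness functions, sketching level-specific
# criteria (room area vs. layout compactness). Encodings and fitness terms
# are hypothetical; not the HEAD system's actual algorithms.
import numpy as np

rng = np.random.default_rng(1)
POP, GENES, GENS = 40, 8, 100   # genes: 4 rooms x (width, depth)

def fitness(ind):
    w, d = ind[0::2], ind[1::2]
    areas = w * d
    f_room = -np.sum((areas - 12.0) ** 2)      # 'Room' level: target 12 m^2
    f_layout = -np.abs(w.sum() - d.sum())      # 'Layout' level: compact plan
    return 1.0 * f_room + 5.0 * f_layout       # prioritised weights

pop = rng.uniform(2.0, 6.0, size=(POP, GENES))
for _ in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    i, j = rng.integers(POP, size=(2, POP))            # tournament selection
    parents = pop[np.where(scores[i] > scores[j], i, j)]
    cut = rng.integers(1, GENES, size=POP)             # one-point crossover
    kids = parents.copy()
    for k in range(0, POP - 1, 2):
        kids[k, cut[k]:], kids[k + 1, cut[k]:] = (
            parents[k + 1, cut[k]:].copy(), parents[k, cut[k]:].copy())
    kids += rng.normal(0.0, 0.1, size=kids.shape)      # Gaussian mutation
    pop = np.clip(kids, 2.0, 6.0)

best = pop[np.argmax([fitness(p) for p in pop])]
print("best room sizes (w, d):", best.reshape(4, 2).round(2))
```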
Abstract:
Most crash severity studies have ignored severity correlations between driver-vehicle units involved in the same crash. Models that do not account for these within-crash correlations yield biased estimates of factor effects. This study developed a Bayesian hierarchical binomial logistic model to identify the significant factors affecting the severity of driver injury and vehicle damage in traffic crashes at signalized intersections. Crash data from Singapore were employed to calibrate the model. Model fit assessment and comparison using the Intra-class Correlation Coefficient (ICC) and the Deviance Information Criterion (DIC) confirmed the suitability of introducing crash-level random effects. Crashes occurring at peak times, in good street lighting conditions, or involving pedestrian injuries are associated with lower severity, while those occurring at night, at T/Y-type intersections, on the right-most lane, or at intersections equipped with red light cameras have greater odds of being severe. Moreover, heavy vehicles offer better resistance to severe crashes, while crashes involving two-wheeled vehicles, young or aged drivers, or an offending party are more likely to result in severe injuries.
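A crash-level random intercept is what captures the within-crash correlation described above. The sketch below is a minimal PyMC illustration under placeholder data and priors, not the authors' model; the final comment uses the standard latent-scale ICC formula for logistic models (residual variance fixed at pi^2/3).

```python
# Minimal sketch (not the authors' code): binary severity outcome per
# driver-vehicle unit, with a crash-level random intercept capturing
# within-crash correlation. Data and priors are placeholders.
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
n_crashes, n_units, n_cov = 200, 450, 4
crash_idx = rng.integers(n_crashes, size=n_units)   # unit -> parent crash
X = rng.normal(size=(n_units, n_cov))               # unit/crash covariates
y = rng.integers(2, size=n_units)                   # placeholder severity

with pm.Model() as model:
    beta0 = pm.Normal("beta0", 0.0, 5.0)
    beta = pm.Normal("beta", 0.0, 1.0, shape=n_cov)
    sigma_c = pm.HalfNormal("sigma_c", 1.0)          # between-crash s.d.
    c = pm.Normal("c", 0.0, sigma_c, shape=n_crashes)
    eta = beta0 + pm.math.dot(X, beta) + c[crash_idx]
    pm.Bernoulli("y", logit_p=eta, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2)

# Latent-scale ICC for a logistic model: sigma_c^2 / (sigma_c^2 + pi^2/3)
s2 = idata.posterior["sigma_c"] ** 2
print("ICC:", float((s2 / (s2 + np.pi**2 / 3)).mean()))
```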
Abstract:
Motorcycles are overrepresented in road traffic crashes and are particularly vulnerable at signalized intersections. The objective of this study is to identify causal factors affecting motorcycle crashes at both four-legged and T signalized intersections. Treating the data as time-series cross-section panels, this study explores different hierarchical Poisson models and finds that the model allowing an autoregressive lag-1 specification in the error term is the most suitable. Results show that the number of lanes at four-legged signalized intersections significantly increases motorcycle crashes, largely because of the higher exposure resulting from greater motorcycle accumulation at the stop line. Furthermore, the presence of a wide median and an uncontrolled left-turn lane at major roadways of four-legged intersections exacerbates this potential hazard. For T signalized intersections, the presence of an exclusive right-turn lane at both major and minor roadways and an uncontrolled left-turn lane at major roadways increases motorcycle crashes. Motorcycle crashes increase on high-speed roadways because motorcyclists are more vulnerable and less likely to react in time during conflicts. The presence of red light cameras reduces motorcycle crashes significantly at both four-legged and T intersections. With a red light camera, motorcycles are less exposed to conflicts because, as observed, they are more disciplined in queuing at the stop line and less likely to jump the start of green.
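The autoregressive-error specification can be written as log mu[i,t] = x[i,t]'beta + eps[i,t] with eps[i,t] = rho * eps[i,t-1] + nu[i,t]. The snippet below simulates such a panel with assumed parameter values, purely to show the structure; it does not fit the authors' model.

```python
# Simulating a panel Poisson model with an AR(1) error term:
#   y[i,t] ~ Poisson(mu[i,t]),  log mu[i,t] = x[i,t]*beta + eps[i,t]
#   eps[i,t] = rho * eps[i,t-1] + nu[i,t],  nu ~ Normal(0, sigma)
# Parameter values are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_years = 50, 10
beta0, beta, rho, sigma = -0.5, 0.3, 0.6, 0.4

x = rng.normal(size=(n_sites, n_years))      # e.g. standardized lane count
eps = np.zeros((n_sites, n_years))
eps[:, 0] = rng.normal(0, sigma / np.sqrt(1 - rho**2), n_sites)  # stationary
for t in range(1, n_years):
    eps[:, t] = rho * eps[:, t - 1] + rng.normal(0, sigma, n_sites)

mu = np.exp(beta0 + beta * x + eps)
y = rng.poisson(mu)                          # crash counts per site-year

# The serial correlation built into the errors:
print("lag-1 corr of eps:", np.corrcoef(eps[:, :-1].ravel(),
                                        eps[:, 1:].ravel())[0, 1].round(2))
```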
Abstract:
Traditional crash prediction models, such as generalized linear regression models, are incapable of taking into account the multilevel data structure that extensively exists in crash data. Disregarding possible within-group correlations can lead to models giving unreliable and biased estimates of unknowns. This study proposes a 5×T-level hierarchy, viz. (Geographic region level – Traffic site level – Traffic crash level – Driver-vehicle unit level – Vehicle-occupant level) × Time level, to establish a general form of multilevel data structure in traffic safety analysis. To properly model the potential cross-group heterogeneity due to the multilevel data structure, a framework of Bayesian hierarchical models that explicitly specify the multilevel structure and correctly yield parameter estimates is introduced and recommended. The proposed method is illustrated in an individual-severity analysis of intersection crashes using the Singapore crash records. This study demonstrates the importance of accounting for within-group correlations and the flexibility and effectiveness of the Bayesian hierarchical method in modeling the multilevel structure of traffic crash data.
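Nesting is the mechanical core of such a hierarchy: each level's effect is drawn around its parent's effect. The fragment below is a heavily simplified PyMC sketch with two nested levels (site within region) and synthetic index arrays; all names and priors are illustrative assumptions, not the paper's specification.

```python
# Sketch of nested random intercepts for a multilevel structure
# (region -> site -> crash), with placeholder data. Illustrative only.
import numpy as np
import pymc as pm

rng = np.random.default_rng(4)
n_regions, n_sites, n_crashes = 5, 40, 400
site_region = rng.integers(n_regions, size=n_sites)   # site -> region
crash_site = rng.integers(n_sites, size=n_crashes)    # crash -> site
y = rng.integers(2, size=n_crashes)                   # severity indicator

with pm.Model():
    s_r = pm.HalfNormal("s_r", 1.0)                   # between-region s.d.
    s_s = pm.HalfNormal("s_s", 1.0)                   # between-site s.d.
    r = pm.Normal("r", 0.0, s_r, shape=n_regions)     # region effects
    s = pm.Normal("s", r[site_region], s_s, shape=n_sites)  # sites in regions
    eta = pm.Normal("b0", 0.0, 5.0) + s[crash_site]
    pm.Bernoulli("y", logit_p=eta, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2)
```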
Abstract:
This study proposes a full Bayes (FB) hierarchical modeling approach to traffic crash hotspot identification. The FB approach is able to account for all uncertainties associated with crash risk and various risk factors by estimating a posterior distribution of site safety, on which various ranking criteria can be based. Moreover, through hierarchical model specification, the FB approach can flexibly take into account various heterogeneities of crash occurrence due to spatiotemporal effects on traffic safety. Using Singapore intersection crash data (1997-2006), an empirical evaluation was conducted to compare the proposed FB approach with state-of-the-art approaches. Results show that the Bayesian hierarchical models accommodating site-specific effects and serial correlation have better goodness-of-fit than non-hierarchical models. Furthermore, all model-based approaches perform significantly better in safety ranking than the naive approach using raw crash counts. The FB hierarchical models were found to significantly outperform the standard EB approach in correctly identifying hotspots.
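Once posterior draws of each site's expected crash frequency are available, ranking criteria reduce to simple summaries of those draws. The sketch below uses synthetic stand-in draws (not fitted output) and ranks sites by the posterior probability of exceeding an assumed risk threshold, one of several possible criteria.

```python
# Sketch of posterior-based hotspot ranking: given MCMC draws of each
# site's expected crash frequency, rank sites by the posterior probability
# of exceeding a threshold. Draws here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_sites, n_draws = 100, 2000
draws = rng.gamma(shape=rng.uniform(1, 5, n_sites)[:, None],
                  scale=1.0, size=(n_sites, n_draws))

threshold = 4.0                              # crashes/year deemed 'high risk'
p_exceed = (draws > threshold).mean(axis=1)  # P(lambda_i > threshold | data)
hotspots = np.argsort(p_exceed)[::-1][:10]   # top-10 ranked sites
print("top sites:", hotspots, p_exceed[hotspots].round(2))
```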
Abstract:
Navigational collisions are a major safety concern in many seaports. Despite recent advances in port navigational safety research, little is known about harbor pilots' perception of collision risks in anchorages. This study attempts to model such risks by employing a hierarchical ordered probit model, calibrated with data collected through a risk perception survey conducted on Singapore port pilots. The hierarchical model is found to be useful for accounting for correlations in the risks perceived by individual pilots. Results show higher perceived risks in anchorages attached to intersections and to local and international fairways, becoming more critical at night. Lower risks are perceived in anchorages bounded by shoreline and featuring greater water depth, lower density of stationary ships, cardinal marks and isolated danger marks. Pilotage experience shows a negative effect on perceived risks. This study indicates that hierarchical modeling is useful for treating correlations in navigational safety data.
Abstract:
Baseline monitoring of groundwater quality aims to characterize the ambient condition of the resource and identify spatial or temporal trends. Sites comprising any baseline monitoring network must be selected to provide a representative perspective of groundwater quality across the aquifer(s) of interest. Hierarchical cluster analysis (HCA) has been used as a means of assessing the representativeness of a groundwater quality monitoring network, using example datasets from New Zealand. HCA allows New Zealand's national and regional monitoring networks to be compared in terms of the number of water-quality categories identified in each network, the hydrochemistry at the centroids of these water-quality categories, the proportions of monitoring sites assigned to each water-quality category, and the range of concentrations for each analyte within each water-quality category. Through the HCA approach, the National Groundwater Monitoring Programme (117 sites) is shown to provide a highly representative perspective of groundwater quality across New Zealand, relative to the amalgamated regional monitoring networks operated by 15 different regional authorities (680 sites have sufficient data for inclusion in HCA). This methodology can be applied to evaluate the representativeness of any subset of monitoring sites taken from a larger network.
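The comparison described here can be pictured with scipy's agglomerative clustering tools. The snippet below is a schematic with synthetic hydrochemistry data and an assumed analyte count and category number: it clusters all sites, then compares how a 117-site subset (standing in for the national network) covers the categories found across the full 680-site network.

```python
# Sketch of the HCA network-representativeness idea: cluster sites by
# standardized hydrochemistry, then compare category proportions between
# a subset network and the full network. Synthetic data; the analyte
# list and category count are assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)
n_sites, n_analytes, k = 680, 8, 6         # e.g. major ions, NO3-N, ...
X = rng.normal(size=(n_sites, n_analytes))
Xz = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score each analyte

Z = linkage(Xz, method="ward")             # Ward's agglomerative clustering
cats = fcluster(Z, t=k, criterion="maxclust")   # k water-quality categories

subset = rng.choice(n_sites, size=117, replace=False)  # 'national' subset
full_prop = np.bincount(cats, minlength=k + 1)[1:] / n_sites
sub_prop = np.bincount(cats[subset], minlength=k + 1)[1:] / len(subset)
print("category proportions, full vs subset:")
print(np.round(full_prop, 2), np.round(sub_prop, 2), sep="\n")
```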
Abstract:
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, i.e. time intervals for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of population pharmacokinetic models, which are generally nonlinear mixed-effects models, no analytical solution is available for determining sampling windows. We propose a method for determining sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determining sampling windows for any nonlinear mixed-effects model, although our work focuses on an application to population pharmacokinetic models.
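One way to picture the MCMC idea: treat a design-efficiency function of the sampling time as an unnormalized density and draw from it with random-walk Metropolis; an interval covering most of the draws can then be read as a sampling window. The efficiency function below is a made-up placeholder, not the paper's criterion.

```python
# Random-walk Metropolis over a single sampling time t, targeting a density
# proportional to a hypothetical design-efficiency function eff(t). Draws
# concentrate around the optimal time; a central interval of them serves
# as a sampling window.
import numpy as np

def eff(t):
    # placeholder efficiency: peaked near t = 2 h, zero outside (0, 12)
    return np.exp(-((t - 2.0) ** 2) / 0.5) if 0.0 < t < 12.0 else 0.0

rng = np.random.default_rng(7)
t, draws = 2.0, []
for _ in range(20000):
    prop = t + rng.normal(0.0, 0.5)               # random-walk proposal
    if rng.random() < eff(prop) / max(eff(t), 1e-300):
        t = prop                                  # Metropolis accept step
    draws.append(t)

window = np.percentile(draws[2000:], [5, 95])     # drop burn-in, 90% window
print("sampling window (h): %.2f-%.2f" % tuple(window))
```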
Abstract:
Secrecy of decryption keys is an important prerequisite for the security of any encryption scheme, and compromised private keys must be immediately replaced. Forward Security (FS), introduced to Public Key Encryption (PKE) by Canetti, Halevi, and Katz (Eurocrypt 2003), reduces the damage from compromised keys by guaranteeing confidentiality of messages that were encrypted prior to the compromise event. The FS property was also shown to be achievable in (Hierarchical) Identity-Based Encryption (HIBE) by Yao, Fazio, Dodis, and Lysyanskaya (ACM CCS 2004). Yet, for emerging encryption techniques offering flexible access control to encrypted data by means of functional relationships between ciphertexts and decryption keys, FS protection was not known to exist. In this paper we introduce FS to the powerful setting of Hierarchical Predicate Encryption (HPE), proposed by Okamoto and Takashima (Asiacrypt 2009). Anticipated applications of FS-HPE schemes can be found in searchable encryption and in fully private communication. Considering the dependencies among these concepts, our FS-HPE scheme implies forward-secure flavors of Predicate Encryption and (Hierarchical) Attribute-Based Encryption. Our FS-HPE scheme guarantees forward security for plaintexts and for attributes that are hidden in HPE ciphertexts. It further allows delegation of decryption abilities at any point in time, independent of FS time evolution. It realizes zero-inner-product predicates and is proven adaptively secure under standard assumptions. As the "cross-product" approach taken in FS-HIBE is not directly applicable to the HPE setting, our construction resorts to techniques that are specific to existing HPE schemes and extends them with what can be seen as reminiscent of binary tree encryption from FS-PKE.
Abstract:
Effective, statistically robust sampling and surveillance strategies form an integral component of large agricultural industries such as the grains industry. Intensive in-storage sampling is essential for pest detection, Integrated Pest Management (IPM), determining grain quality and satisfying importing nations' biosecurity concerns, while surveillance over broad geographic regions ensures that biosecurity risks can be excluded, monitored, eradicated or contained within an area. In the grains industry, a number of qualitative and quantitative methodologies for surveillance and in-storage sampling have been considered. Research has primarily focussed on developing statistical methodologies for in-storage sampling strategies, concentrating on the detection of pest insects within a grain bulk; however, the need for effective and statistically defensible surveillance strategies has also been recognised. Interestingly, although surveillance and in-storage sampling have typically been considered independently, many techniques and concepts are common to the two fields of research. This review considers the development of statistically based in-storage sampling and surveillance strategies and identifies methods that may be useful for both. We discuss the utility of new quantitative and qualitative approaches, such as Bayesian statistics, fault trees and more traditional probabilistic methods, and show how these methods may be used in both surveillance and in-storage sampling systems.
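A worked example of the traditional probabilistic reasoning mentioned above: under an assumed random (Poisson) distribution of insects in the grain bulk, detection probability follows directly from the sample size and insect density. The densities and sample sizes below are illustrative, not drawn from the review.

```python
# Worked detection-probability sketch for in-storage sampling: if insects
# occur randomly at density d per kg, the count in an n-kg sample is
# approximately Poisson(n * d), so P(detect at least one) = 1 - exp(-n * d).
# Densities and sample sizes are illustrative assumptions.
import numpy as np

d = 0.005                                 # insects per kg (1 per 200 kg)
n = np.array([50, 100, 300, 600, 1200])   # sample sizes in kg

p_detect = 1.0 - np.exp(-n * d)
for ni, pi in zip(n, p_detect):
    print(f"sample {ni:5d} kg -> P(detect) = {pi:.2f}")

# Inverting the formula gives the sample size needed for a target
# detection probability, e.g. 95%:
target = 0.95
print("kg needed for 95% detection:", -np.log(1 - target) / d)
```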
Abstract:
Acoustic sensors provide an effective means of monitoring biodiversity at large spatial and temporal scales. They can continuously and passively record large volumes of data over extended periods; however, these data must be analysed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced users can produce accurate results, but the time and effort required to process even small volumes of data can make manual analysis prohibitive. Our research examined the use of sampling methods to reduce the cost of analysing large volumes of acoustic sensor data while retaining high levels of species detection accuracy. Utilising five days of manually analysed acoustic sensor data from four sites, we examined a range of sampling rates and methods, including random, stratified and biologically informed. Our findings indicate that randomly selecting 120 one-minute samples from the three hours immediately following dawn provided the most effective sampling method. This method detected, on average, 62% of total species after 120 one-minute samples were analysed, compared to 34% of total species from traditional point counts. Our results demonstrate that targeted sampling methods can provide an effective means of analysing large volumes of acoustic sensor data efficiently and accurately.
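The dawn-sampling evaluation can be mimicked with a species-accumulation simulation. The presence matrix below is a random placeholder (the real study used manually analysed recordings); the point is only to show how randomly drawn one-minute samples accumulate distinct species.

```python
# Sketch of the dawn-sampling evaluation: given a synthetic matrix of
# which species are audible in each one-minute segment, draw random
# minutes from the three hours after dawn and track how many distinct
# species accumulate. Detection rates are assumptions.
import numpy as np

rng = np.random.default_rng(8)
n_minutes, n_species = 180, 40            # 3 h after dawn, species pool
# each species audible in a random ~5% of minutes
presence = rng.random((n_minutes, n_species)) < 0.05

minutes = rng.choice(n_minutes, size=120, replace=False)
detected = np.zeros(121)
seen = np.zeros(n_species, dtype=bool)
for k, m in enumerate(minutes, start=1):
    seen |= presence[m]                   # union of species heard so far
    detected[k] = seen.sum()

print("species after 120 random dawn minutes:", int(detected[120]),
      "of", n_species)
```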
Abstract:
Background

The onsite treatment of sewage and effluent disposal within the premises is widely prevalent in rural and urban-fringe areas due to the general unavailability of reticulated wastewater collection systems. Despite the seemingly low technology of these systems, failure is common and in many cases leads to adverse public health and environmental consequences. It is therefore important that careful consideration is given to the design and location of onsite sewage treatment systems, which requires an understanding of the factors that influence treatment performance. The use of subsurface effluent absorption systems is the most common form of effluent disposal for onsite sewage treatment, particularly for septic tanks. Additionally, in the case of septic tanks, a subsurface disposal system is generally an integral component of the sewage treatment process, so location-specific factors play a key role in this context.

The project

The primary aims of the research project are:
• to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site;
• to identify important areas where there is currently a lack of relevant research knowledge and further investigation is needed.
These tasks were undertaken with the objective of facilitating the development of performance-based planning and management strategies for onsite sewage treatment. The primary focus of the research project has been on septic tanks; by implication, the investigation has been confined to subsurface soil absorption systems. The design and treatment processes taking place within the septic tank chamber itself did not form part of the investigation. In the evaluation, the treatment performance of soil absorption systems is related to the physico-chemical characteristics of the soil. Five broad categories of soil types were considered for this purpose. The number of systems investigated was based on the proportionate area of urban development within the Brisbane region located on each soil type. In the initial phase of the investigation, though the majority of the systems evaluated were septic tanks, a small number of aerobic wastewater treatment systems (AWTS) were also included, primarily to compare the effluent quality of systems employing different generic treatment processes. It is important to note that the number of different types of systems investigated was relatively small, which does not permit a statistical analysis of the results obtained. This is an important issue considering the large number of parameters that can influence treatment performance and their wide variability.

The report

This report is the second in a series of three reports focussing on the performance evaluation of onsite treatment of sewage. The research project was initiated at the request of the Brisbane City Council. The work undertaken included site investigation and testing of sewage effluent and soil samples taken at distances of 1 and 3 m from the effluent disposal area. The project component discussed in the current report formed the basis for the more detailed investigation undertaken subsequently. The outcomes from the initial studies are discussed, which enabled the identification of factors to be investigated further. Primarily, this report contains the results of the field monitoring program, the initial analysis undertaken and preliminary conclusions.
Field study and outcomes

Initially commencing with a list of 252 locations in 17 different suburbs, a total of 22 sites in 21 different locations were monitored. These sites were selected based on predetermined criteria. Obtaining house owners' agreement to participate in the monitoring study was not an easy task, and six of the sites had to be abandoned subsequently for various reasons. The remaining sites included eight septic systems with subsurface effluent disposal treating blackwater or combined black and greywater, two sites treating greywater only and six sites with AWTS. In addition to collecting effluent and soil samples from each site, a detailed field investigation, including a series of house owner interviews, was also undertaken. Significant observations were made during the field investigations. In addition to site-specific observations, the general observations include the following:
• Most house owners are unaware of the need for regular maintenance. Sludge removal had not been undertaken in any of the septic tanks monitored. Even in the case of aerated wastewater treatment systems, the regular inspections by the supplier are confined to the treatment system and do not include the effluent disposal system. As the investigations revealed, this is not a satisfactory situation.
• In the case of separate greywater systems, only one site had a suitably functioning disposal arrangement. The general practice is to employ a garden hose to siphon the greywater for use in surface irrigation of the garden.
• At most sites, the soil profile showed significant lateral percolation of effluent. As such, the flow of effluent to surface water bodies is a distinct possibility.
• The need to investigate subsurface conditions to a depth greater than that required for the standard percolation test was clearly evident. On occasion, seemingly permeable soil was found to have an underlying impermeable soil layer, or vice versa.
The important outcomes from the testing program include the following:
• Though effluent treatment is influenced by the physico-chemical characteristics of the soil, it was not possible to distinguish between the treatment performance of different soil types. This leads to the hypothesis that effluent renovation is significantly influenced by the combination of various physico-chemical parameters rather than by single parameters, which would make the processes involved strongly site-specific.
• Generally, the improvement in effluent quality appears to take place only within the initial 1 m of travel, without any appreciable improvement thereafter. This relates only to the degree of improvement obtained and does not imply that this quality is satisfactory. It calls into question the value of adopting setback distances from sensitive water bodies.
• Use of AWTS for sewage treatment may provide effluent of higher quality suitable for surface disposal. However, on the whole, after 1-3 m of travel through the subsurface it was not possible to distinguish any significant differences in quality between effluent originating from septic tanks and from AWTS.
• In comparison with effluent quality from a conventional wastewater treatment plant, most systems were found to perform satisfactorily with regard to Total Nitrogen. The success rate was much lower in the case of faecal coliforms. It is important to note, however, that five of the systems exhibited problems with effluent disposal, resulting in surface flow, which could lead to possible contamination of surface water courses.
• The ratio of TDS to EC is about 0.42, whilst the optimum recommended value for the use of treated effluent in irrigation is about 0.64. This means a higher salt content in the effluent than is advisable for irrigation use; a consequence would be the accumulation of salts to a concentration harmful to crops or the landscape unless adequate leaching is present. These relatively high EC values are present even in the case of AWTS where surface irrigation of effluent is being undertaken. It is important to note that this is not an artefact of the treatment process but rather an indication of the quality of the wastewater generated in the household. This clearly indicates the need for further research to evaluate the suitability of various soil types for the surface irrigation of effluent where the TDS/EC ratio is less than 0.64.
• Effluent percolating through the subsurface absorption field may travel in the form of dilute pulses. As such, the effluent will move through the soil profile forming fronts of elevated parameter levels.
• The downward flow of effluent and leaching of the soil profile are evident in the case of podsolic, lithosol and krasnozem soils. Lateral flow of effluent is evident in the case of prairie soils. Gleyed podsolic soils indicate poor drainage and ponding of effluent.
In the current phase of the research project, a number of chemical indicators such as EC, pH and chloride concentration were employed to investigate the extent of effluent flow and to understand how soil renovates effluent. The soil profile, especially its texture, structure and moisture regime, was examined in an engineering sense to determine the movement of water into and through the soil. However, it is not only the physical characteristics that matter; the chemical characteristics of the soil also play a key role in the effluent renovation process. Therefore, in order to understand the complex processes taking place in a subsurface effluent disposal area, it is important that the identified influential parameters are evaluated using soil chemical concepts. Consequently, the primary focus of the next phase of the research project will be to identify linkages between the various important parameters. The research thus envisaged will help to develop robust criteria for evaluating the performance of subsurface disposal systems.