847 results for Vulnerability curve
Abstract:
Continuous monitoring of diesel engine performance is critical for early detection of faults in the engine before they develop into functional failures. Instantaneous angular speed (IAS) analysis is one of the few non-intrusive condition monitoring techniques that can be utilized for such tasks. In this experimental study, IAS analysis was employed to estimate the loading condition of a 4-stroke 4-cylinder diesel engine under laboratory conditions. It was shown that IAS analysis can provide useful information about engine speed variation caused by the changing piston momentum and crankshaft acceleration during the engine combustion process. It was also found that the major order component of the IAS spectrum directly associated with the engine firing frequency (at twice the mean shaft revolution speed) can be utilized to estimate the engine loading condition regardless of whether the engine is operating at normal running conditions or in a simulated faulty injector case. The amplitude of this order component follows a clear exponential curve as the loading condition changes. A mathematical relationship was established for the estimation of the engine power output based on the amplitude of the major order component of the measured IAS spectrum.
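The exponential amplitude-load relationship the abstract reports can be sketched as a simple fit. All names and numbers below are illustrative assumptions, not values from the paper: a model of the form A = a·exp(b·P) is fitted by log-linearisation and then inverted to estimate load from a measured order-component amplitude.

```python
import numpy as np

def fit_exponential(load, amplitude):
    """Fit amplitude = a * exp(b * load) via least squares on log(amplitude)."""
    b, log_a = np.polyfit(load, np.log(amplitude), 1)
    return np.exp(log_a), b

def estimate_load(amplitude, a, b):
    """Invert the fitted model: load = ln(amplitude / a) / b."""
    return np.log(amplitude / a) / b

# Synthetic data standing in for measured IAS order-component amplitudes.
load_kw = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
amp = 0.5 * np.exp(0.04 * load_kw)

a, b = fit_exponential(load_kw, amp)
```

With real measurements the same inversion would give the power-output estimate the abstract describes.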
Abstract:
Cold-formed steel stud walls are a major component of Light Steel Framing (LSF) building systems used in commercial, industrial and residential buildings. In conventional LSF stud wall systems, thin steel studs are protected from fire by placing one or two layers of plasterboard on both sides, with or without cavity insulation. However, there is very limited data about the structural and thermal performance of stud wall systems, while past research has shown contradictory results, for example, about the benefits of cavity insulation. This research was therefore conducted to improve the knowledge and understanding of the structural and thermal performance of cold-formed steel stud wall systems (both load bearing and non-load bearing) under fire conditions and to develop new improved stud wall systems, including reliable and simple methods to predict their fire resistance rating. Full scale fire tests of cold-formed steel stud wall systems formed the basis of this research. This research proposed an innovative LSF stud wall system in which a composite panel made of two plasterboards with insulation between them was used to improve the fire rating. Hence, the fire tests included both conventional steel stud walls with and without cavity insulation and the new composite panel system. A propane-fired gas furnace was specially designed and constructed first. The furnace was designed to deliver heat in accordance with the standard time-temperature curve specified in AS 1530.4 (SA, 2005). A compression loading frame capable of loading the individual studs of a full scale steel stud wall system was also designed and built for the load-bearing tests. Fire tests included comprehensive time-temperature measurements across the thickness and along the length of all the specimens using K-type thermocouples. They also included measurements of the load-deformation characteristics of stud walls until failure.
The first phase of fire tests included 15 small scale fire tests of gypsum plasterboards and composite panels using different types of insulating material of varying thickness and density. The fire performance of single and multiple layers of gypsum plasterboard was assessed, including the effect of the interfaces between adjacent plasterboards on thermal performance. The effects of insulation materials such as glass fibre, rock fibre and cellulose fibre were also determined, while the tests provided important data on the temperature at which the fall-off of external plasterboards occurred. In the second phase, nine small scale non-load bearing wall specimens were tested to investigate the thermal performance of conventional and innovative steel stud wall systems. The effects of single and multiple layers of plasterboard, with and without vertical joints, were investigated. The new composite panels were seen to offer greater thermal protection to the studs in comparison with the conventional panels. In the third phase of fire tests, nine full scale load bearing wall specimens were tested to study the thermal and structural performance of the load bearing wall assemblies. A full scale test was also conducted at ambient temperature. These tests showed that the use of cavity insulation led to inferior fire performance of walls, and provided good explanations and supporting research data to overcome incorrect industry assumptions about cavity insulation. They demonstrated that the use of insulation externally in a composite panel enhanced the thermal and structural performance of stud walls and increased their fire resistance rating significantly. Hence, this research recommends the use of the new composite panel system for cold-formed LSF walls. This research also included steady state tensile tests at ambient and elevated temperatures to address the lack of reliable mechanical properties for high grade cold-formed steels at elevated temperatures.
Suitable predictive equations were developed for calculating the yield strength and elastic modulus at elevated temperatures. In summary, this research has developed comprehensive experimental thermal and structural performance data for both the conventional and the proposed non-load bearing and load bearing stud wall systems under fire conditions. Idealized hot flange temperature profiles have been developed for non-insulated, cavity insulated and externally insulated load bearing wall models along with suitable equations for predicting their failure times. A graphical method has also been proposed to predict the failure times (fire rating) of non-load bearing and load bearing walls under different load ratios. The results from this research are useful to both fire researchers and engineers working in this field. Most importantly, this research has significantly improved the knowledge and understanding of cold-formed LSF walls under fire conditions, and developed an innovative LSF wall system with increased fire rating. It has clearly demonstrated the detrimental effects of using cavity insulation, and has paved the way for Australian building industries to develop new wall panels with increased fire rating for commercial applications worldwide.
Abstract:
This paper describes the vulnerability of masonry under shear. First, the mechanisms of in-plane and out-of-plane shear performance of masonry are reviewed; both unreinforced and lightly reinforced masonry wall systems are considered. Factors affecting the response of unreinforced and reinforced masonry to shear are described, and the effect of the variability of those factors on the failure mode of masonry shear walls is also discussed. Some critique is provided of the existing design provisions in various masonry standards.
Abstract:
This article examines, from both within and outside the context of compulsory third party motor vehicle insurance, the different academic and judicial perspectives regarding the relevance of insurance to the imposition of negligence liability via the formulation of legal principle. In particular, the utility of insurance in setting the standard of care held to be owed by a learner driver to an instructor in Imbree v McNeilly is analysed, and the implications of this High Court decision, in light of current jurisprudential argument and for other principles of negligence liability, namely claimant vulnerability, are considered. It concludes that ultimately one’s stance as to the relevance, or otherwise, of insurance to the development of the common law of negligence will be predominantly influenced by normative views of torts’ function as an instrument of corrective or distributive justice.
Abstract:
Approximately 20 years have passed since the NTSB issued its original recommendation to expedite the development, certification and production of low-cost proximity warning and conflict detection systems for general aviation [1]. While some systems are in place (TCAS [2]), "see-and-avoid" remains the primary means of separation between light aircraft sharing the national airspace. The requirement for a collision avoidance, or sense-and-avoid, capability onboard unmanned aircraft has been identified by leading government, industry and regulatory bodies as one of the most significant challenges facing the routine operation of unmanned aerial systems (UAS) in the national airspace system (NAS) [3, 4]. In this thesis, we propose and develop a novel image-based collision avoidance system to detect and avoid an impending conflict scenario (with an intruder) without first estimating or filtering range. The proposed collision avoidance system (CAS) uses the relative bearing and the angular area subtended by the intruder, both estimated from an image, to form a test statistic. This test statistic is used in a thresholding technique to decide whether a conflict scenario is imminent. If deemed necessary, the system commands the aircraft to perform a manoeuvre based on the estimated bearing and constrained by the CAS sensor field of view. Using a simulation environment in which the UAS is mathematically modelled and a flight controller is developed, we show that Monte Carlo simulations can be used to estimate the risk ratio of a Mid Air Collision (MAC) or a Near Mid Air Collision (NMAC). We also show the performance gain this system has over a simplified, bearings-only version. This performance gain is demonstrated in the form of a standard operating characteristic curve. Finally, it is shown that the proposed CAS performs at a level comparable to the equivalent level of safety (ELOS) expectations of current manned aviation for Class E airspace.
In some cases, the CAS may be oversensitive, manoeuvring the owncraft when not necessary, but this constitutes a more conservative, and therefore safer, flying procedure in most instances.
Abstract:
Photochemistry has made significant contributions to our understanding of many important natural processes as well as the scientific discoveries of the man-made world. The measurements from such studies are often complex and may require advanced data interpretation with the use of multivariate or chemometrics methods. In general, such methods have been applied successfully for data display, classification, multivariate curve resolution and prediction in analytical chemistry, environmental chemistry, engineering, medical research and industry. However, in photochemistry, by comparison, applications of such multivariate approaches were found to be less frequent although a variety of methods have been used, especially with spectroscopic photochemical applications. The methods include Principal Component Analysis (PCA; data display), Partial Least Squares (PLS; prediction), Artificial Neural Networks (ANN; prediction) and several models for multivariate curve resolution related to Parallel Factor Analysis (PARAFAC; decomposition of complex responses). Applications of such methods are discussed in this overview and typical examples include photodegradation of herbicides, prediction of antibiotics in human fluids (fluorescence spectroscopy), non-destructive in- and on-line monitoring (near infrared spectroscopy) and fast-time resolution of spectroscopic signals from photochemical reactions. It is also quite clear from the literature that the scope of spectroscopic photochemistry was enhanced by the application of chemometrics. To highlight and encourage further applications of chemometrics in photochemistry, several additional chemometrics approaches are discussed using data collected by the authors. The use of a PCA biplot is illustrated with an analysis of a matrix containing data on the performance of photocatalysts developed for water splitting and hydrogen production. 
In addition, the applications of Multi-Criteria Decision Making (MCDM) ranking methods and Fuzzy Clustering are demonstrated with an analysis of a water quality data matrix. Other examples include the application of simultaneous kinetic spectroscopic methods for the prediction of pesticides, and the use of a response fingerprinting approach for the classification of medicinal preparations. In general, the overview endeavours to emphasise the advantages of chemometric interpretation of multivariate photochemical data, and an Appendix is provided with references and summaries of the common and less usual chemometrics methods noted in this work. Crown Copyright © 2010.
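The first chemometrics method the overview names, PCA for data display, can be sketched in a few lines. The data matrix below is synthetic (an assumption for illustration, not data from the paper): rows are samples, columns are measured variables, and the returned scores and loadings are the quantities typically combined in a biplot.

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD of the column-centred data matrix.

    Returns (scores, loadings, explained_variance).
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # sample coordinates
    loadings = Vt[:n_components].T                    # variable directions
    explained = (s ** 2) / (X.shape[0] - 1)
    return scores, loadings, explained[:n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
X[:, 1] = 2.0 * X[:, 0] + 0.1 * X[:, 1]   # build in correlated structure

scores, loadings, var = pca(X)
```

Plotting the scores and loadings on the same axes gives the PCA biplot illustrated in the overview.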
Abstract:
Background: This prospective study investigates the use of intraoperative fluoroscopy in 28 consecutive cases undergoing hallux valgus surgery. To our knowledge there have been no studies validating the use of intraoperative fluoroscopy in hallux valgus surgery. Methods: We performed a prospective investigation of 28 consecutive cases undergoing hallux valgus surgery. Fluoroscopic images were examined intraoperatively and any significant unforeseen findings were documented. A comparison was made between the fluoroscopic images and weight-bearing films taken 6 weeks postoperatively to examine whether the intraoperative images are an accurate representation of the standard films obtained postoperatively. We excluded those patients that went on to have an Akin osteotomy. Results: There were no unforeseen intraoperative events revealed by the use of fluoroscopy and no surgical modifications were made as a result of the intraoperative images. The intraoperative films were found to be a reliable representation of the postoperative weight-bearing films, but a small increase in the hallux valgus angle was noted at 6 weeks; this is thought to be due to stretching of the medial soft tissue repair. Conclusions: Intraoperative fluoroscopy is a reliable technique. This study was performed at a centre which performs approximately 100 hallux valgus operations per year, and that should be taken into consideration when reviewing our findings. We conclude that there may be a role for fluoroscopy for surgeons in the early stages of the surgical learning curve and for those that infrequently perform hallux valgus surgery. We cannot, however, recommend that fluoroscopy be used routinely in hallux valgus surgery.
Abstract:
Just Fast Keying (JFK) is a simple, efficient and secure key exchange protocol proposed by Aiello et al. (ACM TISSEC, 2004). JFK is well known for its novel design features, notably its resistance to denial-of-service (DoS) attacks. Using Meadows’ cost-based framework, we identify a new DoS vulnerability in JFK. The JFK protocol is claimed secure in the Canetti-Krawczyk model under the Decisional Diffie-Hellman (DDH) assumption. We show that security of the JFK protocol, when reusing ephemeral Diffie-Hellman keys, appears to require the Gap Diffie-Hellman (GDH) assumption in the random oracle model. We propose a new variant of JFK that avoids the identified DoS vulnerability and provides perfect forward secrecy even under the DDH assumption, achieving the full security promised by the JFK protocol.
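The ephemeral-key question in the abstract can be illustrated with a toy (textbook, not JFK itself) Diffie-Hellman exchange. The group parameters below are deliberately simple stand-ins, not secure choices: the point is only to show the fresh-per-session exponent whose reuse across sessions, as the abstract notes, changes the assumption needed for a security proof.

```python
import secrets

p = 2**127 - 1          # Mersenne prime; toy parameters, not for real use
g = 3

def keygen():
    """Generate an ephemeral Diffie-Hellman key pair (exponent, g^x mod p)."""
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

# Fresh ephemerals for this session: both sides derive the same shared secret.
a_priv, a_pub = keygen()
b_priv, b_pub = keygen()
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)

# Amortising (a_priv, a_pub) across many sessions saves an exponentiation per
# session; this is the reuse whose security the abstract analyses.
```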
Abstract:
With the growth and development of communication technology there is an increasing need for the use of interception technologies in modern policing. Law enforcement agencies are faced with increasingly sophisticated and complex criminal networks that utilise modern communication technology as a basis for their criminal success. In particular, transnational organised crime (TOC) is a diverse and complicated arena, costing global society in excess of $3 trillion annually, a figure that continues to grow (Borger, 2007) as crime groups take advantage of disappearing borders and greater profit markets. However, whilst communication can be a critical success factor for criminal enterprise it is also a key vulnerability. It is this vulnerability that the use of communication interception technology (CIT), such as phone taps or email interception, can exploit. As such, law enforcement agencies now need a method and framework that allows them to utilise CIT to combat these crimes efficiently and successfully. This paper provides a review of current literature with the specific purpose of considering the effectiveness of CIT in the fight against TOC and the groundwork that must be laid in order for it to be fully exploited. In doing so, it fills an important gap in current research, focusing on the practical implementation of CIT as opposed to the traditional area of privacy concerns that arise with intrusive methods of investigation. The findings support the notion that CIT is an essential intelligence gathering tool that has a strong place within the modern policing arena. It identifies that the most effective use of CIT is grounded within a proactive, intelligence-led framework and concludes that in order for this to happen Australian authorities and law enforcement agencies must re-evaluate and address the current legislative and operational constraints placed on the use of CIT and the culture that surrounds intelligence in policing.
Abstract:
Barreto-Lynn-Scott (BLS) curves are a stand-out candidate for implementing high-security pairings. This paper shows that particular choices of the pairing-friendly search parameter give rise to four subfamilies of BLS curves, all of which offer highly efficient and implementation-friendly pairing instantiations. Curves from these particular subfamilies are defined over prime fields that support very efficient towering options for the full extension field. The coefficients for a specific curve and its correct twist are automatically determined without any computational effort. The choice of an extremely sparse search parameter is immediately reflected by a highly efficient optimal ate Miller loop and final exponentiation. As a resource for implementors, we give a list with examples of implementation-friendly BLS curves at several high-security levels.
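How a BLS12 curve's field and group orders follow from the search parameter can be sketched directly from the standard BLS12 family parametrisation (r(x) = x⁴ − x² + 1, q(x) = (x − 1)²·r(x)/3 + x, trace t(x) = x + 1). The example value of x below is the well-known BLS12-381 parameter; its low Hamming weight is exactly the sparsity the paper exploits for a cheap Miller loop and final exponentiation.

```python
def bls12_params(x):
    """Return (q, r, t) for the BLS12 family: field characteristic q,
    pairing subgroup order r, and Frobenius trace t."""
    r = x**4 - x**2 + 1
    q = (x - 1) ** 2 * r // 3 + x   # integral because x = 1 (mod 3)
    t = x + 1
    return q, r, t

# BLS12-381's extremely sparse (low-Hamming-weight) search parameter.
x = -0xD201000000010000
q, r, t = bls12_params(x)
```

By construction the curve order q + 1 − t equals (x − 1)²·r/3, so it is divisible by the subgroup order r.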
Abstract:
Based on AFM bending experiments, a molecular dynamics (MD) bending simulation model is established which can accurately account for the full spectrum of the mechanical properties of nanowires (NWs) in a double-clamped beam configuration, ranging from elasticity to plasticity and failure. It is found that the loading rate exerts a significant influence on the mechanical behaviour of NWs. Specifically, a loading rate lower than 10 m/s is found reasonable for homogeneous bending deformation. Both the loading rate and the potential between the tip and the NW are found to play an important role in the adhesion phenomenon. The force versus displacement (F-d) curve from the MD simulation is highly consistent in shape with that from experiments. Symmetrical F-d curves during the loading and unloading processes are observed, which reveal the linear-elastic and non-elastic bending deformation of NWs. The typical bending-induced tensile-compressive features are observed. Meanwhile, the simulation results are fitted very well by the classical Euler-Bernoulli beam theory with an axial effect. It is concluded that the axial tensile force becomes crucial in bending deformation when the beam size is down to the nanoscale for double-clamped NWs. In addition, we find that shorter NWs yield earlier and at a larger force. Mechanical properties (Young’s modulus and yield strength) obtained from bending and tensile deformations are found to be comparable with each other. Specifically, the modulus is essentially the same under the two loading methods, while the yield strength during bending is observed to be larger than that during tension.
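The "Euler-Bernoulli beam theory with an axial effect" fit can be sketched as a least-squares separation of a linear bending term from a cubic axial-stretching term, F = k1·d + k3·d³ (even powers vanish by symmetry of the double-clamped configuration). The coefficients and the synthetic F-d curve below are assumptions for illustration, not values from the study; with real AFM data the same fit would separate the bending stiffness from the axial contribution.

```python
import numpy as np

def fit_f_d(d, F):
    """Least-squares fit of F = k1*d + k3*d**3 (no even terms by symmetry)."""
    A = np.column_stack([d, d**3])
    (k1, k3), *_ = np.linalg.lstsq(A, F, rcond=None)
    return k1, k3

# Synthetic midspan deflection (nm) and force (arbitrary units) data.
d = np.linspace(-50.0, 50.0, 101)
F = 4.0 * d + 2e-4 * d**3
k1, k3 = fit_f_d(d, F)
```

A large fitted k3 relative to k1 over the measured deflection range is what signals that axial tension dominates, the regime the abstract identifies for double-clamped NWs at the nanoscale.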
Abstract:
In Chapter 10, Adam and Dougherty describe the application of medical image processing to the assessment and treatment of spinal deformity, with a focus on the surgical treatment of idiopathic scoliosis. The natural history of spinal deformity and current approaches to surgical and non-surgical treatment are briefly described, followed by an overview of current clinically used imaging modalities. The key metrics currently used to assess the severity and progression of spinal deformities from medical images are presented, followed by a discussion of the errors and uncertainties involved in manual measurements. This provides the context for an analysis of automated and semi-automated image processing approaches to measure spinal curve shape and severity in two and three dimensions.
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distances between nodes are larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
By using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale, and fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and a class of real networks, namely PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while the multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
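The random sequential box-covering idea used throughout these chapters can be sketched as follows. Details such as the tie-breaking rule and the radius convention are assumptions for illustration: repeatedly pick an uncovered "centre" node at random and place all uncovered nodes within distance < l_B of it into one box, until every node is covered; the scaling of the box count with l_B then gives the fractal dimension.

```python
import random
from collections import deque

def box_covering(adj, l_B, seed=0):
    """adj: dict node -> set of neighbours. Returns a list of boxes (sets)
    that partition the nodes; each box has diameter-from-centre < l_B."""
    rng = random.Random(seed)
    uncovered = set(adj)
    boxes = []
    while uncovered:
        centre = rng.choice(sorted(uncovered))
        # BFS out to distance l_B - 1 from the centre, over the full graph.
        dist = {centre: 0}
        queue = deque([centre])
        while queue:
            u = queue.popleft()
            if dist[u] == l_B - 1:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        box = {u for u in dist if u in uncovered}
        boxes.append(box)
        uncovered -= box
    return boxes

# Example: a 100-node path graph.
n = 100
adj = {i: set() for i in range(n)}
for i in range(n - 1):
    adj[i].add(i + 1)
    adj[i + 1].add(i)
boxes = box_covering(adj, l_B=4)
```

Repeating the covering over a range of l_B values and fitting log N_B against log l_B yields the fractal-dimension estimate the thesis computes for the PPI networks and their skeletons.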
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and the weight of the edge between any two nodes as the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence a larger Hurst exponent, tend to have a smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those for binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
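The HVG construction used in this part of the thesis has a compact definition: each time point is a node, and two points i < j are connected when every intermediate value lies strictly below both, i.e. y_k < min(y_i, y_j) for all i < k < j. A direct O(n²) sketch (for illustration; faster constructions exist):

```python
def hvg_edges(y):
    """Return the edge set of the horizontal visibility graph of series y."""
    edges = set()
    n = len(y)
    for i in range(n - 1):
        for j in range(i + 1, n):
            # i and j "see" each other horizontally if nothing between them
            # reaches the height of the lower of the two.
            if all(y[k] < min(y[i], y[j]) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

# For a strictly increasing series the HVG reduces to the path graph:
# no value can see past its taller right neighbour.
assert hvg_edges([1, 2, 3, 4]) == {(0, 1), (1, 2), (2, 3)}
```

The degree sequence of this graph is what the resilience comparison above is computed from.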