Abstract:
Reinforced concrete structures are susceptible to a variety of deterioration mechanisms, including creep and shrinkage, alkali-silica reaction (ASR), carbonation, and corrosion of the reinforcement. These deterioration problems can affect the integrity and load-carrying capacity of the structure. Substantial research has been dedicated to these mechanisms, aiming to identify the causes, reactions, accelerants, retardants and consequences; this has improved our understanding of the long-term behaviour of reinforced concrete structures. However, the strengthening of reinforced concrete structures for durability has to date been undertaken mainly after expert assessment of field data, followed by the development of a scheme that both terminates continuing degradation, by separating the structure from the environment, and strengthens the structure. The process does not include any significant consideration of the residual load-bearing capacity of the structure or of the highly variable nature of estimates of that remaining capacity. Development of performance curves for deteriorating bridge structures has not been attempted because of the difficulty of building a model when the input parameters have extremely large variability. This paper presents a framework developed for an asset management system which assesses residual capacity and identifies the most appropriate rehabilitation method for a given reinforced concrete structure exposed to aggressive environments. In developing the framework, several industry consultation sessions were conducted to identify the required input data, the research methodology and the output knowledge base. Capturing expert opinion in a usable knowledge base requires a rule-based formulation, which can subsequently be used to model the reliability of the performance curve of a reinforced concrete structure exposed to a given environment.
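A rule-based formulation of this kind can be sketched as a small set of production rules mapping inspection inputs to rehabilitation advice. The sketch below is a hypothetical illustration only: the field names, thresholds and recommendations are invented for the example and are not taken from the paper's knowledge base.

```python
# Hypothetical sketch of a rule-based formulation for rehabilitation
# advice; all fields, thresholds and recommendations are invented.
def assess(structure: dict) -> list[str]:
    """Map inspection inputs to candidate rehabilitation methods."""
    rules = [
        (lambda s: s["carbonation_depth_mm"] >= s["cover_mm"],
         "carbonation front at reinforcement: isolate from environment"),
        (lambda s: s["exposure"] == "marine" and s["chloride_pct"] > 0.4,
         "chloride attack: cathodic protection plus strengthening"),
        (lambda s: s["asr_expansion_pct"] > 0.05,
         "ASR expansion: restrain and reassess residual capacity"),
    ]
    advice = [msg for cond, msg in rules if cond(structure)]
    return advice or ["routine monitoring"]

print(assess({"carbonation_depth_mm": 35, "cover_mm": 30,
              "exposure": "inland", "chloride_pct": 0.1,
              "asr_expansion_pct": 0.02}))
```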
Abstract:
Migraine is a painful disorder for which the etiology remains obscure. Diagnosis is largely based on International Headache Society criteria. However, no feature occurs in all patients who meet these criteria, and no single symptom is required for diagnosis. Consequently, this definition may not accurately reflect the phenotypic heterogeneity or genetic basis of the disorder. Such phenotypic uncertainty is typical for complex genetic disorders and has encouraged interest in multivariate statistical methods for classifying disease phenotypes. We applied three popular statistical phenotyping methods—latent class analysis, grade of membership, and “fuzzy” clustering (Fanny)—to migraine symptom data, and compared heritability and genome-wide linkage results obtained using each approach. Our results demonstrate that different methodologies produce different clustering structures and non-negligible differences in subsequent analyses. We therefore urge caution in the use of any single approach and suggest that multiple phenotyping methods be used.
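Of the three methods, latent class analysis is the simplest to sketch: for binary symptom indicators it amounts to EM on a mixture of independent Bernoulli distributions. A minimal sketch, with simulated data and an illustrative two-class model rather than the study's actual symptom set:

```python
import numpy as np

# Minimal latent class analysis: EM for a mixture of independent
# Bernoullis over binary symptom indicators. The data are simulated
# and K = 2 classes is an illustrative choice.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 6)).astype(float)  # subjects x symptoms

n, d = X.shape
K = 2
pi = np.full(K, 1.0 / K)                 # class weights
theta = rng.uniform(0.3, 0.7, (K, d))    # P(symptom | class)

for _ in range(100):
    # E-step: posterior class membership for each subject
    log_post = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
                + np.log(pi))
    log_post -= log_post.max(axis=1, keepdims=True)
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and symptom probabilities
    Nk = resp.sum(axis=0)
    pi = Nk / n
    theta = np.clip((resp.T @ X) / Nk[:, None], 1e-6, 1 - 1e-6)

print("class sizes:", pi.round(3))
print("symptom profiles:\n", theta.round(2))
```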
Abstract:
A teaching and learning development project is currently under way at Queensland University of Technology to develop advanced technology videotapes for use in the delivery of structural engineering courses. These tapes consist of integrated computer and laboratory simulations of important concepts, and of the behaviour of structures and their components, for a number of structural engineering subjects. They will be used as part of the regular lectures and thus will not only improve the quality of lectures and the learning environment, but will also help replace the ever-dwindling laboratory teaching in these subjects. The use of these videotapes, developed using advanced computer graphics, data visualization and video technologies, will enrich the learning process of the current diverse engineering student body. This paper presents the details of this new method, the methodology used, and the results and their evaluation in relation to one of the structural engineering subjects, steel structures.
Abstract:
The acoustic emission (AE) technique is one of the popular diagnostic techniques used for structural health monitoring of mechanical, aerospace and civil structures, but several challenges still exist in its successful application. This paper explores various tools for the analysis of recorded AE data to address two primary challenges: discriminating spurious signals from genuine signals, and devising ways to quantify damage levels.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits, the main one being the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher-density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high-speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique; this field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes; a CCD camera can be used to monitor the readout beam, and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). First, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam, so the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest-quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model gives significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
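The oscillation-counting thermometry described above reduces to a simple relation: the probe beam accumulates a phase difference 2πLΔn(T)/λ between its polarisation components, so one full intensity oscillation corresponds to a birefringence change of λ/L. A minimal sketch, assuming illustrative values for the wavelength, crystal length and thermo-optic coefficient (none are taken from the thesis):

```python
import numpy as np
from scipy.signal import find_peaks

# Count the intensity oscillations of a probe beam through a birefringent
# crystal and convert the count to a temperature change. All constants
# are illustrative assumptions, not values from the thesis.
WAVELENGTH = 633e-9   # probe wavelength (m), assumed
LENGTH = 10e-3        # crystal length along the beam (m), assumed
DBIREF_DT = 4e-5      # d(birefringence)/dT (1/K), assumed for LiNbO3

# One full oscillation corresponds to a birefringence change of
# lambda / L, i.e. a temperature step of:
DT_PER_OSC = WAVELENGTH / (LENGTH * DBIREF_DT)   # ~1.6 K here

def temperature_change(intensity: np.ndarray) -> float:
    """Estimate |delta T| by counting oscillations in an intensity trace."""
    peaks, _ = find_peaks(intensity)
    return len(peaks) * DT_PER_OSC

# Synthetic trace for a 3 K ramp: crossed-polariser transmission
# oscillates with period DT_PER_OSC in temperature.
dT = np.linspace(0.0, 3.0, 2000)
trace = np.sin(np.pi * dT / DT_PER_OSC) ** 2
print(f"estimated {temperature_change(trace):.2f} K for a 3.00 K ramp")
```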
Abstract:
The Internet presents a constantly evolving frontier for criminology and policing, especially in relation to online predators – paedophiles operating within the Internet for safer access to children, child pornography and networking opportunities with other online predators. The goals of this qualitative study are to undertake behavioural research – to identify personality types and archetypes of online predators and to compare and contrast them with behavioural profiles and other psychological research on offline paedophiles and sex offenders. It is also an endeavour to gather intelligence on the technological utilisation of online predators and to conduct observational research on the social structures of online predator communities. These goals were achieved through the covert monitoring and logging of public activity within four Internet Relay Chat (IRC) chatrooms themed around child sexual abuse and located on the Undernet network. Five days of monitoring were conducted on these four chatrooms between Wednesday 1 and Sunday 5 April 2009; this raw data was collated and analysed. The analysis identified four personality types – the gentleman predator, the sadist, the businessman and the pretender – and eight archetypes consisting of the groomers, dealers, negotiators, roleplayers, networkers, chat requestors, posters and travellers. The characteristics and traits of these personality types and archetypes, extracted from the literature dealing with offline paedophiles and sex offenders, are detailed and contrasted against the online sexual predators identified within the chatrooms, revealing many similarities and interesting differences, particularly with the businessman and pretender personality types. These personality types and archetypes were illustrated by selecting users who displayed the appropriate characteristics and tracking them through the four chatrooms, revealing intelligence data on the use of proxy servers – especially via the Tor software – and other security strategies such as Undernet’s host masking service. Name and age changes, used as a potential sexual grooming tactic, were also revealed through the use of Analyst’s Notebook software, and ISP information revealed the likelihood that many online predators were not using any safety mechanisms, relying instead on the anonymity of the Internet. The activities of these online predators were analysed, especially with regard to child sexual grooming and the ‘posting’ of child pornography, revealing some of the methods by which online predators utilise new Internet technologies to sexually groom and abuse children – using technologies such as instant messengers, webcams and microphones – as well as store and disseminate illegal materials on image sharing websites and peer-to-peer software such as Gigatribe. Analysis of the social structures of the chatrooms was also carried out, and the community functions and characteristics of each chatroom explored. The findings of this research have indicated several opportunities for further research. As a result of this research, recommendations are given on policy, prevention and response strategies with regard to online predators.
Applying incremental EM to Bayesian classifiers in the learning of hyperspectral remote sensing data
Abstract:
In this paper, we apply the incremental EM method to Bayesian network classifiers to learn and interpret hyperspectral sensor data in robotic planetary missions. Hyperspectral image spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. Many spacecraft carry spectroscopic equipment, as wavelengths outside the visible part of the electromagnetic spectrum give much greater information about an object. The algorithm used is an extension of standard Expectation Maximisation (EM). The incremental method allows us to learn and interpret the data as they become available. Two Bayesian network classifier structures were tested: Naive Bayes and Tree-Augmented Naive Bayes. Our preliminary experiments show that incremental learning with unlabelled data can improve the accuracy of the classifier.
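The incremental flavour of EM can be sketched for a Gaussian Naive Bayes classifier: each unlabelled batch contributes expected sufficient statistics, weighted by posterior class responsibilities (the E-step), which are folded into running totals (the incremental M-step). This is a minimal sketch with invented dimensions and data, not the authors' implementation, and it assumes a Gaussian observation model for simplicity:

```python
import numpy as np

# Incremental EM for Gaussian Naive Bayes: unlabelled batches update
# running sufficient statistics through class responsibilities.
# Dimensions, initial statistics and data are illustrative.
rng = np.random.default_rng(1)
K, D = 2, 4                         # classes, spectral bands

# Running sufficient statistics (seeded arbitrarily here; in practice
# initialised from a small labelled set).
counts = np.full(K, 1.0)
sums = rng.normal(size=(K, D))
sq_sums = np.ones((K, D)) + sums**2

def params():
    mu = sums / counts[:, None]
    var = sq_sums / counts[:, None] - mu**2 + 1e-6
    return counts / counts.sum(), mu, var

def e_step(X):
    """Posterior responsibilities P(class | x) under current parameters."""
    pi, mu, var = params()
    log_p = (-0.5 * (((X[:, None, :] - mu) ** 2 / var) + np.log(var)).sum(-1)
             + np.log(pi))
    log_p -= log_p.max(axis=1, keepdims=True)
    r = np.exp(log_p)
    return r / r.sum(axis=1, keepdims=True)

def m_step_incremental(X, r):
    """Fold one batch's expected statistics into the running totals."""
    global counts, sums, sq_sums
    counts += r.sum(axis=0)
    sums += r.T @ X
    sq_sums += r.T @ X**2

for _ in range(20):                 # batches arriving as the mission runs
    batch = rng.normal(size=(32, D)) + 2.0 * rng.integers(0, K)
    m_step_incremental(batch, e_step(batch))

print("learned class priors:", params()[0].round(3))
```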
Abstract:
Background: In vitro investigations have demonstrated the importance of the ribcage in stabilising the thoracic spine. Surgical alterations of the ribcage may change load-sharing patterns in the thoracic spine. Computer models are used in this study to explore the effect of surgical disruption of the rib-vertebrae connections on ligament load-sharing in the thoracic spine. Methods: A finite element model of a T7-8 motion segment, including the T8 rib, was developed using CT-derived spinal anatomy for the Visible Woman. Both the intact motion segment and the motion segment with four successive stages of destabilization (discectomy and removal of the right costovertebral joint, right costotransverse joint and left costovertebral joint) were analysed for a 2000 Nmm moment in flexion/extension, lateral bending and axial rotation. Joint rotational moments were compared with existing in vitro data, and a detailed investigation of the load sharing between the posterior ligaments was carried out. Findings: The simulated motion segment demonstrated acceptable agreement with in vitro data at all stages of destabilization. Under lateral bending and axial rotation, the costovertebral joints were of critical importance in resisting applied moments. In comparison to the intact joint, anterior destabilization increased the total moment contributed by the posterior ligaments. Interpretation: Surgical removal of the costovertebral joints may lead to excessive rotational motion in a spinal joint, increasing the risk of overload and damage to the remaining ligaments. The findings of this study are particularly relevant for surgical procedures involving rib head resection, such as some techniques for scoliosis deformity correction.
Abstract:
In the last few years we have observed a proliferation of approaches for clustering XML documents and schemas based on their structure and content. This abundance of approaches is due to the different applications requiring the XML data to be clustered; these applications need data in the form of similar contents, tags, paths, structures and semantics. In this paper, we first outline the application contexts in which clustering is useful, then we survey the approaches proposed so far, according to the abstract representation of the data (instances or schema), the similarity measure adopted, and the clustering algorithm. This presentation leads to a taxonomy in which the current approaches can be classified and compared. We aim to introduce an integrated view that is useful when comparing XML data clustering approaches, when developing a new clustering algorithm, and when implementing an XML clustering component. Finally, the paper describes future trends and research issues that still need to be addressed.
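As a concrete instance of the structure-based side of this design space, the sketch below represents each document by its set of root-to-leaf tag paths and compares the sets with Jaccard similarity; the measure and the documents are illustrative, not a specific surveyed algorithm.

```python
import xml.etree.ElementTree as ET

# Illustrative path-based structural similarity for XML clustering:
# represent each document by its set of root-to-leaf tag paths and
# compare the sets with Jaccard similarity.
def tag_paths(elem, prefix=""):
    path = f"{prefix}/{elem.tag}"
    children = list(elem)
    if not children:
        return {path}
    return set().union(*(tag_paths(c, path) for c in children))

def jaccard(a, b):
    return len(a & b) / len(a | b)

doc1 = ET.fromstring("<book><title/><author><name/></author></book>")
doc2 = ET.fromstring("<book><title/><editor><name/></editor></book>")
print(jaccard(tag_paths(doc1), tag_paths(doc2)))  # 1 shared path of 3 -> 0.33
```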
Abstract:
Structural health monitoring (SHM) refers to the procedure used to assess the condition of structures so that their performance can be monitored and any damage can be detected early. Early detection of damage and appropriate retrofitting will aid in preventing failure of the structure, save money spent on maintenance or replacement, and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques such as vibration-based ones are available for SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate the source, its passive nature (no need to supply energy from outside, as the energy from the damage source itself is utilised) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges still exist in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the times of arrival and velocities of the AE signals recorded by a number of sensors. However, complications arise as AE waves can travel in a structure in a number of different modes that have different velocities and frequencies; hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study has proposed and tested the use of time-frequency analysis tools such as the short-time Fourier transform to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study has explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localization. A major problem in the practical use of the AE technique is the presence of sources of AE other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence, discriminating signals to identify their sources is very important. This work developed a model that uses different signal processing tools, such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), for accurately differentiating signals from different simulated AE sources. Quantification tools to assess the severity of the damage sources are highly desirable in practical applications.
Though different damage quantification methods have been proposed for the AE technique, not all have achieved universal acceptance or been shown suitable for all situations. The b-value analysis, which involves the study of the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis) were investigated for their suitability for damage quantification in ductile materials such as steel. They were found to give encouraging results for the analysis of laboratory data, thereby extending the possibility of their use for real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
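For illustration, the conventional b-value is the negative slope of the log-linear amplitude-frequency relation log10 N(≥A) = a − b·(A/20), with A the AE hit amplitude in dB. A minimal sketch on synthetic amplitudes (the data and the fitting range are assumptions, not the thesis's datasets):

```python
import numpy as np

# Conventional AE b-value: slope of log10(cumulative hit count) versus
# amplitude, with the dB amplitude scaled by 1/20 to mimic magnitude.
# Synthetic amplitudes stand in for recorded AE hits.
rng = np.random.default_rng(2)
amp_db = rng.exponential(scale=10.0, size=2000) + 40.0  # hit peak amplitudes

def b_value(amplitudes_db: np.ndarray) -> float:
    levels = np.arange(amplitudes_db.min(), amplitudes_db.max() - 5.0, 1.0)
    cum_counts = np.array([(amplitudes_db >= lv).sum() for lv in levels])
    # log10 N = a - b * (A_dB / 20)  =>  fitted slope = -b / 20 per dB
    slope, _ = np.polyfit(levels, np.log10(cum_counts), 1)
    return -20.0 * slope

print(f"b-value: {b_value(amp_db):.2f}")
```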
Abstract:
Operational modal analysis (OMA) is prevalent in the modal identification of civil structures. It requires response measurements of the underlying structure under ambient loads, and a valid OMA method requires that the excitation be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research uses a modified kurtosis index to evaluate the statistical distribution of raw measurement data, and proposes a windowing strategy employing this index to select quality datasets. To demonstrate how the data selection strategy works, ambient vibration measurements of a laboratory bridge model and of a real cable-stayed bridge were considered, with frequency domain decomposition (FDD) as the target OMA approach for modal identification. The modal identification results using data segments with different randomness were compared. The discrepancy in the FDD spectra of the results indicates that, in order to fulfil the assumptions of an OMA method, special care should be taken in processing long vibration measurement records. The proposed data selection strategy is easy to apply and has been verified to be effective in modal analysis.
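A minimal sketch of such a windowing strategy: split a long record into windows, compute each window's excess kurtosis (zero for a Gaussian signal), and keep the windows closest to Gaussian as the quality datasets. The window length and acceptance threshold below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import kurtosis

# Kurtosis-based data selection for OMA: ambient response should be
# close to Gaussian, so keep windows whose excess kurtosis is near 0.
def select_windows(signal, win_len=4096, threshold=0.5):
    n_win = len(signal) // win_len
    windows = signal[: n_win * win_len].reshape(n_win, win_len)
    k = kurtosis(windows, axis=1)          # excess kurtosis per window
    return windows[np.abs(k) < threshold], k

rng = np.random.default_rng(3)
record = rng.normal(size=200_000)                      # ambient response
record[50_000:54_096] += 20 * rng.standard_t(df=2, size=4_096)  # spiky burst
good, k = select_windows(record)
print(f"kept {len(good)} of {len(k)} windows")
```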
Abstract:
This research suggests information technology (IT) governance structures to manage cloud computing resources. Interest in acquiring IT resources as a utility from the cloud is gaining momentum. Cloud computing resources present organizations with opportunities to manage their IT expenditure on an ongoing basis, and provide access to modern IT resources with which to innovate and manage continuity. However, cloud computing resources are no silver bullet: organizations need appropriate governance structures and policies in place to manage them. The decisions issuing from these governance structures will ensure effective management of cloud resources, facilitating a better fit of cloud resources into organizations' existing processes to achieve business (process-level) and financial (firm-level) objectives. Using a triangulation approach, we suggest four possible governance structures for managing cloud computing resources: a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. We also propose that these governance structures relate directly to organizations' cloud-related business objectives and indirectly to cloud-related financial objectives. Perception-based field survey data from actual and prospective cloud service adopters confirmed that the suggested structures would contribute directly to cloud-related business objectives and indirectly to cloud-related financial objectives.
Abstract:
Finite element frame analysis programs targeted for design office application necessitate algorithms which can deliver reliable numerical convergence in a practical timeframe with comparable degrees of accuracy, and a highly desirable attribute is the use of a single element per member, to reduce computational storage as well as data preparation and the interpretation of results. To this end, this paper addresses a higher-order finite element method including geometric non-linearity for the analysis of elastic frames in which a single element is used to model each member. The geometric non-linearity in the structure is handled using an updated Lagrangian formulation, which takes into consideration the effects of the large translations and rotations that occur at the joints by accumulating their nodal coordinates. Rigid body movements are eliminated from the local member load-displacement relationship, for which the total secant stiffness is formulated for evaluating the large member deformations of an element. The influences of the axial force on the member stiffness and of the changes in the member chord length are taken into account using a modified bowing function formulated in the total secant stiffness relationship, in which the coupling of the axial strain and flexural bowing is included. The accuracy and efficiency of the technique are verified by comparisons with a number of plane and spatial structures whose structural response has been reported in independent studies.
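The elimination of rigid-body movements from the local member relationship can be illustrated by a corotational-style extraction of natural deformations from current nodal coordinates: the chord's rigid rotation and translation are subtracted, leaving the stretch and relative end rotations that would drive a secant stiffness. This is a generic 2D sketch of that single step, not the paper's higher-order element formulation:

```python
import numpy as np

# Corotational-style extraction of natural deformations for a 2D member:
# subtract the chord's rigid-body rotation and translation, leaving the
# stretch and relative end rotations. A generic sketch of this step only.
def local_deformations(xy0, xy1, end_rot):
    """xy0, xy1: (2, 2) initial/current node coords; end_rot: joint rotations."""
    L0 = np.linalg.norm(xy0[1] - xy0[0])
    Ln = np.linalg.norm(xy1[1] - xy1[0])
    beta0 = np.arctan2(*(xy0[1] - xy0[0])[::-1])  # initial chord angle
    beta = np.arctan2(*(xy1[1] - xy1[0])[::-1])   # current chord angle
    rigid = beta - beta0                          # rigid-body rotation
    return Ln - L0, end_rot[0] - rigid, end_rot[1] - rigid

# A member translated and rotated 10 degrees as a rigid body, with 1 degree
# of additional rotation at the second joint:
xy0 = np.array([[0.0, 0.0], [2.0, 0.0]])
c, s = np.cos(np.radians(10)), np.sin(np.radians(10))
xy1 = xy0 @ np.array([[c, s], [-s, c]]) + [0.5, 0.3]
e, t1, t2 = local_deformations(xy0, xy1, end_rot=np.radians([10.0, 11.0]))
print(f"stretch = {e:.1e} m, end rotations = {t1:.4f}, {t2:.4f} rad")
```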
Abstract:
A spatial process observed over a lattice or a set of irregular regions is usually modeled using a conditionally autoregressive (CAR) model. The neighborhoods within a CAR model are generally formed deterministically using the inter-distances or boundaries between the regions. An extension of the CAR model is proposed in this article in which the selection of the neighborhood depends on unknown parameter(s). This extension is called a Stochastic Neighborhood CAR (SNCAR) model. The resulting model shows flexibility in accurately estimating covariance structures for data generated from a variety of spatial covariance models. Specific examples are illustrated using data generated from some common spatial covariance functions, as well as real data concerning radioactive contamination of the soil in Switzerland after the Chernobyl accident.
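The core idea can be sketched by making the CAR neighborhood depend on a parameter: below, regions are neighbors when their centroids fall within an unknown distance threshold δ, which an SNCAR-style model would estimate along with the other parameters. The proper-CAR precision form Q = τ(D − ρW) and all values here are illustrative assumptions, not the article's exact specification.

```python
import numpy as np

# CAR precision with a parameter-dependent neighborhood: regions i and j
# are neighbors when their centroids lie within distance delta. In an
# SNCAR-style model delta is unknown and estimated with the rest of the
# parameters; all values here are illustrative.
def car_precision(coords, delta, rho=0.9, tau=1.0):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    W = ((d < delta) & (d > 0)).astype(float)  # adjacency from threshold
    D = np.diag(W.sum(axis=1))                 # neighbor counts
    return tau * (D - rho * W)                 # proper-CAR precision matrix

rng = np.random.default_rng(4)
coords = rng.uniform(size=(30, 2))             # region centroids
for delta in (0.15, 0.30):                     # candidate thresholds
    Q = car_precision(coords, delta)
    print(f"delta = {delta}: mean neighbors = {np.diag(Q).mean():.1f}")
```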
Abstract:
Many activities, from disaster response to project management, require cooperation among people from multiple organizations who initially lack interpersonal relationships and trust. When people enter inter-organizational settings, pre-existing identities and expectations, along with emergent social roles and structures, may all influence trust between colleagues. To sort out these effects, we collected time-lagged data from three cohorts of military MBA students, representing 2,224 directed dyads, shortly after they entered graduate school. Dyads that shared organizational identity, boundary-spanning roles, and similar network positions (structural equivalence) were likely to have stronger professional ties and greater trust.