907 results for secondary structure detection


Relevance:

30.00%

Publisher:

Abstract:

As organizations reach higher levels of business process management maturity, they often find themselves maintaining repositories of hundreds or even thousands of process models, representing valuable knowledge about their operations. Over time, process model repositories tend to accumulate duplicate fragments (also called clones) as new process models are created or extended by copying and merging fragments from other models. This calls for methods to detect clones in process models, so that these clones can be refactored as separate subprocesses in order to improve maintainability. This paper presents an indexing structure to support the fast detection of clones in large process model repositories. The proposed index is based on a novel combination of a method for process model decomposition (specifically the Refined Process Structure Tree) with established graph canonization and string matching techniques. Experiments show that the algorithm scales to repositories with hundreds of models. The experimental results also show that a significant number of non-trivial clones can be found in process model repositories taken from industrial practice.
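The core idea, hashing each fragment under a canonical label so that structurally identical fragments collide, can be sketched as follows. This is a minimal illustration with invented toy fragments; the paper's actual index uses RPST decomposition and proper graph canonization, which this sketch replaces with naive sorting:

```python
from collections import defaultdict

def canonical_label(fragment):
    """Toy canonization: sort node labels and edges so that structurally
    identical fragments map to the same string. (A real system would use
    graph canonization over RPST fragments.)"""
    nodes = sorted(fragment["nodes"])
    edges = sorted(fragment["edges"])
    return repr(nodes) + "|" + repr(edges)

def find_clones(fragments):
    """Group fragment ids by canonical string; groups of size >= 2 are clones."""
    index = defaultdict(list)
    for fid, frag in fragments.items():
        index[canonical_label(frag)].append(fid)
    return [ids for ids in index.values() if len(ids) >= 2]

# Two models share a 'check -> approve' fragment (hypothetical example):
fragments = {
    "m1_f1": {"nodes": ["approve", "check"], "edges": [("check", "approve")]},
    "m2_f3": {"nodes": ["check", "approve"], "edges": [("check", "approve")]},
    "m2_f4": {"nodes": ["ship"], "edges": []},
}
print(find_clones(fragments))  # [['m1_f1', 'm2_f3']]
```

Lookup against the index is a hash probe, which is what makes clone detection fast even in large repositories.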

Relevance:

30.00%

Publisher:

Abstract:

Spectrum sensing optimisation techniques maximise the efficiency of spectrum sensing while satisfying a number of constraints. Many optimisation models consider the possibility of the primary user changing activity state during the secondary user's transmission period. However, most ignore the possibility of activity change during the sensing period. The observed primary user signal during sensing can exhibit a duty cycle which has been shown to severely degrade detection performance. This paper shows that (a) the probability of state change during sensing cannot be neglected and (b) the true detection performance obtained when incorporating the duty cycle of the primary user signal can deviate significantly from the results expected with the assumption of no such duty cycle.
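Claim (a) can be made concrete with a simple model. Assuming, purely for illustration and not from the paper, that PU holding times are exponentially distributed, the probability of at least one state change within the sensing window is easy to compute:

```python
import math

def p_state_change(t_sense, mean_hold):
    """Probability of at least one PU state change within the sensing
    window, assuming exponentially distributed holding times
    (an illustrative model, not the paper's exact formulation)."""
    return 1.0 - math.exp(-t_sense / mean_hold)

# A 10 ms sensing window against a PU with 100 ms mean holding time:
p = p_state_change(0.010, 0.100)
print(f"{p:.3f}")  # 0.095 -- roughly a 10% chance, hardly negligible
```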

Relevance:

30.00%

Publisher:

Abstract:

Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage with frequency response functions (FRFs). This paper illustrates the damage identification and condition assessment of a beam structure using a new FRF-based damage index and Artificial Neural Networks (ANNs). In practice, using all available FRF data as input to artificial neural networks makes training and convergence impossible. Therefore, a data reduction technique, Principal Component Analysis (PCA), is introduced in the algorithm. In the proposed procedure, a large set of FRFs is divided into sub-sets in order to find the damage indices for different frequency points of different damage scenarios. The basic idea of this method is to establish features of the damaged structure using FRFs from different measurement points of different sub-sets of the intact structure. Then, using these features, damage indices for different damage cases of the structure are identified after reconstructing the available FRF data using PCA. The obtained damage indices corresponding to different damage locations and severities are introduced as input variables to the developed artificial neural networks. Finally, the effectiveness of the proposed method is illustrated and validated using a finite element model of a beam structure. The results show that the PCA-based damage index is suitable and effective for structural damage detection and condition assessment of building structures.
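The PCA reduction step described above can be sketched as follows. The matrix sizes and the SVD-based projection are illustrative assumptions, not the paper's exact pre-processing:

```python
import numpy as np

def pca_reduce(frf_matrix, n_components):
    """Project a set of FRF magnitude vectors (rows) onto their first
    n_components principal components, shrinking the ANN input size."""
    centered = frf_matrix - frf_matrix.mean(axis=0)
    # SVD of the centered data gives the principal directions in vt
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
frfs = rng.normal(size=(20, 512))   # 20 measurements x 512 frequency points
reduced = pca_reduce(frfs, 5)
print(reduced.shape)  # (20, 5) -- a manageable ANN input size
```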

Relevance:

30.00%

Publisher:

Abstract:

Boundaries are an important field of study because they mediate almost every aspect of organizational life. They are becoming increasingly important as organizations change more frequently and yet, despite the endemic use of the boundary metaphor in common organizational parlance, they are poorly understood. Organizational boundaries are under-theorized, and researchers in related fields often simply assume their existence without defining them. The literature on organizational boundaries is fragmented, with no unifying theoretical basis. As a result, when it is recognized that an organizational boundary is "dysfunctional", there is little recourse to models on which to base remediating action. This research sets out to develop just such a theoretical model and is guided by the general question: "What is the nature of organizational boundaries?" It is argued that organizational boundaries can be conceptualised through elements of both social structure and social process. Elements of structure include objects, coupling, properties and identity. Social processes include objectification, identification, interaction and emergence. All of these elements are integrated by a core category, or basic social process, called boundary weaving. An organizational boundary is a complex system of objects and emergent properties that are woven together by people as they interact, objectifying the world around them, identifying with these objects and creating couplings of varying strength and polarity, as well as their own fragmented identity. Organizational boundaries are characterised by a multiplicity of interconnections, a particular domain of objects, varying levels of embodiment and patterns of interaction. The theory developed in this research emerged from an exploratory, qualitative research design employing grounded theory methodology.
The field data was collected from the training headquarters of the New Zealand Army using semi-structured interviews and follow-up observations. The unit of analysis is an organizational boundary. Only one research context was used because of the richness and multiplicity of organizational boundaries that were present. The model arose, grounded in the data collected, through a process of theoretical memoing and constant comparative analysis. Academic literature was used as a source of data to aid theory development and the saturation of some central categories. The final theory is classified as middle range, being substantive rather than formal, and is generalizable across medium to large organizations in low-context societies. The main limitation of the research arose from its breadth, with multiple lines of inquiry spanning several academic disciplines, and with some relevant areas, such as the role of identity and complexity, addressed at a necessarily high level. The organizational boundary theory developed by this research replaces the typology approaches typical of previous theory on organizational boundaries and reconceptualises the nature of groups in organizations as well as the role of "boundary spanners". It also has implications for any theory that relies on the concept of boundaries, such as general systems theory. The main contribution of this research is the development of a holistic model of organizational boundaries, including an explanation of the multiplicity of boundaries: no organization has a single definable boundary. A significant aspect of this contribution is the integration of aspects of complexity theory and identity theory to explain the emergence of higher-order properties of organizational boundaries and of organizational identity. The core category of "boundary weaving" is a powerful new metaphor that significantly reconceptualises the way organizational boundaries may be understood in organizations.
It invokes secondary metaphors such as the weaving of an organization's "boundary fabric" and provides managers with other metaphorical perspectives, such as the management of boundary friction, boundary tension, boundary permeability and boundary stability. Opportunities for future research reside in formalising and testing the theory, as well as in developing analytical tools that would enable managers in organizations to apply the theory in practice.

Relevance:

30.00%

Publisher:

Abstract:

Due to their large surface area, complex chemical composition and high alveolar deposition rate, ultrafine particles (UFPs) (< 0.1 µm) pose a significant risk to human health, and their toxicological effects have been acknowledged by the World Health Organisation. Since people spend most of their time indoors, there is growing concern about the UFPs present in some indoor environments. Recent studies have shown that office machines, in particular laser printers, are a significant indoor source of UFPs. The majority of printer-generated UFPs are organic carbon, and it is unlikely that these particles are emitted directly from the printer or its supplies (such as paper and toner powder). Thus, it was hypothesised that these UFPs are secondary organic aerosols (SOA). Considering the widespread use of printers and human exposure to these particles, understanding the processes involved in particle formation is of critical importance. However, few studies have investigated the nature (e.g. volatility, hygroscopicity, composition, size distribution and mixing state) and formation mechanisms of these particles. In order to address this gap in scientific knowledge, a comprehensive study using state-of-the-art instrumental methods was conducted to characterise the real-time emissions from modern commercial laser printers, including particles, volatile organic compounds (VOCs) and ozone (O3). The morphology, elemental composition, volatility and hygroscopicity of the generated particles were also examined. The large set of experimental results was analysed and interpreted to provide insight into: (1) Emission profiles of laser printers: The results showed that UFPs dominated the number concentrations of generated particles, with a quasi-unimodal size distribution observed for all tests. These particles were volatile, non-hygroscopic and mixed both externally and internally.
Particle microanalysis indicated that semi-volatile organic compounds occupied the dominant fraction of these particles, with only trace quantities of particles containing Ca and Fe. Furthermore, almost all laser printers tested in this study emitted measurable concentrations of VOCs and O3. A positive correlation between submicron particles and O3 concentrations, as well as a contrasting negative correlation between submicron particles and total VOC concentrations were observed during printing for all tests. These results proved that UFPs generated from laser printers are mainly SOAs. (2) Sources and precursors of generated particles: In order to identify the possible particle sources, particle formation potentials of both the printer components (e.g. fuser roller and lubricant oil) and supplies (e.g. paper and toner powder) were investigated using furnace tests. The VOCs emitted during the experiments were sampled and identified to provide information about particle precursors. The results suggested that all of the tested materials had the potential to generate particles upon heating. Nine unsaturated VOCs were identified from the emissions produced by paper and toner, which may contribute to the formation of UFPs through oxidation reactions with ozone. (3) Factors influencing the particle emission: The factors influencing particle emissions were also investigated by comparing two popular laser printers, one showing particle emissions three orders of magnitude higher than the other. The effects of toner coverage, printing history, type of paper and toner, and working temperature of the fuser roller on particle number emissions were examined. The results showed that the temperature of the fuser roller was a key factor driving the emission of particles. 
Based on the results for 30 different types of laser printers, a systematic positive correlation was observed between temperature and particle number emissions for printers that used the same heating technology and had a similar structure and fuser material. It was also found that temperature fluctuations were associated with intense bursts of particles and therefore may have an impact on particle emissions. Furthermore, the results indicated that the type of paper and toner powder contributed to particle emissions, while no apparent relationship was observed between toner coverage and levels of submicron particles. (4) Mechanisms of SOA formation, growth and ageing: The overall hypothesis that UFPs are formed by reactions between the VOCs and O3 emitted from laser printers was examined. The results proved this hypothesis and suggested that O3 may also play a role in particle ageing. In addition, knowledge about the mixing state of generated particles was utilised to explore the detailed processes of particle formation for different printing scenarios, including warm-up, normal printing, and printing without toner. The results indicated that polymerisation may have occurred on the surface of the generated particles to produce thermoplastic polymers, which may account for the expandable characteristics of some particles. Furthermore, toner and other particle residues on the idling belt from previous print jobs were a very clear contributing factor in the formation of laser printer-emitted particles. In summary, this study not only improves scientific understanding of the nature of printer-generated particles, but also provides significant insight into the formation and ageing mechanisms of SOAs in the indoor environment. The outcomes will also be beneficial to governments, industry and individuals.

Relevance:

30.00%

Publisher:

Abstract:

Recent algorithms for monocular motion capture (MoCap) estimate weak-perspective camera matrices between images using a small subset of approximately-rigid points on the human body (i.e. the torso and hip). A problem with this approach, however, is that these points are often close to coplanar, causing canonical linear factorisation algorithms for rigid structure from motion (SFM) to become extremely sensitive to noise. In this paper, we propose an alternative solution to weak-perspective SFM based on a convex relaxation of graph rigidity. We demonstrate the success of our algorithm on both synthetic and real world data, allowing for much improved solutions to markerless MoCap problems on human bodies. Finally, we propose an approach to solve the two-fold ambiguity over bone direction using a k-nearest neighbour kernel density estimator.
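The k-nearest-neighbour density idea for resolving the sign ambiguity can be sketched as below. The specific estimator (inverse distance to the k-th neighbour) and the synthetic training directions are illustrative assumptions; the paper's exact kernel may differ:

```python
import numpy as np

def knn_density(x, samples, k=5):
    """k-NN density estimate: inversely related to the distance of the
    k-th nearest training sample (constants cancel when comparing
    two candidate directions)."""
    d = np.linalg.norm(samples - x, axis=1)
    return 1.0 / (np.sort(d)[k - 1] + 1e-12)

def resolve_flip(candidate, samples, k=5):
    """Pick the more plausible of the two sign-ambiguous bone directions."""
    if knn_density(candidate, samples, k) >= knn_density(-candidate, samples, k):
        return candidate
    return -candidate

# Training bone directions cluster around +z; the flipped candidate is rejected:
rng = np.random.default_rng(1)
train = rng.normal([0, 0, 1], 0.1, size=(100, 3))
train /= np.linalg.norm(train, axis=1, keepdims=True)
picked = resolve_flip(np.array([0.0, 0.0, -1.0]), train)
print(picked[2])  # 1.0 -- flipped toward the training cluster
```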

Relevance:

30.00%

Publisher:

Abstract:

This paper illustrates the damage identification and condition assessment of a three-story bookshelf structure using a new frequency response functions (FRFs) based damage index and Artificial Neural Networks (ANNs). A major obstacle to using measured frequency response function data is the large size of the input variables to ANNs. This problem is overcome by applying a data reduction technique called principal component analysis (PCA). In the proposed procedure, ANNs, with their powerful pattern recognition and classification ability, were used to extract damage information such as damage locations and severities from measured FRFs. Therefore, simple neural network models were developed and trained by Back Propagation (BP) to associate the FRFs with the damaged or undamaged locations and the severity of damage to the structure. Finally, the effectiveness of the proposed method is illustrated and validated by using real data provided by the Los Alamos National Laboratory, USA. The results show that the PCA-based Artificial Neural Network method is suitable and effective for damage identification and condition assessment of building structures. In addition, it is clearly demonstrated that the accuracy of the proposed damage detection method can also be improved by increasing the number of baseline datasets and the number of principal components of the baseline dataset.
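A minimal back-propagation classifier on toy stand-in features illustrates the pattern-recognition step. The network size, learning rate, and synthetic two-class data are assumptions for demonstration only, not the paper's trained models:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for PCA-reduced FRF features: class 0 = intact, 1 = damaged
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(1.5, 0.3, (50, 2))])
y = np.array([0.0] * 50 + [1.0] * 50)

# One hidden layer trained by plain back propagation (illustrative only)
W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=8) * 0.5, 0.0
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    p = sig(h @ W2 + b2)                 # damage probability
    g = (p - y) / len(y)                 # output-layer gradient (cross-entropy)
    W2 -= 0.5 * h.T @ g
    b2 -= 0.5 * g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)  # back-propagated hidden gradient
    W1 -= 0.5 * X.T @ gh
    b1 -= 0.5 * gh.sum(axis=0)

acc = np.mean((p > 0.5) == y.astype(bool))
print(acc)  # expect 1.0: the toy classes are cleanly separated
```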

Relevance:

30.00%

Publisher:

Abstract:

As organizations reach higher levels of business process management maturity, they often find themselves maintaining very large process model repositories, representing valuable knowledge about their operations. A common practice within these repositories is to create new process models, or extend existing ones, by copying and merging fragments from other models. We contend that if these duplicate fragments, a.k.a. exact clones, can be identified and factored out as shared subprocesses, the repository’s maintainability can be greatly improved. With this purpose in mind, we propose an indexing structure to support fast detection of clones in process model repositories. Moreover, we show how this index can be used to efficiently query a process model repository for fragments. This index, called RPSDAG, is based on a novel combination of a method for process model decomposition (namely the Refined Process Structure Tree), with established graph canonization and string matching techniques. We evaluated the RPSDAG with large process model repositories from industrial practice. The experiments show that a significant number of non-trivial clones can be efficiently found in such repositories, and that fragment queries can be handled efficiently.

Relevance:

30.00%

Publisher:

Abstract:

Diabetic neuropathy is a significant clinical problem that currently has no effective therapy, and in advanced cases, leads to foot ulceration and lower limb amputation. The accurate detection, characterisation and quantification of this condition are important in order to define at-risk patients, anticipate deterioration, monitor progression and assess new therapies. This thesis evaluates novel corneal methods of assessing diabetic neuropathy. Over the past several years two new non-invasive corneal markers have emerged, and in cross-sectional studies have demonstrated their ability to stratify the severity of this disease. Corneal confocal microscopy (CCM) allows quantification of corneal nerve parameters and non-contact corneal aesthesiometry (NCCA), the presumed functional correlate of corneal structure, assesses the sensitivity of the cornea. Both these techniques are quick to perform, produce little or no discomfort for the patient, and with automatic analysis paradigms developed, are suitable for clinical settings. Each has advantages and disadvantages over established techniques for assessing diabetic neuropathy. New information is presented regarding measurement bias of CCM images, and a unique sampling paradigm and associated accuracy determination method of combinations is described. A novel high-speed corneal nerve mapping procedure has been developed and application of this procedure in individuals with neuropathy has revealed regions of sub-basal nerve plexus that dictate further evaluation, as they appear to show earlier signs of damage than the central region of the cornea that has to date been examined. The discriminative capacity of corneal sensitivity measured by NCCA is revealed to have reasonable potential as a marker of diabetic neuropathy. 
Application of these new corneal markers for longitudinal evaluation of diabetic neuropathy has the potential to reduce dependence on more invasive, costly, and time-consuming assessments, such as skin biopsy.

Relevance:

30.00%

Publisher:

Abstract:

Cognitive radio is an emerging technology proposing the concept of dynamic spectrum access as a solution to the looming problem of spectrum scarcity caused by the growth in wireless communication systems. Under the proposed concept, non-licensed, secondary users (SU) can access spectrum owned by licensed, primary users (PU) so long as interference to PUs is kept minimal. Spectrum sensing is a crucial task in cognitive radio whereby the SU senses the spectrum to detect the presence or absence of any PU signal. Conventional spectrum sensing assumes that the PU signal is ‘stationary’ and remains in the same activity state during the sensing cycle, while an emerging trend models the PU as ‘non-stationary’, undergoing state changes. Existing studies have focused on non-stationary PU behaviour during the transmission period; however, very little research has considered the impact on spectrum sensing when the PU is non-stationary during the sensing period. The concept of PU duty cycle is developed as a tool to analyse the performance of spectrum sensing detectors when detecting non-stationary PU signals. New detectors are also proposed to optimise detection with respect to the duty cycle exhibited by the PU. This research consists of two major investigations. The first stage investigates the impact of duty cycle on the performance of existing detectors and the extent of the problem in existing studies. The second stage develops new detection models and frameworks to ensure the integrity of spectrum sensing when detecting non-stationary PU signals. The first investigation demonstrates that the conventional signal model formulated for a stationary PU does not accurately reflect the behaviour of a non-stationary PU. Therefore the performance calculated and assumed to be achievable by the conventional detector does not reflect the actual performance achieved.
Through analysing the statistical properties of duty cycle, performance degradation is proved to be a problem that cannot be easily neglected in existing sensing studies when the PU is modelled as non-stationary. The second investigation presents detectors that are aware of the duty cycle exhibited by a non-stationary PU. A two-stage detection model is proposed to improve detection performance and robustness to changes in duty cycle. This detector is most suitable for applications that require long sensing periods. A second detector, the duty cycle based energy detector, is formulated by integrating the distribution of the duty cycle into the test statistic of the energy detector, and is suitable for short sensing periods. The decision threshold is optimised with respect to the traffic model of the PU, hence the proposed detector can calculate average detection performance that reflects realistic results. A detection framework for the application of spectrum sensing optimisation is proposed to provide clear guidance on the constraints on the sensing and detection models. Following this framework ensures that the signal model accurately reflects practical behaviour while the detection model implemented is also suitable for the desired detection assumption. Based on this framework, a spectrum sensing optimisation algorithm is further developed to maximise sensing efficiency for a non-stationary PU. New optimisation constraints are derived to account for any PU state changes within the sensing cycle while implementing the proposed duty cycle based detector.
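The degradation caused by a duty cycle during sensing can be illustrated with a toy energy detector. The Gaussian signal model and the parameter values below are assumptions for demonstration, not the thesis's formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
N, snr = 1000, 1.0          # samples per sensing window, linear SNR

def energy_stat(duty_cycle):
    """Average energy over one sensing window in which the PU is only
    active for duty_cycle * N samples (illustrative model)."""
    active = int(duty_cycle * N)
    y = rng.standard_normal(N)                        # unit-variance noise
    y[:active] += np.sqrt(snr) * rng.standard_normal(active)  # PU on-period
    return np.mean(y ** 2)

full = np.mean([energy_stat(1.0) for _ in range(200)])
half = np.mean([energy_stat(0.5) for _ in range(200)])
# With a 50% duty cycle the mean statistic drops from ~2.0 toward ~1.5,
# so a threshold tuned for the 'always on' model misses many detections.
print(round(full, 1), round(half, 1))
```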

Relevance:

30.00%

Publisher:

Abstract:

Molecular dynamics simulations were carried out on single-chain models of linear low-density polyethylene in vacuum to study the effects of branch length, branch content, and branch distribution on the polymer’s crystalline structure at 300 K. The trans/gauche (t/g) ratios of the backbones of the modeled molecules were calculated and used to characterize their degree of crystallinity. The results show that the t/g ratio decreases with increasing branch content regardless of branch length and branch distribution, indicating that branch content is the key molecular parameter controlling the degree of crystallinity. Although the t/g ratios of models with the same branch content vary, these variations are of secondary importance. However, our data suggest that branch distribution (regular or random) has a significant effect on the degree of crystallinity for models containing 10 hexyl branches/1,000 backbone carbons. The fractions of branches residing in the equilibrium crystalline structures of the models were also calculated. On average, 9.8% and 2.5% of the branches were found in the crystallites of the molecules with ethyl and hexyl branches respectively, while 13C NMR experiments showed that the respective probabilities of branch inclusion for ethyl and hexyl branches are 10% and 6% [Hosoda et al., Polymer 1990, 31, 1999–2005]. However, the degree of branch inclusion appears to be insensitive to branch content and branch distribution.
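The trans/gauche classification of backbone dihedrals can be sketched as follows. The 120° split between the trans and gauche basins is one common convention and may differ from the authors' exact criterion:

```python
import numpy as np

def trans_gauche_ratio(dihedrals_deg):
    """Classify backbone dihedral angles as trans (within 60 deg of 180)
    or gauche (near +/-60) and return the t/g ratio."""
    # fold all angles into [0, 180]
    a = np.abs((np.asarray(dihedrals_deg) + 180.0) % 360.0 - 180.0)
    trans = np.count_nonzero(a >= 120.0)
    gauche = np.count_nonzero(a < 120.0)
    return trans / gauche

# A mostly-trans (crystalline-like) backbone gives a high t/g ratio:
angles = [178, -179, 175, 182, 63, -58]   # 4 trans, 2 gauche states
print(trans_gauche_ratio(angles))  # 2.0
```

A higher ratio means more of the backbone sits in the extended all-trans conformation characteristic of polyethylene crystallites.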

Relevance:

30.00%

Publisher:

Abstract:

In practical cases of active noise control (ANC), the secondary path usually exhibits time-varying behavior. For these cases, an online secondary path modeling method that uses white noise as a training signal is required to ensure convergence of the system. The modeling accuracy and the convergence rate increase when white noise with a larger variance is used. However, the larger variance increases the residual noise, which decreases the performance of the system and additionally causes instability problems in feedback structures. A sudden change in the secondary path leads to divergence of the online secondary path modeling filter. To overcome these problems, this paper proposes a new approach for online secondary path modeling in feedback ANC systems. The proposed algorithm uses the advantages of white noise with larger variance to model the secondary path, but the injection is stopped at the optimum point to increase the performance of the algorithm and to prevent the destabilizing effect of the white noise. In this approach, instead of continuous injection of the white noise, a sudden change in the secondary path during operation causes the algorithm to reactivate injection of the white noise to correct the secondary path estimate. In addition, the proposed method models the secondary path without the need for an off-line estimate of the secondary path. These features increase the convergence rate and modeling accuracy, resulting in high system performance. Computer simulation results shown in this paper indicate the effectiveness of the proposed method.
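The basic mechanism, modelling the secondary path online by adapting a filter against injected white noise, can be sketched with a plain LMS update. This omits the paper's variance scheduling and injection-stopping logic; the path coefficients, step size, and noise variance are illustrative:

```python
import numpy as np

def model_secondary_path(true_path, n_iter=20000, mu=0.01, noise_var=0.1):
    """Online secondary path modelling with LMS and injected white noise
    (a minimal sketch of the general idea, not the paper's algorithm)."""
    rng = np.random.default_rng(3)
    L = len(true_path)
    w = np.zeros(L)                       # adaptive model of the path
    x_buf = np.zeros(L)                   # delay line of injected noise
    for _ in range(n_iter):
        v = np.sqrt(noise_var) * rng.standard_normal()  # white training noise
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = v
        d = true_path @ x_buf             # noise filtered by the true path
        e = d - w @ x_buf                 # modelling error
        w += mu * e * x_buf               # LMS update
    return w

true_path = np.array([0.8, -0.3, 0.1])   # hypothetical 3-tap secondary path
w = model_secondary_path(true_path)
print(np.round(w, 2))  # converges close to [0.8, -0.3, 0.1]
```

A larger `noise_var` speeds convergence here exactly as the abstract describes, at the cost (in a real ANC system) of more audible residual noise.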

Relevance:

30.00%

Publisher:

Abstract:

Damage assessment (damage detection, localization and quantification) in structures, followed by appropriate retrofitting, will enable the safe and efficient function of structures. In this context, many Vibration Based Damage Identification Techniques (VBDIT) have emerged with potential for accurate damage assessment. VBDITs have attracted significant research interest in recent years, mainly due to their non-destructive nature and ability to assess inaccessible and invisible damage locations. Damage Index (DI) methods are also vibration based, but they are not based on a structural model. DI methods are fast and inexpensive compared to model-based methods and have the ability to automate the damage detection process. A DI method analyses the change in vibration response of the structure between two states so that damage can be identified. Extensive research has been carried out to apply DI methods to assess damage in steel structures. Comparatively, there has been very little research interest in the use of DI methods to assess damage in Reinforced Concrete (RC) structures, due to the complexity of simulating the predominant damage type, the flexural crack. Flexural cracks in RC beams distribute non-linearly and propagate along all directions. Secondary cracks extend more rapidly along the longitudinal and transverse directions of an RC structure than existing cracks propagate in the depth direction, due to the stress distribution caused by the tensile reinforcement. Simplified damage simulation techniques (such as reductions in the modulus or section depth, or the use of rotational spring elements) that have been extensively used in research on steel structures cannot be applied to simulate flexural cracks in RC elements. This highlights a significant gap in knowledge, and as a consequence VBDITs have not been successfully applied to damage assessment in RC structures.
This research addresses the above gap in knowledge by developing and applying a modal strain energy based DI method to assess damage in RC flexural members. Firstly, this research evaluated different damage simulation techniques and recommended an appropriate technique to simulate the post-cracking behaviour of RC structures. The ABAQUS finite element package was used throughout the study with properly validated material models. The damaged plasticity model was recommended as the method which can correctly simulate the post-cracking behaviour of RC structures and was used in the rest of this study. Four different forms of Modal Strain Energy based Damage Indices (MSEDIs) were proposed to improve damage assessment capability by minimising the number and intensity of false alarms. The developed MSEDIs were then used to automate the damage detection process by incorporating programmable algorithms. The developed algorithms have the ability to identify common issues associated with the vibration properties, such as mode shifting and phase change. To minimise the effect of noise on the DI calculation process, this research proposed a sequential curve fitting technique. Finally, a statistics-based damage assessment scheme was proposed to enhance the reliability of the damage assessment results. The proposed techniques were applied to locate damage in RC beams and in a slab-on-girder bridge model to demonstrate their accuracy and efficiency. The outcomes of this research will make a significant contribution to the technical knowledge of VBDITs and will enhance the accuracy of damage assessment in RC structures. The application of the research findings to RC flexural members will enable their safe and efficient performance.
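A generic modal strain energy damage index built from mode shape curvatures can be sketched as follows. This is the classical Stubbs-style formulation rather than the thesis's refined MSEDIs, and the beam mode shape and damage location are synthetic:

```python
import numpy as np

def mse_damage_index(phi_intact, phi_damaged, dx=1.0):
    """Modal strain energy damage index per element, built from mode
    shape curvatures; values well above 1 flag candidate damage sites."""
    def elem_energy(phi):
        curv = np.gradient(np.gradient(phi, dx), dx)   # approx. curvature
        return curv ** 2
    u, d = elem_energy(phi_intact), elem_energy(phi_damaged)
    # ratio of fractional modal strain energies, damaged vs intact
    return (d / d.sum()) / (u / u.sum() + 1e-12)

x = np.linspace(0, np.pi, 50)
intact = np.sin(x)                       # first bending mode of a beam
damaged = intact.copy()
damaged[24] *= 0.9                       # local stiffness loss near midspan
di = mse_damage_index(intact, damaged)
print(int(np.argmax(di)))                # element index near the damage
```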

Relevance:

30.00%

Publisher:

Abstract:

Meyerhofferite is a hydrated calcium borate mineral with ideal formula CaB3O3(OH)5·H2O. It occurs as white, complex acicular to crude crystals up to ~4 cm in length, in fibrous divergent, radiating or reticulated aggregates, and is often found in sedimentary or lake-bed borate deposits. The Raman spectrum of meyerhofferite is dominated by an intense sharp band at 880 cm⁻¹ assigned to the symmetric stretching mode of trigonal boron. Broad Raman bands at 1046, 1110, 1135 and 1201 cm⁻¹ are attributed to BOH in-plane bending modes. Raman bands in the 900–1000 cm⁻¹ spectral region are assigned to the antisymmetric stretching of tetrahedral boron. Distinct OH stretching Raman bands are observed at 3400, 3483 and 3608 cm⁻¹. The mineral meyerhofferite has a distinct Raman spectrum that differs from the spectra of other borate minerals, making Raman spectroscopy a very useful tool for the detection of meyerhofferite in sedimentary and lake-bed deposits.

Relevance:

30.00%

Publisher:

Abstract:

The presence of insect pests in grain storages throughout the supply chain is a significant problem for farmers, grain handlers, and distributors world-wide. Insect monitoring and sampling programmes are used in the stored grains industry for the detection and estimation of pest populations. At the low pest densities dictated by economic and commercial requirements, the accuracy of both detection and abundance estimates can be influenced by variations in the spatial structure of pest populations over short distances. Geostatistical analysis of Rhyzopertha dominica populations in 2 and 3 dimensions showed that insect numbers were positively correlated over short (0.5 cm) distances, and negatively correlated over longer (>10 cm) distances. At 35 °C, insects were located significantly further from the grain surface than at 25 and 30 °C. Dispersion metrics showed statistically significant aggregation in all cases. The observed heterogeneous spatial distribution of R. dominica may also be influenced by factors such as the site of initial infestation and disturbance during handling. To account for these additional factors, I significantly extended a simulation model that incorporates both pest growth and movement through a typical stored-grain supply chain. By incorporating the effects of abundance, initial infestation site, grain handling, and treatment on pest spatial distribution, I developed a supply chain model incorporating estimates of pest spatial distribution. This was used to examine several scenarios representative of grain movement through a supply chain, and to determine the influence of infestation location and grain disturbance on the sampling intensity required to detect pest infestations at various infestation rates. This study has investigated the effects of temperature, infestation point, and grain handling on the spatial distribution and detection of R. dominica.
The proportion of grain infested was found to be dependent upon abundance, initial pest location, and grain handling. Simulation modelling indicated that accounting for these factors when developing sampling strategies for stored grain has the potential to significantly reduce sampling costs while simultaneously improving detection rate, resulting in reduced storage and pest management cost while improving grain quality.