967 results for Domain Engineering
Abstract:
Uncooperative iris identification systems operating at a distance suffer from poor resolution of the acquired iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve recognition performance. However, most existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values rather than the actual features used for recognition. This paper thoroughly investigates transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain-specific information from iris models, improved recognition performance compared to pixel-domain super-resolution can be achieved. A framework for applying super-resolution to nonlinear features in the feature domain is proposed. Based on this framework, a novel feature-domain super-resolution approach for the iris biometric employing 2D Gabor phase-quadrant features is proposed. The approach is shown to outperform its pixel-domain counterpart, as well as other feature-domain super-resolution approaches and fusion techniques.
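The general idea of fusing multiple frames in the feature domain rather than the pixel domain can be illustrated with a minimal sketch. The sketch below is a generic illustration only, not the paper's method: it averages complex 2D Gabor responses of registered low-resolution frames and then quantises the fused phase into quadrants, as in a conventional iris code. All parameters and the synthetic frames are assumptions.

```python
# Hypothetical sketch: fusing registered low-resolution normalised iris frames
# in the Gabor feature domain instead of averaging pixel intensities.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=15, wavelength=8.0, sigma=4.0, theta=0.0):
    """Complex 2D Gabor kernel (real = even-symmetric, imag = odd-symmetric)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * 2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def phase_quadrant_code(response):
    """Two-bit phase-quadrant encoding of a complex filter response."""
    return np.stack([response.real >= 0, response.imag >= 0], axis=-1)

def feature_domain_fusion(frames, kernel):
    """Fuse registered frames by averaging their complex Gabor features."""
    responses = [fftconvolve(f, kernel, mode="same") for f in frames]
    fused = np.mean(responses, axis=0)
    return phase_quadrant_code(fused)

# Example with synthetic, already-registered frames (placeholders).
rng = np.random.default_rng(0)
frames = [rng.standard_normal((64, 512)) for _ in range(4)]
code = feature_domain_fusion(frames, gabor_kernel())
print(code.shape)   # (64, 512, 2) -> two bits per location
```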
Abstract:
Chlamydia trachomatis is a bacterial pathogen responsible for one of the most prevalent sexually transmitted infections worldwide. Its unique developmental cycle has limited our understanding of its pathogenic mechanisms. However, CtHtrA has recently been identified as a potential C. trachomatis virulence factor. CtHtrA is a tightly regulated quality control protein whose monomeric structural unit comprises a chymotrypsin-like protease domain and two PDZ domains. Activation of proteolytic activity relies on the C-terminus of the substrate allosterically binding to the PDZ1 domain, which triggers a subsequent conformational change and oligomerization of the protein into 24-mers, enabling proteolysis. This activation is mediated by a cascade of precise structural arrangements, but the specific CtHtrA residues and structural elements required to facilitate activation are unknown. Using in vitro analysis guided by homology modeling, we show that residues Arg362 and Arg224, whose mutation is predicted to disrupt the interaction between the CtHtrA PDZ1 domain and loop L3, and between loop L3 and loop LD, respectively, are critical for the activation of proteolytic activity. We also demonstrate that mutation of residues Arg299 and Lys160, predicted to disrupt PDZ1 domain interactions with protease loop LC and strand β5, also influences proteolysis, implying their involvement in the CtHtrA mechanism of activation. This is the first investigation of protease loop LC and strand β5 with respect to their potential interactions with the PDZ1 domain. Given their high level of conservation in bacterial HtrA, these structural elements may be equally significant in the activation mechanism of DegP and other HtrA family members.
Abstract:
Construction works are project-based and interdisciplinary. Many construction management (CM) problems are ill-defined. The knowledge required to address such problems is not readily available and is mostly tacit in nature. Moreover, researchers, especially students in higher education, often face difficulty in defining the research problem and adopting an appropriate research process and methodology for designing and validating their research. This paper describes a ‘Horseshoe’ research process approach and its application to a research problem of extracting construction-relevant information from a building information model (BIM). It describes the different steps of the process for understanding a problem, formulating appropriate research question(s), and defining different research tasks, including a methodology for developing, implementing and validating the research. It is argued that a structured research approach and the use of mixed research methods provide a sound basis for research design and validation in order to make a contribution to existing knowledge.
Abstract:
Information experience has emerged as a new and dynamic field of information research in recent years. This chapter discusses and explores information experience in two distinct ways: (a) as a research object, and (b) as a research domain. Two recent studies provide the context for this exploration. The first study investigated the information experiences of people using social media (e.g., Facebook, Twitter, YouTube) during natural disasters. Data were gathered through in-depth semi-structured interviews with 25 participants from two areas affected by natural disasters (Brisbane and Townsville). The second study investigated the qualitatively different ways in which people experienced information literacy during a natural disaster. Using phenomenography, data were collected via semi-structured interviews with seven participants. These studies represent two related yet different investigations. Taken together, the studies provide a means to critically debate and reflect upon our evolving understandings of information experience, both as a research object and as a research domain. This chapter presents our preliminary reflections and concludes that further research is needed to develop and strengthen our conceptualisation of this emerging area.
Abstract:
Diagnostics of rolling element bearings have traditionally been developed for constant operating conditions, and sophisticated techniques, such as Spectral Kurtosis and Envelope Analysis, have proven their effectiveness in experimental tests, mainly conducted on small-scale laboratory test-rigs. Algorithms have been developed for the digital signal processing of data collected at constant speed and bearing load, with a few exceptions allowing only small fluctuations of these quantities. Owing to the spread of condition-based maintenance in many industrial fields, a need has emerged in recent years for more flexible algorithms that remain applicable under highly variable operating conditions, such as acceleration/deceleration transients. This paper analyzes the problems related to significant speed and load variability, discussing in detail the effects they have on bearing damage symptoms, and proposes solutions to adapt existing algorithms to cope with this new challenge. In particular, the paper will (i) discuss the implications of variable speed for the applicability of diagnostic techniques, (ii) address quantitatively the effects of load on the characteristic frequencies of damaged bearings and (iii) present a new approach for bearing diagnostics in variable conditions, based on envelope analysis. The research is based on experimental data obtained using artificially damaged bearings installed on a full-scale test-rig, equipped with an actual train traction system and reproducing operation on a real track, including all the environmental noise due to track irregularities and the electrical disturbances of such a harsh application.
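For readers unfamiliar with envelope analysis, the following is a minimal sketch of its classical constant-speed form (the paper's contribution is its extension to variable speed and load, which is not reproduced here). The signal, resonance band, and fault frequency are synthetic illustrations.

```python
# Minimal sketch of envelope analysis for bearing diagnostics:
# band-pass around a structural resonance, Hilbert demodulation,
# then inspect the envelope spectrum for the bearing fault frequency.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 20_000.0                      # sampling rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
bpfo = 87.0                        # assumed outer-race fault frequency [Hz]

# Synthetic signal: fault impulses exciting a 3 kHz resonance, plus noise.
impulses = (np.sin(2 * np.pi * bpfo * t) > 0.999).astype(float)
ring = np.sin(2 * np.pi * 3000.0 * t[:200]) * np.exp(-np.arange(200) / 30.0)
x = np.convolve(impulses, ring, mode="same") \
    + 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Band selected here by inspection; in practice e.g. spectral kurtosis
# would be used to choose the demodulation band.
b, a = butter(4, [2000.0, 4000.0], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, x)))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = freqs[np.argmax(spectrum[freqs < 500])]
print(f"envelope-spectrum peak near {peak:.1f} Hz (expect ~{bpfo} Hz)")
```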
Abstract:
Visual information is central to several scientific disciplines. This paper studies how scientists working in a multidisciplinary field produce scientific evidence through building and manipulating scientific visualizations. Using ethnographic methods, we studied the visualization practices of eight scientists working in the domain of tissue engineering research. Tissue engineering is an emerging field of research that deals with replacing or regenerating human cells, tissues, or organs to restore or establish normal function. We spent three months in the field, where we recorded laboratory sessions of these scientists and used semi-structured interviews to gain insight into their visualization practices. From our results, we identify two themes characterizing their visualization practices: multiplicity and physicality. In this article, we provide several examples of scientists’ visualization practices to describe these two themes and show that the multimodality of such practices plays an important role in scientific visualization.
Abstract:
A sub-domain smoothed Galerkin method is proposed to integrate the advantages of the mesh-free Galerkin method and the FEM. Arbitrarily shaped sub-domains are predefined in the problem domain with mesh-free nodes. In each sub-domain, based on the mesh-free Galerkin weak formulation, the local discrete equations are obtained using moving Kriging interpolation, in a manner similar to the discretization of high-order finite elements. A strain smoothing technique is then applied to the nodal integration of each sub-domain by dividing the sub-domain into several smoothing cells. Moreover, condensation of DOFs can be introduced into the local discrete equations to improve computational efficiency. The global governing equations of the present method are obtained, following the FEM scheme, by assembling the local discrete equations of all sub-domains. The mesh-free properties of the Galerkin method are retained within each sub-domain. Several 2D elastic problems have been solved with the newly proposed method to validate its computational performance. These numerical examples show that the proposed sub-domain smoothed Galerkin method is a robust technique for solving solid mechanics problems, offering high computational efficiency, good accuracy, and good convergence.
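The strain-smoothing step mentioned above can be sketched generically: by the divergence theorem, the smoothed strain-displacement entries over a smoothing cell reduce to a boundary integral of the shape functions, evaluated here with one midpoint per edge. This is a generic illustration, not the paper's formulation; the shape-function routine and cell geometry are placeholders, and the paper uses moving Kriging shape functions within each sub-domain.

```python
# Generic sketch of strain smoothing over a polygonal smoothing cell.
import numpy as np

def cell_area(vertices):
    """Signed area of a polygonal smoothing cell (shoelace formula)."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

def smoothed_B(vertices, shape_funcs, n_nodes):
    """Smoothed strain-displacement matrix (3 x 2*n_nodes) for one cell.

    shape_funcs(x, y) must return the n_nodes shape-function values at (x, y).
    """
    area = abs(cell_area(vertices))
    B = np.zeros((3, 2 * n_nodes))
    for k in range(len(vertices)):
        p0, p1 = vertices[k], vertices[(k + 1) % len(vertices)]
        mid = 0.5 * (p0 + p1)
        edge = p1 - p0
        length = np.linalg.norm(edge)
        normal = np.array([edge[1], -edge[0]]) / length   # outward normal for a CCW polygon
        N = shape_funcs(*mid)
        for I in range(n_nodes):
            B[0, 2 * I]     += N[I] * normal[0] * length
            B[1, 2 * I + 1] += N[I] * normal[1] * length
            B[2, 2 * I]     += N[I] * normal[1] * length
            B[2, 2 * I + 1] += N[I] * normal[0] * length
    return B / area

# Example: unit-square cell with bilinear shape functions on its own vertices.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
bilinear = lambda x, y: np.array([(1-x)*(1-y), x*(1-y), x*y, (1-x)*y])
print(smoothed_B(square, bilinear, n_nodes=4))
```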
Abstract:
The motion response of marine structures in waves can be studied using finite-dimensional linear time-invariant approximating models. These models, obtained by applying system identification to data computed by hydrodynamic codes, find application in offshore training simulators, hardware-in-the-loop simulators for positioning control testing, and initial designs of wave-energy conversion devices. Different proposals have appeared in the literature to address the identification problem in both the time and frequency domains, and recent work has highlighted the superiority of the frequency-domain methods. This paper summarises practical frequency-domain estimation algorithms that use constraints on model structure and parameters to refine the search for approximating parametric models. Practical issues associated with the identification are discussed, including the influence of radiation model accuracy on force-to-motion models, which are usually the ultimate modelling objective. The illustrative examples in the paper are obtained using a freely available MATLAB toolbox developed by the authors, which implements the estimation algorithms described.
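As a minimal sketch of what frequency-domain identification of such models involves, the code below fits a rational transfer function to complex frequency-response data by linear least squares (Levy's method). It is a generic illustration, not the authors' MATLAB toolbox, and the frequency vector and response data are synthetic placeholders standing in for hydrodynamic-code output; it also omits the structural and parameter constraints the paper discusses.

```python
# Levy-style linear least-squares fit of H(jw) ~ N(jw)/D(jw),
# with the denominator constant term fixed to 1.
import numpy as np

def fit_rational(w, H, n_num, n_den):
    """Returns (b, a) with H(s) ~ (b[0] + b[1] s + ...) / (1 + a[1] s + ...)."""
    s = 1j * w
    cols = [s**k for k in range(n_num + 1)] + \
           [-H * s**k for k in range(1, n_den + 1)]
    A = np.column_stack(cols)
    # Stack real and imaginary parts to get a real-valued least-squares problem.
    A_ri = np.vstack([A.real, A.imag])
    rhs = np.concatenate([H.real, H.imag])
    theta, *_ = np.linalg.lstsq(A_ri, rhs, rcond=None)
    b = theta[:n_num + 1]
    a = np.concatenate(([1.0], theta[n_num + 1:]))
    return b, a

# Synthetic example: second-order response with known coefficients.
w = np.linspace(0.1, 5.0, 200)
H_true = (2.0 * 1j * w) / (1.0 + 0.4 * 1j * w + 1.5 * (1j * w)**2)
b, a = fit_rational(w, H_true, n_num=1, n_den=2)
print(b)   # approx [0, 2]
print(a)   # approx [1, 0.4, 1.5]
```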
Abstract:
This article deals with time-domain hydroelastic analysis of a marine structure. The convolution terms associated with fluid memory effects are replaced by an alternative state-space representation, the parameters of which are obtained using realization theory. The resulting mathematical model is validated by comparison with experimental results for a very flexible barge. Two types of time-domain simulation are performed: the dynamic response of the initially inert structure to incident regular waves, and the transient response of the structure after it is released from a displaced condition in still water. The accuracy and efficiency of the simulations based on the state-space representations are compared with those of simulations that evaluate the convolutions directly.
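One common realization-theory route from a sampled memory kernel to a state-space model is the Eigensystem Realisation Algorithm (Hankel-matrix SVD); the specific method used in the paper may differ. The sketch below is a hedged illustration with a synthetic decaying-oscillation kernel standing in for a retardation function computed by a hydrodynamic code.

```python
# Hedged sketch: discrete-time state-space realisation of a memory kernel
# from its samples via the Eigensystem Realisation Algorithm (SISO case).
import numpy as np

def era(markov, order, block=50):
    """markov[k] ~ C A^(k-1) B for k >= 1; returns (A, B, C)."""
    H0 = np.array([[markov[i + j + 1] for j in range(block)] for i in range(block)])
    H1 = np.array([[markov[i + j + 2] for j in range(block)] for i in range(block)])
    U, S, Vt = np.linalg.svd(H0)
    Ur, Sr, Vr = U[:, :order], S[:order], Vt[:order, :].T
    S_half = np.diag(np.sqrt(Sr))
    S_half_inv = np.diag(1.0 / np.sqrt(Sr))
    A = S_half_inv @ Ur.T @ H1 @ Vr @ S_half_inv
    B = (S_half @ Vr.T)[:, [0]]
    C = (Ur @ S_half)[[0], :]
    return A, B, C

# Synthetic "retardation" kernel sampled at dt = 0.05 s (placeholder).
dt = 0.05
t = np.arange(0, 10.0, dt)
kernel = np.exp(-0.8 * t) * np.cos(3.0 * t)

markov = np.concatenate(([0.0], kernel * dt))   # scaled samples as Markov parameters
A, B, C = era(markov, order=2)
recon = np.array([(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(len(t))])
print(f"max reconstruction error: {np.max(np.abs(recon - kernel * dt)):.2e}")
```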
Abstract:
The dynamics describing the motion response of a marine structure in waves can be represented within a linear framework by the Cummins Equation. This equation contains a convolution term that represents the component of the radiation forces associated with fluid memory effects. Several methods have been proposed in the literature for the identification of parametric models to approximate and replace this convolution term. This replacement can facilitate the model implementation in simulators and the analysis of motion control designs. Some of the reported identification methods consider the problem in the time domain while other methods consider the problem in the frequency domain. This paper compares the application of these identification methods. The comparison is based not only on the quality of the estimated models, but also on the ease of implementation, ease of use, and the flexibility of the identification method to incorporate prior information related to the model being identified. To illustrate the main points arising from the comparison, a particular example based on the coupled vertical motion of a modern containership vessel is presented.
Abstract:
Time-domain models of marine structures based on frequency-domain data are usually built upon the Cummins equation. This type of model is a vector integro-differential equation involving convolution terms. These convolution terms are not convenient for the analysis and design of motion control systems. In addition, such models are neither efficient with respect to simulation time nor easy to implement in standard simulation packages. For these reasons, different methods have been proposed in the literature as approximate alternative representations of the convolutions. Because convolution is a linear operation, different approaches can be followed to obtain an approximately equivalent linear system in the form of either transfer-function or state-space models. This process involves the use of system identification, and several options are available depending on how the identification problem is posed. This raises the question of whether one method is better than the others. This paper therefore has three objectives. The first is to revisit some of the methods for replacing the convolutions that have been reported in different areas of the analysis of marine systems: hydrodynamics, wave energy conversion, and motion control systems. The second is to compare the different methods in terms of complexity and performance; for this purpose, a model for the response in the vertical plane of a modern containership is considered. The third is to describe the implementation of the resulting model in the standard simulation environment Matlab/Simulink.
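To make the implementation idea concrete, here is a minimal one-degree-of-freedom sketch of a Cummins-type model in which the radiation-force convolution is replaced by a small state-space system (A_r, B_r, C_r) driven by velocity. It is a hedged illustration, not the paper's containership model: all numerical values (mass, added mass, restoring coefficient, radiation model, wave excitation) are placeholders; in practice the radiation model is identified from hydrodynamic-code data.

```python
# One-DOF Cummins-type simulation with the memory convolution replaced
# by an identified state-space radiation model.
import numpy as np
from scipy.integrate import solve_ivp

M, A_inf, C_rest = 1.0e5, 2.0e4, 3.0e5        # mass, infinite-freq. added mass, restoring
A_r = np.array([[0.0, 1.0], [-4.0, -1.2]])    # placeholder radiation model matrices
B_r = np.array([0.0, 1.0])
C_r = np.array([5.0e4, 0.0])

def wave_force(t):
    return 2.0e4 * np.sin(0.8 * t)            # regular-wave excitation (placeholder)

def rhs(t, y):
    x, v, z1, z2 = y
    z = np.array([z1, z2])
    mu = C_r @ z                               # memory (radiation) force from state-space model
    a = (wave_force(t) - mu - C_rest * x) / (M + A_inf)
    dz = A_r @ z + B_r * v                     # radiation states driven by velocity
    return [v, a, dz[0], dz[1]]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0, 0.0], max_step=0.05)
print(f"steady-state amplitude ~ {np.max(np.abs(sol.y[0][-1000:])):.3f} m")
```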
Abstract:
We present results of computational simulations of tungsten-inert-gas and metal-inert-gas welding. The arc plasma and the electrodes (including the molten weld pool when necessary) are included self-consistently in the computational domain. It is shown, using three examples, that it would be impossible to accurately estimate the boundary conditions on the weld-pool surface without including the arc plasma in the computational domain. First, we show that the shielding gas composition strongly affects the properties of the arc that influence the weld pool: heat flux density, current density, shear stress and arc pressure at the weld-pool surface. Demixing is found to be important in some cases. Second, the vaporization of the weld-pool metal and the diffusion of the metal vapour into the arc plasma are found to decrease the heat flux density and current density to the weld pool. Finally, we show that the shape of the wire electrode in metal-inert-gas welding has a strong influence on flow velocities in the arc and the pressure and shear stress at the weld-pool surface. In each case, we present evidence that the geometry and depth of the weld pool depend strongly on the properties of the arc.
Abstract:
We investigate the blend morphology and performance of bulk heterojunction organic photovoltaic devices comprising the donor polymer, pDPP-TNT (poly{3,6-dithiophene-2-yl-2,5-di(2-octyldodecyl)-pyrrolo[3,4-c]pyrrole-1,4-dione-alt-naphthalene}) and the fullerene acceptor, [70]PCBM ([6,6]-phenyl C71-butyric acid methyl ester). The blend morphology is heavily dependent upon the solvent system used in the fabrication of thin films. Thin films spin-coated from chloroform possess a cobblestone-like morphology, consisting of thick, round-shaped [70]PCBM-rich mounds separated by thin polymer-rich valleys. The size of the [70]PCBM domains is found to depend on the overall film thickness. Thin films spin-coated from a chloroform:dichlorobenzene mixed solvent system are smooth and consist of a network of pDPP-TNT nanofibers embedded in a [70]PCBM-rich matrix. Rinsing the films in hexane selectively removes [70]PCBM and allows for analysis of domain size and purity. It also provides a means for investigating exciton dissociation efficiency through relative photoluminescence yield measurements. Devices fabricated from chloroform solutions show much poorer performance than the devices fabricated from the mixed solvent system; this disparity in performance is seen to be more pronounced with increasing film thickness. The primary cause for the improved performance of devices fabricated from mixed solvents is attributed to the greater donor-acceptor interfacial area and resulting greater capacity for charge carrier generation.
Abstract:
With the increasing importance of Application Domain Specific Processor (ADSP) design, a significant challenge is to identify special-purpose operations for implementation as a customized instruction. While many methodologies have been proposed for this purpose, they all work for a single algorithm chosen from the target application domain. Such algorithm-specific approaches are not suitable for designing instruction sets applicable to a whole family of related algorithms. For an entire range of related algorithms, this paper develops a methodology for identifying compound operations, as a basis for designing “domain-specific” Instruction Set Architectures (ISAs) that can efficiently run most of the algorithms in a given domain. Our methodology combines three different static analysis techniques to identify instruction sequences common to several related algorithms: identification of (non-branching) instruction sequences that occur commonly across the algorithms; identification of instruction sequences nested within iterative constructs that are thus executed frequently; and identification of commonly-occurring instruction sequences that span basic blocks. Choosing different combinations of these results enables us to design domain-specific special operations with different desired characteristics, such as performance or suitability as a library function. To demonstrate our approach, case studies are carried out for a family of thirteen string matching algorithms. Finally, the validity of our static analysis results is confirmed through independent dynamic analysis experiments and performance improvement measurements.
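The first of the three analyses named above, finding non-branching instruction sequences that occur across several related algorithms, can be sketched as an n-gram intersection over basic-block bodies. The opcode listings below are invented placeholders standing in for compiler-generated code for the string-matching algorithms studied in the paper.

```python
# Hedged sketch: opcode n-grams common to several algorithm listings.
from collections import Counter
from itertools import chain

BRANCHES = {"beq", "bne", "jmp", "call", "ret"}

def basic_blocks(opcodes):
    """Split a linear opcode listing into non-branching runs (basic-block bodies)."""
    block = []
    for op in opcodes:
        if op in BRANCHES:
            if block:
                yield block
            block = []
        else:
            block.append(op)
    if block:
        yield block

def ngrams(seq, n):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def common_sequences(listings, n, min_algorithms):
    """Opcode n-grams appearing in at least min_algorithms of the listings."""
    per_algorithm = [set(chain.from_iterable(ngrams(b, n) for b in basic_blocks(l)))
                     for l in listings]
    counts = Counter(chain.from_iterable(per_algorithm))
    return [seq for seq, c in counts.items() if c >= min_algorithms]

# Toy listings for three hypothetical algorithms.
listings = [
    ["load", "cmp", "sub", "beq", "load", "add", "store", "jmp"],
    ["load", "cmp", "sub", "bne", "add", "store", "ret"],
    ["mov", "load", "cmp", "sub", "beq", "store"],
]
print(common_sequences(listings, n=3, min_algorithms=3))
# -> [('load', 'cmp', 'sub')]
```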
Abstract:
In an age of increasingly information technology (IT) driven environments, governments and other public sector organisations (PSOs) are expected to demonstrate the business value of their investment in IT and take advantage of the opportunities offered by technological advancements. Strategic alignment (SA) emerged as a mechanism to bridge the gap between business and IT missions, objectives, and plans in order to ensure value optimisation from investment in IT and enhance organisational performance. However, achieving and sustaining SA remains a challenge, requiring ever more agility to keep up with turbulent organisational environments. Shared domain knowledge (SDK) between the IT department and other diverse organisational groups is considered one of the factors influencing the successful implementation of SA. However, SDK in PSOs has received relatively little empirical attention. This paper presents findings from a study that investigated the influence of SDK on SA within organisations in the Australian public sector. The research model examined the relationship between SDK across the business and IT domains and SA, using a survey of 56 public sector professionals and executives. A key research contribution is the empirical demonstration that increasing levels of SDK between IT and business groups lead to increased SA.