927 results for feature inspection method
Abstract:
The space and time fractional Bloch–Torrey equation (ST-FBTE) has been used to study anomalous diffusion in the human brain. Numerical methods for solving the ST-FBTE in three dimensions are computationally demanding. In this paper, we propose a computationally effective fractional alternating direction method (FADM) to overcome this problem. We consider the ST-FBTE on a finite domain where the time and space derivatives are replaced by the Caputo–Djrbashian and the sequential Riesz fractional derivatives, respectively. The stability and convergence properties of the FADM are discussed. Finally, some numerical results for the ST-FBTE are given to confirm our theoretical findings.
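For orientation, the equation has the following general shape (an illustrative sketch only; the paper's exact coefficients, derivative orders and boundary conditions are not reproduced here), with a Caputo–Djrbashian derivative of order α ∈ (0, 1] in time and Riesz fractional derivatives of order 2β in each spatial direction:

```latex
% Illustrative form of a space-time fractional Bloch--Torrey equation
{}^{C}_{0}D^{\alpha}_{t}\, M(\mathbf{x},t)
  = \lambda\, M(\mathbf{x},t)
  + K_{\beta}\left(
      \frac{\partial^{2\beta}}{\partial|x|^{2\beta}}
    + \frac{\partial^{2\beta}}{\partial|y|^{2\beta}}
    + \frac{\partial^{2\beta}}{\partial|z|^{2\beta}}
    \right) M(\mathbf{x},t),
\qquad 0<\alpha\le 1,\quad \tfrac{1}{2}<\beta\le 1.
```

The direction-by-direction structure of the Riesz terms is what makes an alternating direction splitting natural: each time step is advanced through three one-dimensional fractional solves rather than one coupled three-dimensional solve.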
Abstract:
This article reports on the design and implementation of a Computer-Aided Die Design System (CADDS) for sheet-metal blanks. The system is designed by considering several factors, such as the complexity of blank geometry, reduction in scrap material, production requirements, availability of press equipment and standard parts, punch profile complexity, and the manufacturing method of the tool elements. The interaction among these parameters and how they affect designers' decision patterns is described. The system is implemented by interfacing AutoCAD with the higher-level languages FORTRAN 77 and AutoLISP. A database of standard die elements is created by parametric programming, which is an enhanced feature of AutoCAD. The greatest advantage achieved by the system is the rapid generation of the most efficient strip and die layouts, including information about the tool configuration.
Abstract:
A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed in which the transitional density is a multivariate Gaussian density with first and second moments approximating the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density; for nonlinear drift and diffusion functions the approximation is extremely good and is as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.
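As a minimal illustration of the idea (not the paper's estimator), the sketch below builds a Gaussian quasi-likelihood for a scalar diffusion dX = μ(X; θ)dt + σ(X; θ)dW, approximating the first two transitional moments with a one-step Euler scheme; the paper's moment approximation is more accurate, and the mean-reverting example model, parameter values and function names here are hypothetical.

```python
# Gaussian quasi-maximum likelihood for a scalar diffusion (illustrative sketch).
# The transitional density is approximated as Normal(mean, var) with Euler moments.
import numpy as np
from scipy.optimize import minimize

def neg_log_qml(theta, x, dt, drift, diffusion):
    """Negative Gaussian quasi-log-likelihood over the observed transitions."""
    x0, x1 = x[:-1], x[1:]
    mean = x0 + drift(x0, theta) * dt           # approximate first moment
    var = diffusion(x0, theta) ** 2 * dt        # approximate second central moment
    var = np.maximum(var, 1e-12)                # guard against degeneracy
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x1 - mean) ** 2 / var)

# Hypothetical mean-reverting example: dX = k(m - X) dt + s sqrt(X) dW.
drift = lambda x, th: th[0] * (th[1] - x)
diffusion = lambda x, th: th[2] * np.sqrt(np.maximum(x, 1e-12))

rng = np.random.default_rng(0)
dt, n, theta_true = 1 / 252, 2000, np.array([2.0, 0.05, 0.15])
x = np.empty(n); x[0] = 0.05
for i in range(n - 1):                          # simulate synthetic observations
    x[i + 1] = abs(x[i] + drift(x[i], theta_true) * dt
                   + diffusion(x[i], theta_true) * np.sqrt(dt) * rng.standard_normal())

fit = minimize(neg_log_qml, x0=[1.0, 0.1, 0.1], args=(x, dt, drift, diffusion),
               method="Nelder-Mead")
print("estimated (k, m, s):", fit.x)
```

For affine drift and diffusion the Gaussian moments can be taken exactly, which is the regime in which the quasi-likelihood coincides with the true transitional moments.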
Abstract:
Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development process of a controlled drug delivery device may be facilitated enormously by the mathematical modelling of drug release mechanisms, directly decreasing the number of necessary experiments. Such mathematical modelling is difficult because several mechanisms are involved during the drug release process. The main drug release mechanisms of a controlled release device are based on the device's physicochemical properties, and include diffusion, swelling and erosion. In this thesis, four controlled drug delivery models are investigated. These four models selectively involve the solvent penetration into the polymeric device, the swelling of the polymer, the polymer erosion and the drug diffusion out of the device, but all share two common key features. The first is that the solvent penetration into the polymer causes the transition of the polymer from a glassy state into a rubbery state. The interface between the two states of the polymer is modelled as a moving boundary and the speed of this interface is governed by a kinetic law. The second feature is that drug diffusion only happens in the rubbery region of the polymer, with a nonlinear diffusion coefficient which is dependent on the concentration of solvent. These models are analysed using both formal asymptotics and numerical computation, where front-fixing methods and the method of lines with finite difference approximations are used to solve the models numerically. This numerical scheme is conservative, accurate and easily applied to moving boundary problems, and is thoroughly explained in Section 3.2. From the small-time asymptotic analysis in Sections 5.3.1, 6.3.1 and 7.2.1, these models exhibit the non-Fickian behaviour referred to as Case II diffusion, and an initial constant rate of drug release, which is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour identified in the literature. The knowledge obtained from investigating these models can help to develop more complex multi-layered drug delivery devices in order to achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustainable and constant drug release or bimodal drug release, is also discussed in this research. The moving boundary problem describing the solvent penetration into the polymer also arises in melting and freezing problems, which have been modelled as the classical one-phase Stefan problem. The classical one-phase Stefan problem exhibits unphysical singularities at the complete melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; the resulting problem is called the one-phase Stefan problem with kinetic undercooling. Interestingly, we discover that the unphysical singularities of the classical one-phase Stefan problem at the complete melting time are regularised, and the small-time asymptotic analysis in Section 3.3 shows that the small-time behaviour of the one-phase Stefan problem with kinetic undercooling differs from that of the classical one-phase Stefan problem.
In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (without kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension exhibits finite-time blow-up. Therefore we investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles, and find that the solution continues to exist until complete melting. The investigation of including kinetic undercooling and surface tension in the melting problems reveals more insight into the regularisation of unphysical singularities in the classical one-phase Stefan problem. This investigation gives a better understanding of melting a particle, and contributes to the current body of knowledge related to melting and freezing due to heat conduction.
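For orientation, a dimensionless one-phase melting problem carrying both regularisations discussed above can be sketched as follows (scalings, geometry and sign conventions follow the thesis and are not reproduced exactly here; ε and σ denote the kinetic undercooling and surface tension parameters):

```latex
% One-phase Stefan problem with kinetic undercooling and surface tension (sketch)
\begin{aligned}
  u_t &= u_{xx}, && 0 < x < s(t), \\
  \dot{s}(t) &= -\,u_x\bigl(s(t),t\bigr), && \text{(Stefan condition, unit latent heat)} \\
  u\bigl(s(t),t\bigr) &= -\,\sigma\,\kappa - \epsilon\,\dot{s}(t). && \text{(Gibbs--Thomson + kinetic undercooling)}
\end{aligned}
```

Here κ is the front curvature (proportional to 1/s(t) for a spherical particle). Setting ε = σ = 0 recovers the classical problem with its complete-melting singularity; σ = 0 gives the kinetic undercooling problem analysed in Section 3.3; and ε = 0 gives the surface-tension-only problem that exhibits finite-time blow-up.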
Abstract:
Biological systems involving proliferation, migration and death are observed across all scales. For example, they govern cellular processes such as wound healing, as well as the population dynamics of groups of organisms. In this paper, we provide a simplified method for correcting mean-field approximations of volume-excluding birth-death-movement processes on a regular lattice. An initially uniform distribution of agents on the lattice may give rise to spatial heterogeneity, depending on the relative rates of proliferation, migration and death. Many frameworks chosen to model these systems neglect spatial correlations, which can lead to inaccurate predictions of their behaviour. For example, the logistic model is frequently chosen, which is the mean-field approximation in this case. This mean-field description can be corrected by including a system of ordinary differential equations for pair-wise correlations between lattice site occupancies at various lattice distances. In this work we discuss difficulties with this method and provide a simplification, in the form of a partial differential equation description for the evolution of pair-wise spatial correlations over time. We test our simplified model against the more complex corrected mean-field model, finding excellent agreement. We show how our model successfully predicts system behaviour in regions where the mean-field approximation shows large discrepancies. Additionally, we investigate regions of parameter space where migration is reduced relative to proliferation, which has not been examined in detail before, and our method successfully corrects the deviations observed in the mean-field model in these parameter regimes.
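For reference, the uncorrected mean-field description named above is the logistic ODE dC/dt = P_p C(1 − C) − P_d C for the average lattice occupancy C(t); the sketch below simply integrates it (rate names and values are illustrative, and the pair-correlation correction itself is not reproduced). Note that the migration rate does not appear at all, which is precisely why spatial correlations must be tracked when movement is slow relative to proliferation.

```python
# Logistic mean-field for a volume-excluding birth-death-movement process (sketch).
# C(t) is average lattice occupancy; Pp and Pd are per-agent proliferation/death rates.
import numpy as np
from scipy.integrate import solve_ivp

def logistic_mean_field(t, C, Pp, Pd):
    # Proliferation places a daughter only on an empty neighbour: factor (1 - C).
    return Pp * C * (1.0 - C) - Pd * C

Pp, Pd, C0 = 1.0, 0.2, 0.05                      # illustrative parameter values
sol = solve_ivp(logistic_mean_field, (0.0, 50.0), [C0], args=(Pp, Pd),
                dense_output=True)
t = np.linspace(0.0, 50.0, 200)
C = sol.sol(t)[0]
print(f"steady state ~ {C[-1]:.3f}; mean-field predicts 1 - Pd/Pp = {1 - Pd/Pp:.3f}")
```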
Abstract:
1. Autonomous acoustic recorders are widely available and can provide a highly efficient method of species monitoring, especially when coupled with software to automate data processing. However, the adoption of these techniques is restricted by a lack of direct comparisons with existing manual field surveys. 2. We assessed the performance of autonomous methods by comparing manual and automated examination of acoustic recordings with a field-listening survey, using commercially available autonomous recorders and custom call detection and classification software. We compared the detection capability, time requirements, areal coverage and weather condition bias of these three methods using an established call monitoring programme for a nocturnal bird, the little spotted kiwi (Apteryx owenii). 3. The autonomous recorder methods had very high precision (>98%) and required <3% of the time needed for the field survey. They were less sensitive, with visual spectrogram inspection recovering 80% of the total calls detected and automated call detection 40%, although this recall increased with signal strength. The areal coverage of the spectrogram inspection and automatic detection methods was 85% and 42% of the field survey, respectively. The methods using autonomous recorders were more adversely affected by wind and did not show the positive association between ground moisture and call rates that was apparent from the field counts. However, all methods produced the same result for the most important conservation information from the survey: the annual change in calling activity. 4. Autonomous monitoring techniques incur different biases from manual surveys and so can yield different ecological conclusions if sampling is not adjusted accordingly. Nevertheless, the sensitivity, robustness and high accuracy of automated acoustic methods demonstrate that they offer a suitable and extremely efficient alternative to field observer point counts for species monitoring.
Abstract:
Recent fire research into the behaviour of light gauge steel frame (LSF) wall systems has developed fire design rules based on the Australian and European cold-formed steel design standards, AS/NZS 4600 and Eurocode 3 Part 1.3. However, these design rules are complex since the LSF wall studs are subjected to non-uniform elevated temperature distributions when the walls are exposed to fire from one side. Therefore this paper proposes an alternative design method for routine prediction of the fire resistance rating of LSF walls. In this method, suitable equations are first proposed to predict the idealised stud time-temperature profiles of eight different LSF wall configurations subject to standard fire conditions, based on full-scale fire test results. A new set of equations is then proposed to find the critical hot flange (failure) temperature for a given load ratio for the same LSF wall configurations with varying steel grades and thicknesses. These equations were developed based on detailed finite element analyses that predicted the axial compression capacities and failure times of LSF wall studs subject to non-uniform temperature distributions with varying steel grades and thicknesses. This paper proposes a simple design method in which the two sets of equations developed for time-temperature profiles and critical hot flange temperatures are used to find the failure times of LSF walls. The proposed method was verified by comparing its predictions with the results from full-scale fire tests and finite element analyses. This paper presents the details of this study including the finite element models of LSF wall studs, the results from relevant fire tests and finite element analyses, and the proposed equations.
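In use, the design method reduces to a root-finding problem: given the idealised hot-flange time-temperature profile T_hf(t) for a wall configuration and the critical temperature T_cr(LR) for the applied load ratio, the failure time solves T_hf(t) = T_cr(LR). The sketch below shows this lookup with placeholder fits; the actual fitted equations for each wall configuration, steel grade and thickness are those of the paper.

```python
# Failure-time lookup for an LSF wall (sketch; both fits are placeholders).
import numpy as np
from scipy.optimize import brentq

def hot_flange_temp(t_min):
    """Idealised stud hot-flange temperature [C] versus time [min] (placeholder)."""
    return 20.0 + 650.0 * (1.0 - np.exp(-t_min / 45.0))

def critical_temp(load_ratio):
    """Critical hot-flange (failure) temperature [C] for a load ratio (placeholder)."""
    return 700.0 - 500.0 * load_ratio

lr = 0.4
t_fail = brentq(lambda t: hot_flange_temp(t) - critical_temp(lr), 1e-3, 240.0)
print(f"load ratio {lr}: failure at ~{t_fail:.0f} min "
      f"(hot flange at {hot_flange_temp(t_fail):.0f} C)")
```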
Abstract:
Stereo-based visual odometry algorithms are heavily dependent on an accurate calibration of the rigidly fixed stereo pair. Even small shifts in the rigid transform between the cameras can impact feature matching and 3D scene triangulation, adversely affecting pose estimates and applications dependent on long-term autonomy. In many field-based scenarios where vibration, knocks and pressure changes affect a robotic vehicle, maintaining an accurate stereo calibration cannot be guaranteed over long periods. This paper presents a novel method of recalibrating overlapping stereo camera rigs from online visual data while simultaneously providing an up-to-date and up-to-scale pose estimate. The proposed technique implements a novel form of partitioned bundle adjustment that explicitly includes the homogeneous transform between a stereo camera pair to generate an optimal calibration. Pose estimates are computed in parallel to the calibration, providing online recalibration which seamlessly integrates into a stereo visual odometry framework. We present results demonstrating accurate performance of the algorithm on both simulated scenarios and real data gathered from a wide-baseline stereo pair on a ground vehicle traversing urban roads.
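A heavily simplified sketch of the core idea follows: the left-to-right stereo transform is itself a parameter of the reprojection-error minimisation. To stay short, the sketch holds the vehicle poses and 3D structure fixed and refines only the extrinsic; the paper's partitioned bundle adjustment refines poses jointly and in parallel. The intrinsics, synthetic scene and drifted extrinsic below are all made up for illustration.

```python
# Refining a stereo extrinsic from reprojection residuals (simplified sketch).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1.0]])  # shared intrinsics

def project(pts_cam):
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, pts_left, obs_right):
    """params = [rotation vector (3), translation (3)] of the left->right transform."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    pts_right = pts_left @ R.T + params[3:]
    return (project(pts_right) - obs_right).ravel()

rng = np.random.default_rng(1)
pts = rng.uniform([-5, -2, 4], [5, 2, 30], size=(200, 3))     # points, left-camera frame
true = np.array([0.01, -0.02, 0.005, -0.54, 0.0, 0.01])       # drifted true extrinsic
obs = project(pts @ Rotation.from_rotvec(true[:3]).as_matrix().T + true[3:])
obs += rng.normal(0, 0.3, obs.shape)                          # pixel noise

init = np.array([0.0, 0.0, 0.0, -0.5, 0.0, 0.0])              # nominal 0.5 m baseline
fit = least_squares(residuals, init, args=(pts, obs))
print("recovered extrinsic:", np.round(fit.x, 4))
```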
Abstract:
In this study x-ray CT has been used to produce a 3D image of an irradiated PAGAT gel sample, with noise-reduction achieved using the ‘zero-scan’ method. The gel was repeatedly CT scanned and a linear fit to the varying Hounsfield unit of each pixel in the 3D volume was evaluated across the repeated scans, allowing a zero-scan extrapolation of the image to be obtained. To minimise heating of the CT scanner’s x-ray tube, this study used a large slice thickness (1 cm), to provide image slices across the irradiated region of the gel, and a relatively small number of CT scans (63), to extrapolate the zero-scan image. The resulting set of transverse images shows reduced noise compared to images from the initial CT scan of the gel, without being degraded by the additional radiation dose delivered to the gel during the repeated scanning. The full, 3D image of the gel has a low spatial resolution in the longitudinal direction, due to the selected scan parameters. Nonetheless, important features of the dose distribution are apparent in the 3D x-ray CT scan of the gel. The results of this study demonstrate that the zero-scan extrapolation method can be applied to the reconstruction of multiple x-ray CT slices, to provide useful 2D and 3D images of irradiated dosimetry gels.
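The per-voxel zero-scan extrapolation described above is a linear least-squares fit of Hounsfield unit against scan number, keeping the value extrapolated to scan zero. A vectorised sketch (array and function names are hypothetical):

```python
# Zero-scan extrapolation: fit HU vs scan number per voxel, keep the intercept.
import numpy as np

def zero_scan_image(scans):
    """scans: array of shape (n_scans, ...) holding repeated CT volumes in HU.
    Returns the per-voxel linear extrapolation to zero scans delivered."""
    n = scans.shape[0]
    idx = np.arange(1, n + 1, dtype=float)       # scan numbers 1..n
    xc = idx - idx.mean()
    flat = scans.reshape(n, -1)
    # least-squares line HU = slope * scan_number + intercept, fitted per voxel
    slope = (xc[:, None] * (flat - flat.mean(axis=0))).sum(axis=0) / (xc ** 2).sum()
    intercept = flat.mean(axis=0) - slope * idx.mean()
    return intercept.reshape(scans.shape[1:])

# e.g. for the 63 repeated scans used here:
# zero = zero_scan_image(np.stack(list_of_volumes))
```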
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While the existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
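For concreteness, a generic baseline with the four stages named above (feature extraction, keyed randomization, quantization, binary encoding) might look like the sketch below; this is an illustrative linear-projection baseline of the kind critiqued in the dissertation, not the proposed HOS/Radon method.

```python
# Generic robust image hash: block-mean features -> keyed linear projection ->
# median-threshold quantization -> binary encoding (illustrative baseline).
import numpy as np

def robust_hash(image, key, n_bits=64, block=8):
    h, w = image.shape
    hb, wb = h // block, w // block
    # 1. feature extraction: mean intensity of non-overlapping blocks
    feats = image[:hb * block, :wb * block] \
        .reshape(hb, block, wb, block).mean(axis=(1, 3)).ravel()
    # 2. randomization: key-seeded *linear* projection (robust, but the linearity
    #    is exactly the security weakness discussed above)
    proj = np.random.default_rng(key).standard_normal((n_bits, feats.size)) @ feats
    # 3-4. quantization and encoding: threshold at the median; the threshold must
    #    be learnt/stored, and its presence leaks information about the features
    return (proj > np.median(proj)).astype(np.uint8)

img = np.random.default_rng(3).integers(0, 256, (256, 256)).astype(float)
noisy = img + np.random.default_rng(4).normal(0, 4, img.shape)    # a minor change
h1, h2 = robust_hash(img, key=42), robust_hash(noisy, key=42)
print("Hamming distance:", int(np.sum(h1 != h2)), "of 64 bits")   # typically small
```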
Abstract:
Facial landmarks play an important role in face recognition. They serve different steps of the recognition pipeline, such as pose estimation, face alignment, and local feature extraction. Recently, cascaded shape regression has been proposed to accurately locate facial landmarks. A large number of weak regressors are cascaded in a sequence to fit face shapes to the correct landmark locations. In this paper, we propose to improve the method by applying gradual training. With this training, the regressors are not aimed directly at the true locations. Instead, the sequence is divided into successive parts, each of which is aimed at intermediate targets between the initial and the true locations. We also investigate the incorporation of pose information in the cascaded model. The aim is to find out whether the model can be directly used to estimate head pose. Experiments on the Annotated Facial Landmarks in the Wild database have shown that the proposed method is able to improve the localization and give accurate estimates of pose.
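A toy version of the gradual-training scheme: with K cascade parts, part k regresses toward the intermediate target init + (k/K)(true − init) rather than directly toward the ground truth. The interpolation rule is an assumption for illustration, and the weak regressors here are plain ridge-regularised linear maps on the current shape estimate rather than the image-indexed fern or tree regressors used in practice.

```python
# Gradual training of a cascaded shape regressor (toy sketch).
# Shapes are flattened landmark vectors; weak regressors are ridge linear maps.
import numpy as np

def train_gradual_cascade(init, true, n_parts=4, stages_per_part=3, lam=1e-3):
    shapes, regressors = init.copy(), []
    n, d = shapes.shape
    for k in range(1, n_parts + 1):
        target = init + (k / n_parts) * (true - init)   # intermediate target
        for _ in range(stages_per_part):
            X = np.hstack([shapes, np.ones((n, 1))])    # current shape + bias
            W = np.linalg.solve(X.T @ X + lam * np.eye(d + 1),
                                X.T @ (target - shapes))
            shapes = shapes + X @ W                     # apply the weak regressor
            regressors.append(W)
    return regressors

def apply_cascade(init, regressors):
    shapes = init.copy()
    for W in regressors:
        X = np.hstack([shapes, np.ones((shapes.shape[0], 1))])
        shapes = shapes + X @ W
    return shapes

rng = np.random.default_rng(0)
true = rng.normal(size=(500, 10))                       # 5 landmarks, (x, y) flattened
init = true + rng.normal(scale=0.5, size=true.shape)    # perturbed initialisations
regs = train_gradual_cascade(init, true)
err = np.linalg.norm(apply_cascade(init, regs) - true, axis=1).mean()
print(f"mean alignment error after cascade: {err:.3f}")
```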
Abstract:
In spite of the activism of professional bodies and researchers, empirical evidence shows that project management still does not deliver the expected benefits and promises. Hence, many have questioned the validity of the hegemonic rationalist paradigm, anchored in the Enlightenment and Natural Sciences tradition, that has supported project management research and practice for the last 60 years, as well as the lack of relevance to practice of the current conceptual base of project management. In order to address these limitations, many authors, taking a post-modernist stance in social sciences, build on 'pre-modern' philosophies such as the Aristotelian one, especially emphasizing the role of praxis (activity) and phronesis (practical wisdom, prudence). Indeed, 'Praxis … is the central category of the philosophy which is not merely an interpretation of the world, but is also a guide to its transformation …' (Vazquez, 1977: 149). Therefore, praxis offers an important focus for practitioners and researchers in social sciences, one in which theory is integrated with practice at the point of intervention. Simply stated, praxis can serve as a common ground for those interested in basic and applied research by providing knowledge of the reality in which action, informed by theory, takes place. Consequently, I suggest a 'praxeological' style of reasoning (praxeology being defined as the study or science of human actions and conduct, including praxis, practices and phronesis) to go beyond the 'Theory-Practice' divide. Moreover, I argue that we need to move away from the current dichotomy between the two classes 'scholars-experts-researchers' and 'managers/workers-practitioners-participants'. Considering one single class of 'PraXitioner', becoming a phronimos, may help to create new perspectives and open up new ways of thinking and acting in project situations. Thus, I call for a Perestroika in researching and acting in project management situations. My intent is to suggest a balanced praxeological view of the apparent opposition between social and natural science approaches. I explore, in this chapter, three key questions, covering the ontological, epistemological and praxeological dimensions of project management in action. 1. Are the research approaches currently in use appropriate for generating contributions that matter to both theory and practice with regard to what a 'project' is, or to what we do when we call a specific situation 'a project'? 2. On the basis of which intellectual virtues is the knowledge generated, and what is the impact for theory and practice? 3. Are the modes of action of practitioners 'prudent', and do they differentiate or reconcile formal and abstract rationality from substantive rationality and situated reasoning in the mode of action they adopt in particular project situations? The investigation of the above questions leads me to a debate about 'Project Management-as-Praxis', and to suggest 'A' (not 'THE') 'praxeological' style of reasoning and mode of inquiry – acknowledging a non-paradigmatic, subjective and kaleidoscopic perspective – for 'Knowing-as-Practicing' in project management. In short, this is about making a 'Projects Science' that matters.
Abstract:
Awareness of the need to avoid losses and casualties due to rain-induced landslides is increasing in regions that routinely experience heavy rainfall. Improvements in early warning systems against rain-induced landslides, such as prediction modelling using rainfall records, are urgently needed in vulnerable regions. Existing warning systems have been based on stability chart development and real-time displacement measurement on slope surfaces. However, these systems still have drawbacks, such as neglect of the rain-induced instability mechanism, misleading predictions arising from purely probabilistic approaches, and short lead times for evacuation. In this research, a real-time predictive method is proposed to alleviate these drawbacks. A case-study soil slope in Indonesia that failed during rainfall in 2010 was used to verify the proposed predictive method. Using the results of field and laboratory characterization, numerical analyses were applied to develop a model of an unsaturated residual soil slope with deep cracks subject to rainwater infiltration. Real-time rainfall measurement at the slope and the prediction of future rainfall are also needed. By coupling transient seepage and stability analyses, the variation of the factor of safety of the slope with time was obtained, providing a basis for a method for real-time prediction of the rain-induced instability of slopes. This study shows that the proposed prediction method has the potential to be used in an early warning system against landslide hazard, since the factor of safety (FOS) and the timing of the predicted failure can be provided before the actual failure of the case-study slope.
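To illustrate how a factor of safety falls as infiltration raises pore-water pressure, the sketch below evaluates the standard infinite-slope expression FS = [c′ + (γ z cos²β − u_w) tan φ′] / (γ z sin β cos β) over a rising pore-pressure series. The paper's coupled transient seepage and stability analysis of a cracked unsaturated slope is far more detailed; all soil parameters and the rainfall-driven pore-pressure history here are illustrative.

```python
# Infinite-slope factor of safety under rising pore pressure (illustrative sketch).
import numpy as np

def factor_of_safety(u_w, c=10e3, phi=np.radians(30), beta=np.radians(35),
                     gamma=18e3, z=3.0):
    """c [Pa], phi & beta [rad], unit weight gamma [N/m^3], depth z [m], u_w [Pa]."""
    tau = gamma * z * np.sin(beta) * np.cos(beta)        # driving shear stress
    sigma_n = gamma * z * np.cos(beta) ** 2              # normal stress on slip plane
    return (c + (sigma_n - u_w) * np.tan(phi)) / tau

hours = np.arange(0, 48)
u_w = 2e3 + 500.0 * hours        # pore pressure rising during rainfall (illustrative)
fs = factor_of_safety(u_w)
below = hours[fs < 1.0]
print("FOS drops below 1.0 at hour", below[0] if below.size else "none in window")
```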
Abstract:
This project explores yarning as a methodology for understanding health and wellness from an indigenous woman's perspective. Previous research exploring indigenous Australian women's perspectives has used traditional Western methodologies, which the women themselves have often felt to be inappropriate and ineffective in gathering information and promoting discussion. This research arose from the indigenous women themselves, and resulted in the exploration of yarning as a methodology. Yarning is a conversational process that involves the sharing of stories and the development of knowledge. It prioritizes indigenous ways of communicating, in that it is culturally prescribed, cooperative, and respectful. The authors identify different types of yarning that are relevant throughout their research, and explain two types of yarning, family yarning and cross-cultural yarning, which have not been previously identified in the research literature. This project found that yarning as a research method is appropriate for community-based health research with indigenous Australian women. This may be an important finding for health professionals and researchers to consider when working and researching with indigenous women from other countries.
Abstract:
This thesis takes a new data mining approach for analyzing road/crash data by developing models for the whole road network and generating a crash risk profile. Roads with an elevated crash risk due to road surface friction deficit are identified. The regression tree model, predicting road segment crash rate, is applied in a novel deployment coined regression tree extrapolation that produces a skid resistance/crash rate curve. Using extrapolation allows the method to be applied across the network and cope with the high proportion of missing road surface friction values. This risk profiling method can be applied in other domains.
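A minimal sketch of the extrapolation idea: fit a regression tree predicting segment crash rate from skid resistance (plus other covariates in the real model), then sweep skid resistance over a grid while holding the other inputs fixed to trace out a skid resistance/crash rate curve. The data, covariates and model settings below are hypothetical.

```python
# Regression-tree extrapolation: sweep skid resistance through a fitted tree to
# trace a skid resistance / crash rate curve (hypothetical data and features).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 5000
skid = rng.uniform(0.3, 0.7, n)                  # skid resistance measurements
traffic = rng.lognormal(8, 1, n)                 # traffic volume, a second covariate
crash_rate = 2.0 * np.exp(-6 * (skid - 0.3)) + rng.normal(0, 0.2, n)  # synthetic

X = np.column_stack([skid, traffic])
tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=100).fit(X, crash_rate)

grid = np.linspace(0.3, 0.7, 50)                 # sweep skid resistance
fixed = np.full_like(grid, np.median(traffic))   # hold other covariates fixed
curve = tree.predict(np.column_stack([grid, fixed]))
for s, c in zip(grid[::10], curve[::10]):
    print(f"skid {s:.2f} -> predicted crash rate {c:.2f}")
```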