957 results for Finite-dimensional spaces


Relevance:

20.00%

Publisher:

Abstract:

Visual localization systems that are practical for autonomous vehicles in outdoor industrial applications must perform reliably in a wide range of conditions. Changing outdoor conditions cause difficulty by drastically altering the information available in the camera images. To confront the problem, we have developed a visual localization system that uses a surveyed three-dimensional (3D)-edge map of permanent structures in the environment. The map has the invariant properties necessary to achieve long-term robust operation. Previous 3D-edge map localization systems usually maintain a single pose hypothesis, making it difficult to initialize without an accurate prior pose estimate and also making them susceptible to misalignment with unmapped edges detected in the camera image. A multihypothesis particle filter is employed here to perform the initialization procedure with significant uncertainty in the vehicle's initial pose. A novel observation function for the particle filter is developed and evaluated against two existing functions. The new function is shown to further improve the abilities of the particle filter to converge given a very coarse estimate of the vehicle's initial pose. An intelligent exposure control algorithm is also developed that improves the quality of the pertinent information in the image. Results gathered over an entire sunny day and also during rainy weather illustrate that the localization system can operate in a wide range of outdoor conditions. The conclusion is that an invariant map, a robust multihypothesis localization algorithm, and an intelligent exposure control algorithm all combine to enable reliable visual localization through challenging outdoor conditions.
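The multi-hypothesis initialization idea can be illustrated with a toy particle filter. The setting below (a point vehicle, three surveyed landmarks observed through noisy range measurements, a Gaussian observation likelihood) is a simplified stand-in for the paper's 3D-edge map and observation function, not a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the surveyed map: three landmark positions.
landmarks = np.array([[10.0, 5.0], [-8.0, 12.0], [4.0, -9.0]])
true_pose = np.array([2.0, 3.0])
obs_sigma = 0.1

def observe(pose):
    """Noisy ranges from the vehicle to each landmark."""
    return np.linalg.norm(landmarks - pose, axis=1) + rng.normal(0.0, obs_sigma, 3)

# Spread many pose hypotheses over a large area: no accurate prior needed.
n = 2000
particles = rng.uniform(-20.0, 20.0, size=(n, 2))

for _ in range(30):
    z = observe(true_pose)
    # Observation function: Gaussian likelihood of the predicted ranges.
    pred = np.linalg.norm(particles[:, None, :] - landmarks[None, :, :], axis=2)
    logw = -0.5 * (((pred - z) / obs_sigma) ** 2).sum(axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample in proportion to weight, then jitter ("roughening").
    particles = particles[rng.choice(n, size=n, p=w)]
    particles += rng.normal(0.0, 0.05, size=(n, 2))

estimate = particles.mean(axis=0)  # hypotheses collapse near the true pose
```

Even starting from a 40 m x 40 m uniform prior, the surviving hypotheses concentrate around the true pose within a few updates, which is the behaviour the initialization procedure relies on.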

Abstract:

The structures of proton-transfer compounds of 4,5-dichlorophthalic acid (DCPA) with the aliphatic Lewis bases triethylamine, diethylamine, n-butylamine and piperidine, namely triethylaminium 2-carboxy-4,5-dichlorobenzoate, C6H16N+ · C8H3Cl2O4− (I), diethylaminium 2-carboxy-4,5-dichlorobenzoate, C4H12N+ · C8H3Cl2O4− (II), bis(n-butylaminium) 4,5-dichlorophthalate monohydrate, 2(C4H12N+) · C8H2Cl2O4(2−) · H2O (III), and bis(piperidinium) 4,5-dichlorophthalate monohydrate, 2(C5H12N+) · C8H2Cl2O4(2−) · H2O (IV), have been determined at 200 K. All compounds show hydrogen-bonding associations, giving discrete cation-anion units in (I), linear chains in (II), and two-dimensional structures in both (III) and (IV). In (I), a discrete cation-anion unit is formed through an asymmetric R2/1(4) N+-H...O,O' hydrogen-bonding association, whereas in (II) one-dimensional chains are formed through linear N-H...O associations involving both aminium H donors. In compounds (III) and (IV), the primary N-H...O linked cation-anion units are extended into a two-dimensional sheet structure via aminium N-H...O(carboxyl) and N-H...O(carbonyl) interactions. In the 1:1 salts [(I) and (II)], the hydrogen 4,5-dichlorophthalate anions are essentially planar, with short intramolecular carboxylic acid O-H...O(carboxyl) hydrogen bonds [O...O = 2.4223(14) and 2.388(2) Å, respectively]. This work provides a further example of the uncommon zero-dimensional hydrogen-bonded DCPA-Lewis base salt and of the one-dimensional chain structure type, while even in the hydrate structures of the 1:2 salts with the primary and secondary amines, the low dimensionality generally associated with 1:1 DCPA salts is found.

Abstract:

This paper presents the details of an investigation into the shear behaviour of a recently developed cold-formed steel beam known as the LiteSteel Beam (LSB). The LSB section has the unique shape of a channel beam with two rectangular hollow flanges, and is produced by a patented manufacturing process involving simultaneous cold-forming and dual electric resistance welding. In the present investigation, a series of numerical analyses based on three-dimensional finite element modeling, together with an experimental study, was carried out to investigate the shear behaviour of 10 different LSB sections. It was found that the current design rules in cold-formed steel structures design codes are very conservative for the shear design of LiteSteel beams. Significant improvements to web shear buckling occurred due to the presence of the rectangular hollow flanges, while considerable post-buckling strength was also observed. The design rules were therefore modified to include the available post-buckling strength, and suitable design rules were also developed in the direct strength method format. This paper presents the details of this investigation and its results, including the final design rules for the shear capacity of LSBs. It also presents new shear strength formulae for lipped channel beams, based on the current design equations for shear strength given in AISI (2007) and using the same approach as for LSBs.
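As context for the direct strength method format mentioned above, the generic DSM shear capacity rules (as given in AISI-style provisions, without tension field action) can be written compactly. This is the general format only, not the modified LSB rules developed in the paper:

```python
import math

def dsm_shear_capacity(V_y, V_cr):
    """Nominal shear capacity V_n in the direct strength method format
    (AISI-style, no tension field action).

    V_y  -- shear yield capacity of the web
    V_cr -- elastic shear buckling capacity of the web
    """
    lam = math.sqrt(V_y / V_cr)       # shear slenderness
    if lam <= 0.815:                  # yielding governs
        return V_y
    if lam <= 1.227:                  # inelastic shear buckling
        return 0.815 * math.sqrt(V_cr * V_y)
    return V_cr                       # elastic shear buckling governs
```

A stockier web (high V_cr) returns the yield capacity, a slender web returns the buckling capacity, and the transition branch captures the post-buckling reserve between the two.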

Abstract:

This paper examines the opportunities for social activities in public outdoor spaces associated with high-density residential living. The study surveyed activities in outdoor spaces outside three high-density residential communities in Brisbane. Results indicated that activity patterns in public outdoor space outside residential communities differ from those in general urban public outdoor space. This broadly, but not fully, supports current theories concerning activities in public space; that is, some environmental factors have an impact on the level of social interaction. The relationship between an outdoor space and a residential building may have a significant impact on the level of social activities, and as a consequence a new classification of activities in public space is suggested. In improving the level of social contact in public outdoor space outside a residential community, the challenge is how to encourage people to leave their comfortable homes and spend time in these public spaces. For residential buildings and public space to be treated as an integrated whole, the outdoor open spaces close to and surrounding these buildings must have a more welcoming design.

Abstract:

This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are ‘hybrid’ in nature, in that they are a composition of components whose individual properties may be easily described, but the performance of the model or algorithm as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms. The goal of this thesis is to analyse the computational aspects of hybrid models and hybrid algorithms in the Bayesian context. The first objective of the research focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate to both the quality of model fit to data and the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described, and the model is compared to the Normal case on the basis of goodness of fit. Through simulation studies, it is demonstrated that the t-mixture model can be more flexible and more parsimonious in terms of the number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature. The second objective of the research focuses on computational aspects of hybrid algorithms for Bayesian analysis. Two approaches are considered: a formal comparison of the performance of a range of hybrid algorithms, and a theoretical investigation of the performance of one of these algorithms in high dimensions.
For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as examples of hybrid algorithms. The statistical literature often treats statistical efficiency as the only criterion for an efficient algorithm; in this thesis the algorithms are also considered and compared from a more practical perspective. This extends to the study of how individual components contribute to the overall efficiency of a hybrid algorithm, and highlights weaknesses that may be introduced when these components are combined in a single algorithm. The second approach to considering computational aspects of hybrid algorithms involves an investigation of the performance of the PMC algorithm in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time. In particular, importance sampling based algorithms, including the PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general sampler, and explores a fundamental problem that occurs in applying importance sampling to a high-dimensional problem. The precision of the resulting estimate is measured by its asymptotic variance under conditions on the importance function. The exponential growth of the asymptotic variance with dimension is demonstrated, and it is shown that the optimal covariance matrix for the importance function can be estimated in a special case.
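The instability of importance sampling in high dimensions is easy to demonstrate numerically. The sketch below (a standard normal target with a slightly over-dispersed normal proposal; illustrative choices, not the thesis setup) shows the effective sample size fraction collapsing as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def ess_fraction(dim, n=20000, sigma=1.5):
    """Normalized effective sample size of importance sampling with a
    N(0, sigma^2 I) proposal for a N(0, I) target in `dim` dimensions."""
    x = sigma * rng.standard_normal((n, dim))
    # log p(x) - log q(x); additive constants cancel after normalization.
    logw = -0.5 * (1.0 - 1.0 / sigma ** 2) * (x ** 2).sum(axis=1)
    w = np.exp(logw - logw.max())
    return w.sum() ** 2 / (n * (w ** 2).sum())

low_dim, high_dim = ess_fraction(2), ess_fraction(50)
# The weight variance grows exponentially with dimension, so the usable
# fraction of the sample collapses: most weight sits on a few draws.
```

Since the per-dimension second moment of the weights exceeds one, the effective sample size fraction decays geometrically in the dimension, which is exactly the exponential variance growth analysed in the thesis.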

Abstract:

In this paper we describe the development of a three-dimensional (3D) imaging system for a 3500 tonne mining machine (dragline). Draglines are large walking cranes used for removing the dirt that covers a coal seam. Our group has been developing a dragline swing automation system since 1994. The system so far has been 'blind' to its external environment, and the work presented in this paper attempts to give the dragline an ability to sense its surroundings. A 3D digital terrain map (DTM) is created from data obtained from a two-dimensional laser scanner while the dragline swings. Experimental data from an operational dragline are presented.
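The geometry behind building such a DTM can be sketched as follows: each 2D scan line is a set of polar returns in a vertical plane, and the machine's swing angle rotates that plane about the vertical axis. The frame convention and the scanner mounting height below are assumptions for illustration, not the paper's calibration:

```python
import numpy as np

def scan_to_points(ranges, scan_angles, swing_angle, scanner_height=30.0):
    """Convert one 2D laser scan line, taken at a given swing angle,
    into 3D terrain points in a dragline-centred frame.

    ranges, scan_angles -- polar returns in the vertical scan plane
    swing_angle         -- machine swing angle (rad) rotating that plane
    scanner_height      -- hypothetical scanner mounting height (m)
    """
    # Decompose each return into forward distance and height drop.
    forward = ranges * np.cos(scan_angles)
    down = ranges * np.sin(scan_angles)
    # Rotate the scan plane about the vertical axis by the swing angle.
    x = forward * np.cos(swing_angle)
    y = forward * np.sin(swing_angle)
    z = scanner_height - down
    return np.column_stack([x, y, z])
```

Accumulating these points over a full swing, and gridding them, yields the digital terrain map.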

Abstract:

Today’s evolving networks experience a large number of different attacks, ranging from system break-ins and infection by automated attack tools such as worms, viruses and trojan horses, to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate, targeting Internet addresses without regard to whether they are legitimately allocated. In the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and to unearth unusual attack behaviors. However, such analysis is difficult due to the size and nature of the traffic collected on unused address spaces. In this dissertation, we present a network traffic analysis technique that uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. We use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces and thereby unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window based non-parametric cumulative sum (CUSUM) change detection technique for identifying changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behavior and also to detect when a particular change has ended.
Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
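A minimal version of this kind of detector can be sketched as follows: a sliding window supplies the reference statistics, the non-parametric CUSUM accumulates exceedances over an allowance, and the alarm threshold adapts to the window's variability. Parameter names and values here are illustrative, not the dissertation's:

```python
import numpy as np

def cusum_detect(series, window=50, drift_k=0.5, thresh_k=8.0):
    """Non-parametric CUSUM change detection with a dynamic threshold.

    The allowance (drift) and the alarm threshold are re-estimated from
    a sliding window of recent samples, so no fixed traffic model is
    assumed. Returns the indices at which a change is flagged.
    """
    s = 0.0
    alarms = []
    for t in range(window, len(series)):
        ref = series[t - window:t]
        mu, sd = ref.mean(), ref.std() + 1e-12
        s = max(0.0, s + (series[t] - mu - drift_k * sd))
        if s > thresh_k * sd:      # dynamic, variability-scaled threshold
            alarms.append(t)
            s = 0.0                # reset after an alarm
    return alarms

rng = np.random.default_rng(2)
# Synthetic "unused address space" traffic: a low baseline packet rate,
# then a sustained jump mimicking the onset of a scanning worm at t=200.
traffic = np.concatenate([rng.poisson(5.0, 200),
                          rng.poisson(20.0, 100)]).astype(float)
alarms = cusum_detect(traffic)
```

On this synthetic trace the first alarm fires within a few samples of the change point, while the drift term keeps the statistic near zero under baseline traffic.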

Abstract:

When the colonisers first came to Australia there was an urgent desire to map, name and settle. This desire stemmed, in part, from a fear of the unknown. It was thought that once these tasks were completed a sense of identity and belonging would automatically follow. In Anglo-Australian geography the map of Australia was always perceived in relationship to the larger map of Europe and Britain: the quicker Australia could be mapped, the quicker its connection with the ‘civilised’ world could be established. Official maps could be taken up in official history books, a detailed monumental history could begin, and Australians would feel secure in their place in the world. However, this was not the case, and anxieties about identity and belonging remained. One of the biggest hurdles was the fear of the open spaces and of not knowing how to move across the land. Attempts to transpose the colonisers’ use of space onto the Australian landscape did not work and led to confusion. Using authors who are often perceived as writers of national fictions (Henry Lawson, Barbara Baynton, Patrick White, David Malouf and Peter Carey), I will reveal how writing about space becomes a way to create a sense of belonging. It is through spatial knowledge and its application that we begin to gain a sense of closeness and identity. I will also look at how one of the greatest fears for the colonisers was the Aboriginal spatial command of the country. Aborigines already had a strongly developed awareness of spatial belonging, and their stories reveal this authority (seen in the work of Lorna Little and Mick McLean). Colonisers attempted to discredit this knowledge, but the stories and the land continue to recognise its legitimacy. From its beginnings Australian spaces have been spaces of hybridity, and the more the colonisers attempted to force predetermined structures onto these spaces, the more hybrid they became.

Abstract:

Introduction: Ovine models are widely used in orthopaedic research. To better understand the impact of orthopaedic procedures, computer simulations are necessary. 3D finite element (FE) models of bones allow implant designs to be investigated mechanically, thereby reducing mechanical testing. Hypothesis: We present the development and validation of an ovine tibia FE model for use in the analysis of tibia fracture fixation plates. Material & Methods: Mechanical testing of the tibia consisted of an offset 3-point bend test with three repetitions of loading to 350 N and return to 50 N. Tri-axial stacked strain gauges were applied to the anterior and posterior surfaces of the bone, and two rigid bodies, consisting of eight active infrared markers, were attached to the ends of the tibia. Positional measurements were taken with a FARO arm 3D digitiser. The FE model was constructed with both geometry and material properties derived from CT images of the bone. The elasticity-density relationship used for material property determination was validated separately using mechanical testing. The model was then transformed into the same coordinate system as the in vitro mechanical test and the loads applied. Results: Comparison between the mechanical testing and the FE model showed good correlation in surface strains (difference: anterior 2.3%, posterior 3.2%). Discussion & Conclusion: This method provides a simple way of generating subject-specific FE models from CT scans. The use of the CT data set for both the geometry and the material properties ensures a more accurate representation of the specific bone, which is reflected in the similarity of the surface strain results.
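The elasticity-density mapping mentioned above typically has two stages: a linear calibration from CT Hounsfield units to apparent density, followed by a power-law density-modulus relationship. The coefficients below are illustrative placeholders, not the study's calibration:

```python
def ct_to_modulus(hu, rho_a=0.0007, rho_b=0.5, e_a=6850.0, e_b=1.49):
    """Map a CT Hounsfield value to an elastic modulus (MPa).

    rho = rho_a * hu + rho_b   -- linear HU -> apparent density (g/cm^3)
    E   = e_a * rho ** e_b     -- power-law elasticity-density relation
    All four coefficients are illustrative; in practice the linear map
    comes from a density phantom scanned with the bone, and the power
    law is validated against mechanical tests, as in the study.
    """
    rho = rho_a * hu + rho_b
    return e_a * rho ** e_b
```

Applied voxel-wise and averaged over each element, this yields the heterogeneous material map that makes the FE model subject-specific.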

Abstract:

Osteoporotic spinal fractures are a major concern in ageing Western societies. This study develops a multi-scale finite element (FE) model of the osteoporotic lumbar vertebral body to study the mechanics of vertebral compression fracture at both the apparent (whole vertebral body) and micro-structural (internal trabecular bone core) levels. Model predictions were verified against experimental data and found to provide a reasonably good representation of the mechanics of the osteoporotic vertebral body. This novel modelling methodology will allow detailed investigation of how trabecular bone loss in osteoporosis affects vertebral stiffness and strength in the lumbar spine.

Abstract:

Nanoindentation is a useful technique for probing the mechanical properties of bone, and finite element (FE) modeling of the indentation allows inverse determination of elasto-plastic constitutive properties. However, FE simulations to date have assumed frictionless contact between indenter and bone. The aim of this study was to explore the effect of friction in simulations of bone nanoindentation. Two-dimensional axisymmetric FE simulations were performed using a spheroconical indenter of tip radius 0.6 µm and angle 90°. The coefficient of friction between indenter and bone was varied between 0.0 (frictionless) and 0.3. Isotropic linear elasticity was used in all simulations, with bone elastic modulus E = 13.56 GPa and Poisson's ratio ν = 0.3. Plasticity was incorporated using both Drucker-Prager and von Mises yield surfaces. Friction had a modest effect on the predicted force-indentation curve for both von Mises and Drucker-Prager plasticity, reducing maximum indenter displacement by 10% and 20% respectively as the friction coefficient was increased from zero to 0.3 (at a maximum indenter force of 5 mN). However, friction had a much greater effect on predicted pile-up after indentation, reducing predicted pile-up from 0.27 µm to 0.11 µm with the von Mises model, and from 0.09 µm to 0.02 µm with Drucker-Prager plasticity. We conclude that it is important to include friction in nanoindentation simulations of bone.
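A useful analytical cross-check for such simulations is the frictionless elastic (Hertzian) solution for a spherical tip, valid only at small depths before the cone angle and plasticity matter. The sketch below uses the modulus, Poisson's ratio and tip radius quoted above and assumes a rigid indenter:

```python
import math

# Elastic (Hertzian) baseline for spherical indentation, a sanity check
# on FE force-indentation curves before plasticity and friction enter.
E = 13.56e9      # bone elastic modulus (Pa), from the study
nu = 0.3         # Poisson's ratio, from the study
R = 0.6e-6       # indenter tip radius (m), from the study

E_star = E / (1.0 - nu ** 2)       # reduced modulus (rigid indenter)

def hertz_force(h):
    """Indenter force (N) at elastic indentation depth h (m):
    F = (4/3) * E* * sqrt(R) * h^(3/2)."""
    return (4.0 / 3.0) * E_star * math.sqrt(R) * h ** 1.5
```

The characteristic h^(3/2) scaling (quadrupling the depth multiplies the force by eight) is what an FE model should reproduce in the initial elastic regime.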

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part I of the thesis. For FM signals the approach of time-frequency analysis is considered; this is Part II of the thesis. In Part I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last ten years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The HT can be realized approximately using a finite impulse response (FIR) digital filter, a realization that introduces further complexity in the loop in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered.
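The signal-dependent nature of the time-delay phase shift (versus the HT's fixed 90 degrees) is easy to verify numerically: a quarter-period delay yields a 90-degree shift at the design frequency, but a 180-degree shift at twice that frequency. The sampling rate and frequencies below are illustrative:

```python
import numpy as np

fs = 1000.0                        # sampling rate (Hz), illustrative
t = np.arange(0.0, 1.0, 1.0 / fs)

f = 50.0                           # design frequency (Hz)
delay = 1.0 / (4.0 * f)            # quarter period of f

# At f, the delayed sinusoid is the 90-degree shifted version (sin -> -cos).
x_delayed = np.sin(2 * np.pi * f * (t - delay))

# At 2f, the *same* delay spans half a period: a 180-degree shift instead.
y_delayed = np.sin(2 * np.pi * (2 * f) * (t - delay))
```

This is exactly the trade the TDTL makes: the fixed-structure HT is replaced by a delay whose effective phase shift depends on the input frequency, which the loop analysis must then account for.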
The idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise, and on this basis the behavior of the first and second-order TDTLs in additive Gaussian noise has been analyzed. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are important in both synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals; this is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions more suitable for this purpose has been proposed.
The kernels of this class are time-only (one-dimensional), rather than the time-lag (two-dimensional) kernels of conventional TFDs; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
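In its simplest form, TFD-based IF estimation reduces to locating the dominant energy peak along frequency at each time instant. The sketch below uses a plain spectrogram (a fixed-kernel TFD) as a stand-in for the adaptive T-class distributions discussed above, and recovers the IF law of a single-component linear FM test signal; all parameters are illustrative:

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)

# Linear FM test signal: the IF sweeps from 50 Hz to 150 Hz over 1 s.
f0, rate = 50.0, 100.0
x = np.cos(2 * np.pi * (f0 * t + 0.5 * rate * t ** 2))
true_if = f0 + rate * t

# Peak of a sliding windowed FFT: IF estimate = argmax over frequency.
win = 128
freqs = np.fft.rfftfreq(win, 1.0 / fs)
est_t, est_if = [], []
for start in range(0, len(x) - win, 32):
    seg = x[start:start + win] * np.hanning(win)
    spec = np.abs(np.fft.rfft(seg))
    est_t.append(start + win // 2)        # window centre (samples)
    est_if.append(freqs[spec.argmax()])   # peak frequency (Hz)

err = np.abs(np.array(est_if) - true_if[np.array(est_t)])
```

The peak-tracking error here is limited by the fixed window's frequency resolution; the adaptive, time-only kernels of the T-class aim precisely at sharpening this energy concentration around the IF.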