928 results for SIZE-RAMSEY NUMBER


Relevance:

20.00%

Publisher:

Abstract:

Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or present the human operator with likely matches from a database. A person tracker speeds up subject detection and super-resolution by following moving subjects and cropping a region of interest around each subject's face, reducing both the number and the size of the image frames to be super-resolved. In this paper, experiments demonstrate how the optical flow super-resolution method improves surveillance imagery both for visual inspection and for automatic face recognition on Eigenface and Elastic Bunch Graph Matching systems. The optical flow based method was also benchmarked against the "hallucination" algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical flow method produced fewer artifacts and more visually correct images suitable for human consumption.
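As a point of reference for the interpolation baselines mentioned in the abstract, a minimal bilinear upscaler in pure Python might look like the following. This is an illustrative sketch only; the paper's optical flow method fuses information from multiple tracked frames, which single-image interpolation cannot do.

```python
def bilinear_upscale(img, factor):
    """Upscale a 2D greyscale image (list of lists of floats) by an
    integer factor using bilinear interpolation -- one of the baseline
    methods super-resolution is typically benchmarked against."""
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            # Map the output pixel back to fractional source coordinates.
            y, x = i / factor, j / factor
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out
```

A tracker-cropped face region would be passed through such a routine (or through the optical flow method) before being handed to the recogniser.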

Relevance:

20.00%

Publisher:

Abstract:

Recent measurements of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms for the evolution of combustion aerosols. Specifically, these mechanisms appear inadequate to explain the experimental observations of particle transformation and the evolution of the total number concentration. This led to the development of a new mechanism for the evolution of combustion aerosol nano-particles, based on their thermal fragmentation. A complex and comprehensive pattern of evolution of combustion aerosols, involving particle fragmentation, was then proposed and justified. In that model it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Because of these strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite a significant (possibly dominant) volatile component. Fragmentation occurs when the weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the accepted concept of thermal fragmentation is extended to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles presumably containing several components such as water, oil, volatile compounds, and minerals. It is possible that such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source.
Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems comprising two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is examined for different volumes of the immiscible fluids in a composite liquid particle and for different surface and interfacial tensions, through determination of the Gibbs free energy difference between the coagulated and fragmented states and comparison of this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated one. In particular, thermal fragmentation is much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different; conversely, when the two droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle: the probability of thermal fragmentation also depends strongly on the distance that each of the liquid droplets must travel to reach the fragmented state.
If this distance is larger than the mean free path for the considered droplets in air, the probability of thermal fragmentation should be negligible. It follows that fragmentation of a composite particle in the completely encapsulated state is highly unlikely, because of the larger distance that the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and that this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the completely and partially encapsulated states of composite particles are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it is shown that for some typical aerosol components this transformation could take place on time scales of less than 20 s. The analysis shows that factors influencing surface and interfacial tension play an important role in this transformation process. It is suggested that such transformation may, for example, result in delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source such as a busy road). The results obtained will be important for the further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
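The core comparison described above, the Gibbs free energy difference between coagulated and fragmented states versus the thermal energy kT, can be sketched for the simplest case. The geometry below assumes ideal spheres and complete encapsulation of droplet 1 by droplet 2, and the example surface-tension values in the test are illustrative, not taken from the thesis.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sphere_area(volume):
    """Surface area of a sphere of the given volume (SI units)."""
    r = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 4.0 * math.pi * r * r

def fragmentation_energy(v1, v2, g1, g2, g12):
    """Gibbs free-energy difference (J) between the fragmented state
    (two separate spherical droplets) and the completely encapsulated
    state (droplet 1 inside droplet 2), assuming ideal geometry.
    g1, g2: surface tensions against the surrounding medium (N/m);
    g12: interfacial tension between the two liquids (N/m)."""
    e_frag = g1 * sphere_area(v1) + g2 * sphere_area(v2)
    e_coag = g12 * sphere_area(v1) + g2 * sphere_area(v1 + v2)
    return e_frag - e_coag

def boltzmann_factor(delta_g, temperature=300.0):
    """exp(-dG/kT): a rough measure of how likely thermal motion is
    to carry the system over the free-energy barrier."""
    return math.exp(-delta_g / (K_B * temperature))
```

For nanometre-scale droplets with typical liquid tensions, the barrier for the completely encapsulated state is many thousands of kT, consistent with the conclusion above that fragmentation from complete encapsulation is highly unlikely.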

Relevance:

20.00%

Publisher:

Abstract:

This research quantitatively examines the determinants of board size and its consequences for the performance of large companies in Australia. In line with international and prevailing United States research, the results suggest that there is no significant relationship between board size and subsequent firm performance. In examining whether more complex operations require larger boards, it was found that larger firms, or firms with more lines of business, tended to have more directors. Data analysis from the research supports the proposition that blockholders can affect management practices and that they enhance performance as measured by shareholder return.

Relevance:

20.00%

Publisher:

Abstract:

With growing concern over the use of the car in our urbanized society, a number of lobby groups and professional bodies have emerged promoting a return to public transport, walking and cycling, with the urban village as the key driving land use, as a means of making our cities' transportation systems more sustainable. This research aimed to develop a framework, applicable to the Australian setting, that can facilitate increased passenger patronage of rail-based urban transport systems from adjacent or associated land uses. The framework specifically tested the application of the Park & Ride and Transit Oriented Development (TOD) concepts and their applicability within the cultural, institutional, political and transit operational characteristics of Australian society. The researcher found that, although the application of the TOD concept has been limited to small pockets of town houses and mixed-use developments around stations, the development industry and emerging groups within the community are poised to embrace the concept and bring with it increased rail patronage. The lack of a clear commitment to infrastructure and supporting land uses is a major barrier to the implementation of TODs. The research findings demonstrated significant scope for the size of a TOD to expand to a much greater radius of activity from the public transport interchange than the commonly quoted 400 to 600 metres, thus incorporating many more residents and potential patrons. The provision of Park & Rides, and associated support facilities like Kiss & Rides, has followed worldwide trends of high patronage demand from the middle and outer car-dependent suburbs of our cities. The data collected and analysed by the researcher demonstrated that in many cases Park & Rides should form part of a TOD to ensure ease of access to rail stations by all modes and patron types.
The question remains, however, of how best to plan the incorporation of a Park & Ride within a TOD while still maintaining those features that attract people to TODs and promote them as living entities.

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the Conditional Autoregressive (CAR) class of models), for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analyzing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and health service planning requirements. Data from a large probabilistically-linked database from 1990 to 2004, consisting of fields from two separate registries: the Birth Defect Registry (BDR) and Midwives Data Collection (MDC) were used in the analyses in this thesis. The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix will affect the smoothing properties of the CAR model, and this is the focus of chapter 6. Secondly, I hoped to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model in terms of modeling a sparse outcome, and this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual level data for a hybrid ecological spatial model, and this is done in chapter 8. Finally, I wanted to put together the earlier improvements to the CAR model, and along with demographic projections, provide forecasts for birth defects at the SLA level. Chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices, and showed how smoothing the relative risk estimates according to similarity by an important covariate (i.e. maternal age) helped improve the model’s ability to recover the underlying risk, as compared to the traditional adjacency (specifically the Queen) method of applying weights. 
Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared a few models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared component model to improve on the estimation of sparse counts through borrowing strength across a shared component (e.g. latent risk factor/s) with the referent outcome (caesarean section was used in this example). Using the Deviance Information Criteria (DIC), I showed how the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size. This was done in the context of an ecological regression model that incorporated spatial correlation in the outcomes, as well as accommodating both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed how a simple random sample of 20% of the SLAs, followed by selecting all cases in the SLAs chosen, along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts, to provide 30-year annual estimates of birth defects at the Statistical Local Area (SLA) level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken. 
By the end of the thesis, I will show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed, by specifically formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), incorporating a ZIP component to model excess zeros in outcomes and borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy to sample individual-level data and sample size considerations for rare disease will also be presented. Finally, projections in birth defect categories at the SLA level will be made.
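One plausible way to encode the covariate-similarity weighting described in the first objective is to scale the usual binary Queen-adjacency weights by a Gaussian kernel on the covariate difference. The thesis's exact specification may differ; the `scale` bandwidth below is an assumed tuning parameter.

```python
import math

def covariate_weights(adjacency, covariate, scale=1.0):
    """Build a CAR neighbourhood weight matrix that down-weights
    neighbours with dissimilar covariate values (e.g. mean maternal
    age per area), instead of the usual binary adjacency weights.
    adjacency: n x n 0/1 matrix; covariate: length-n list of values."""
    n = len(adjacency)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i][j]:
                # Gaussian similarity kernel on the covariate difference;
                # identical covariate values recover the binary weight 1.
                d = covariate[i] - covariate[j]
                w[i][j] = math.exp(-(d * d) / (2.0 * scale * scale))
    return w
```

Smoothing with such weights borrows more strength from areas that are similar in the covariate, which is the behaviour credited above with better recovery of the underlying risk.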

Relevance:

20.00%

Publisher:

Abstract:

In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work includes a literature review of current models followed by five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy by publication, and therefore each of the five chapters consists of a peer-reviewed journal article. The thesis concludes with a discussion of what has been achieved during the PhD candidature, the potential applications of this research, and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave), and finally on lattice surfaces in the presence of high heat gradients. We describe a number of new models for multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates, and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations; these processes have been analysed using a number of mathematical methods.
The fragmentation of the network/aggregate is then predicted using combinatorial arguments. While there is some scepticism in the scientific community about the proposed mechanism of thermal fragmentation, we present compelling evidence in this thesis supporting the currently proposed mechanism and show that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of particle manipulation using acoustic standing waves is investigated. We analysed the effect of frequency and particle size on the ability of a particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size; this frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that at large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation, owing to the decreasing size of the boundary layer between acoustic nodes. Our model uses a multiple-time-scale approach to calculate the long-term effects of the standing acoustic field on the particles interacting with the sound. These effects are then combined with the effects of Brownian motion to obtain a complete mathematical description of the particle dynamics in such acoustic fields. Finally, in this thesis we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there have been a handful of successful experiments which demonstrate the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface.
Typically, theoretical simulations of the effect can be rather time consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for simulating particle distributions in the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical calculation of the effective diffusion constant resulting from the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
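The two-step scheme described above — reduce the lattice dynamics to an effective diffusion constant, then solve the diffusion equation with the finite volume method — can be sketched as follows. This is a generic one-dimensional finite volume solver, not the thesis's actual routine; the spatially varying `d_eff` stands in for the Fokker-Planck-derived effective diffusion constant.

```python
def diffuse_1d(p, d_eff, dx, dt, steps):
    """Evolve a 1D probability distribution p under the diffusion
    equation dp/dt = d/dx( D(x) dp/dx ), using a cell-centred explicit
    finite volume scheme with zero-flux boundaries. d_eff[i] is the
    effective diffusion constant in cell i. Requires dt*max(D)/dx**2
    well below 0.5 for stability."""
    n = len(p)
    for _ in range(steps):
        flux = [0.0] * (n + 1)  # flux[i]: face between cells i-1 and i
        for i in range(1, n):
            # Harmonic-mean face diffusivity handles jumps in D cleanly.
            d_face = 2.0 * d_eff[i - 1] * d_eff[i] / (d_eff[i - 1] + d_eff[i])
            flux[i] = -d_face * (p[i] - p[i - 1]) / dx
        # Conservative update: only flux differences change each cell.
        p = [p[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]
    return p
```

Because the update is written in terms of face fluxes, total probability is conserved exactly, which is the property that lets this approach replace ensembles of individual particle trajectories.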

Relevance:

20.00%

Publisher:

Abstract:

This study is the first to investigate the effect of prolonged reading on reading performance and visual functions in students with low vision. The study focuses on one of the most common modes of achieving adequate magnification for reading by students with low vision: their close reading distance (proximal or relative distance magnification). Close reading distances impose high demands on near visual functions such as accommodation and convergence. Previous research on accommodation in children with low vision shows that their accommodative responses are reduced compared to normal vision. In addition, there is an increased lag of accommodation for higher stimulus levels, as may occur at close reading distances. Reduced accommodative responses in low vision and a higher lag of accommodation at close reading distances could together impact the reading performance of students with low vision, especially during prolonged reading tasks. The presence of convergence anomalies could further affect reading performance. Therefore, the aims of the present study were: 1) to investigate the effect of prolonged reading on reading performance in students with low vision; and 2) to investigate the effect of prolonged reading on visual functions in students with low vision. This study was conducted as cross-sectional research on 42 students with low vision and a comparison group of 20 students with normal vision, aged 7 to 20 years. The students with low vision had vision impairments arising from a range of causes and represented a typical group of students with low vision, with no significant developmental delays, attending school in Brisbane, Australia. All participants underwent a battery of clinical tests before and after a prolonged reading task.
An initial reading-specific history and pre-task measurements, including Bailey-Lovie distance and near visual acuities, Pelli-Robson contrast sensitivity, ocular deviations, sensory fusion, ocular motility, near point of accommodation (pull-away method), accuracy of accommodation (Monocular Estimation Method (MEM) retinoscopy) and Near Point of Convergence (NPC) (push-up method), were recorded for all participants. Reading performance measures were Maximum Oral Reading Rate (MORR), Near Text Visual Acuity (NTVA) and acuity reserves, using Bailey-Lovie text charts. Symptoms of visual fatigue were assessed using the Convergence Insufficiency Symptom Survey (CISS) for all participants. Pre-task measurements of reading performance, accuracy of accommodation and NPC were compared with post-task measurements to test for any effects of prolonged reading. The prolonged reading task involved reading a storybook silently for at least 30 minutes, controlled for print size, contrast, difficulty level and content of the reading material. Silent Reading Rate (SRR) was recorded every 2 minutes during prolonged reading. Symptom scores and visual fatigue scores were also obtained for all participants. A visual fatigue analogue scale (VAS) was used to assess visual fatigue during the task, once at the beginning, once at the middle and once at the end. In addition to the subjective assessments of visual fatigue, tonic accommodation was monitored using a photorefractor (PlusoptiX CR03™) every 6 minutes during the task as an objective assessment of visual fatigue. Reading measures were taken at the habitual reading distance for students with low vision and at 25 cm for students with normal vision. The initial history showed that the students with low vision read for significantly shorter periods at home compared to the students with normal vision.
The working distances of participants with low vision ranged from 3 to 25 cm, and half of them were not using any optical devices for magnification. Nearly half of the participants with low vision were able to resolve 8-point print (1M) at 25 cm. Half of the participants in the low vision group had ocular deviations and suppression at near. Reading rates were significantly reduced in students with low vision compared to those of students with normal vision. In addition, significantly more participants in the low vision group could not sustain the 30-minute task compared to the normal vision group. However, there were no significant changes in reading rates during or following prolonged reading in either the low vision or normal vision groups. Individual changes in reading rates were independent of baseline reading rates, indicating that changes in reading rates during prolonged reading cannot be predicted from a typical clinical assessment of reading using brief reading tasks. Contrary to previous reports, the silent reading rates of the students with low vision were significantly lower than their oral reading rates, although oral and silent reading were assessed using different methods. Although visual acuity, contrast sensitivity, near point of convergence and accuracy of accommodation were significantly poorer for the low vision group compared to the normal vision group, there were no significant changes in any of these visual functions following prolonged reading in either group. Interestingly, a few students with low vision (n = 10) were found to be reading at a distance closer than their near point of accommodation, suggesting a decreased sensitivity to blur.
Further evaluation revealed that the equivalent intrinsic refractive errors (an estimate of the spherical dioptric defocus that would be expected to yield a patient's visual acuity in normal subjects) were significantly larger for the low vision group compared to the normal vision group. As expected, accommodative responses were significantly reduced for the low vision group compared to expected norms, which is consistent with their close reading distances, reduced visual acuity and contrast sensitivity. For those in the low vision group whose accommodative error exceeded their equivalent intrinsic refractive error, a significant decrease in MORR was found following prolonged reading. Silent reading rates, however, were not significantly affected by accommodative errors in the present study. Suppression also had a significant impact on changes in reading rates during prolonged reading: participants who did not have suppression at near showed significant decreases in silent reading rates during and following prolonged reading. This impact of binocular vision at near on prolonged reading was possibly due to the high demands on convergence. The significant predictors of MORR in the low vision group were age, NTVA, reading interest and reading comprehension, together accounting for 61.7% of the variance in MORR. SRR was not significantly influenced by any factor except the duration of the reading task sustained; participants with higher reading rates were able to sustain a longer reading duration. In students with normal vision, age was the only predictor of MORR. Participants with low vision also reported significantly greater visual fatigue compared to the normal vision group. Measures of tonic accommodation, however, were little influenced by visual fatigue in the present study. Visual fatigue analogue scores were found to be significantly associated with reading rates in students with both low vision and normal vision.
However, the patterns of association between visual fatigue and reading rates were different for SRR and MORR. The participants with low vision with higher symptom scores had lower SRRs and participants with higher visual fatigue had lower MORRs. As hypothesized, visual functions such as accuracy of accommodation and convergence did have an impact on prolonged reading in students with low vision, for students whose accommodative errors were greater than their equivalent intrinsic refractive errors, and for those who did not suppress one eye. Those students with low vision who have accommodative errors higher than their equivalent intrinsic refractive errors might significantly benefit from reading glasses. Similarly, considering prisms or occlusion for those without suppression might reduce the convergence demands in these students while using their close reading distances. The impact of these prescriptions on reading rates, reading interest and visual fatigue is an area of promising future research. Most importantly, it is evident from the present study that a combination of factors such as accommodative errors, near point of convergence and suppression should be considered when prescribing reading devices for students with low vision. Considering these factors would also assist rehabilitation specialists in identifying those students who are likely to experience difficulty in prolonged reading, which is otherwise not reflected during typical clinical reading assessments.

Relevance:

20.00%

Publisher:

Abstract:

European American (EA) women report greater body dissatisfaction and less dietary control than do African American (AA) women. This study investigated whether ethnic differences in dieting history contributed to differences in body dissatisfaction and dietary control, or to differential changes that may occur during weight loss and regain. Eighty-nine EA and AA women underwent dual-energy X-ray absorptiometry to measure body composition and completed questionnaires to assess body dissatisfaction and dietary control before, after, and one year following a controlled weight-loss intervention. While EA women reported a more extensive dieting history than AA women, this difference did not contribute to ethnic differences in body dissatisfaction and perceived dietary control. During weight loss, body satisfaction improved more for AA women, and during weight regain, dietary self-efficacy worsened to a greater degree for EA women. Ethnic differences in dieting history did not contribute significantly to these differential changes. Although ethnic differences in body image and dietary control are evident prior to weight loss, and some change differentially by ethnic group during weight loss and regain, differences in dieting history do not contribute significantly to ethnic differences in body image and dietary control.

Relevance:

20.00%

Publisher:

Abstract:

Automatic recognition of people is an active field of research with important forensic and security applications. In these applications it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier.
Further, the domains for the modelling of session variation were contrasted and found to share a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques, due to the similarities in how they achieve their objectives. The second theme saw the proposal of a novel model for session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further performance benefits over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background, by exploiting the SVM training process. The selection was performed on a per-observation basis to overcome the shortcomings of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, while being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternative techniques for speaker verification.
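The GMM mean supervector mentioned above is typically built by relevance-MAP adaptation of a background model's component means to a single utterance, then stacking the adapted means into one long vector for the SVM. A toy version might look like the following; the unit-variance Gaussians and means-only adaptation are simplifications of mine, not the thesis's formulation.

```python
import math

def map_adapt_means(features, means, weights, relevance=16.0):
    """Toy relevance-MAP adaptation of GMM component means. The
    adapted means, stacked end to end, form the 'GMM mean supervector'
    used as the SVM input in hybrid GMM-SVM systems.
    features: list of feature vectors; means: list of component means;
    weights: component priors."""
    K, D = len(means), len(means[0])
    n_k = [0.0] * K                      # soft counts per component
    f_k = [[0.0] * D for _ in range(K)]  # first-order statistics
    for x in features:
        # Component posteriors under unit-variance Gaussians.
        logs = []
        for k in range(K):
            sq = sum((xi - mi) ** 2 for xi, mi in zip(x, means[k]))
            logs.append(math.log(weights[k]) - 0.5 * sq)
        top = max(logs)
        post = [math.exp(l - top) for l in logs]
        total = sum(post)
        for k in range(K):
            p = post[k] / total
            n_k[k] += p
            for d in range(D):
                f_k[k][d] += p * x[d]
    # Blend data statistics with the prior means (relevance factor):
    # well-observed components move toward the data, others stay put.
    supervector = []
    for k in range(K):
        alpha = n_k[k] / (n_k[k] + relevance)
        for d in range(D):
            ml = f_k[k][d] / n_k[k] if n_k[k] > 0 else means[k][d]
            supervector.append(alpha * ml + (1.0 - alpha) * means[k][d])
    return supervector
```

Each utterance thus maps to a fixed-length vector regardless of its duration, which is what makes the discriminative SVM applicable on top of the generative GMM.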

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Lifecycle funds offered by retirement plan providers allocate aggressively to risky asset classes when the employee participants are young, gradually switching to more conservative asset classes as they grow older and approach retirement. This approach focuses on maximizing growth of the accumulation fund in the initial years and preserving its value in the later years. The authors simulate terminal wealth outcomes based on conventional lifecycle asset allocation rules as well as on contrarian strategies that reverse the direction of asset switching. The evidence suggests that the growth in portfolio size over time significantly impacts the asset allocation decision. Due to the portfolio size effect observed by the authors, the terminal value of accumulation in retirement accounts is influenced more by the asset allocation strategy adopted in later years than by that adopted in early years. By mechanistically switching to conservative assets in the later years of a plan, lifecycle strategies sacrifice significant growth opportunity and prove counterproductive to the participant's wealth accumulation objective. The authors conclude that this sacrifice does not seem to be compensated adequately in terms of reducing the risk of potentially adverse outcomes.
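The portfolio size effect described above is easy to reproduce with a small Monte Carlo sketch. All return parameters, the contribution amount, and the single mid-plan switching point below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(switch_to_growth_late, n_sims=10_000, years=40, contrib=10_000.0):
    """Terminal wealth of an annual-contribution plan under one switching rule.

    The conventional lifecycle rule holds the risky asset first and the
    conservative asset later; the contrarian rule reverses this order.
    """
    growth = rng.normal(0.08, 0.17, size=(n_sims, years))      # risky asset
    defensive = rng.normal(0.04, 0.05, size=(n_sims, years))   # conservative
    half = years // 2
    if switch_to_growth_late:   # contrarian: conservative first, risky later
        returns = np.hstack([defensive[:, :half], growth[:, half:]])
    else:                       # lifecycle: risky first, conservative later
        returns = np.hstack([growth[:, :half], defensive[:, half:]])
    wealth = np.zeros(n_sims)
    for t in range(years):
        wealth = (wealth + contrib) * (1.0 + returns[:, t])
    return wealth

lifecycle = simulate(switch_to_growth_late=False)
contrarian = simulate(switch_to_growth_late=True)

# The portfolio size effect: returns earned on the larger late-career
# balance dominate, so the contrarian rule yields a higher mean outcome.
print(f"lifecycle  mean terminal wealth: {lifecycle.mean():,.0f}")
print(f"contrarian mean terminal wealth: {contrarian.mean():,.0f}")
```

Because the late-career balance dwarfs the early one, the asset class held in the final decades sets most of the terminal outcome, which is exactly the effect the authors report.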

Relevância:

20.00% 20.00%

Publicador:

Resumo:

RatSLAM is a vision-based SLAM system based on extended models of the rodent hippocampus. RatSLAM creates environment representations that can be processed by the experience mapping algorithm to produce maps suitable for goal recall. The experience mapping algorithm also allows RatSLAM to map environments many times larger than could be achieved with a one-to-one correspondence between the map and the environment, by reusing the RatSLAM maps to represent multiple sections of the environment. This paper describes experiments investigating the effects of the environment-representation size ratio and visual ambiguity on mapping and goal navigation performance. The experiments demonstrate that system performance is only weakly dependent on either parameter in isolation, but strongly dependent on their joint values.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The formation of hypertrophic scars is a frequent medical outcome of wound repair and often requires further therapy with treatments such as Silicone Gel Sheets (SGS) or apoptosis-inducing agents, including bleomycin. Although widely used, knowledge regarding SGS and their mode of action is limited. Preliminary research has shown that small amounts of amphiphilic silicone present in SGS have the ability to move into skin during treatment. We demonstrate herein that a commercially available analogue of these amphiphilic siloxane species, the rake copolymer GP226, decreases collagen synthesis upon exposure to cultures of fibroblasts derived from hypertrophic scars (HSF). By size exclusion chromatography, GP226 was found to be a mixture of siloxane species containing five fractions of different molecular weights. Studies of collagen production, cell viability and proliferation revealed that a low molecular weight fraction (fraction IV) was the most active, reducing the number of viable cells present following treatment and thereby reducing collagen production as a result. Upon exposure of fraction IV to human keratinocytes, viability and proliferation were also significantly affected. HSF undergoing apoptosis after application of fraction IV were also detected via real-time microscopy and the TUNEL assay. Taken together, these data suggest that these amphiphilic siloxanes could be potential non-invasive substitutes for the apoptosis-inducing chemical agents currently used as scar treatments.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The Queensland University of Technology (QUT) allows the presentation of theses for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of ten published/submitted papers and book chapters, of which nine have been published and one is under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of investigating multilevel topologies for high quality and high power applications, with specific emphasis on renewable energy systems. The rapid evolution of renewable energy within the last several years has resulted in the design of efficient power converters suitable for medium and high-power applications such as wind turbine and photovoltaic (PV) systems. Today, the industrial trend is moving away from heavy and bulky passive components to power converter systems that use more and more semiconductor elements controlled by powerful processor systems. However, it is hard to connect traditional converters to high- and medium-voltage grids, as a single power switch cannot withstand such high voltages. For these reasons, a new family of multilevel inverters has appeared as a solution for working with higher voltage levels. Besides this important feature, multilevel converters have the capability to generate stepped waveforms. Consequently, in comparison with conventional two-level inverters, they present lower switching losses, lower voltage stress across loads, lower electromagnetic interference (EMI) and higher quality output waveforms. These properties enable the connection of renewable energy sources directly to the grid without using expensive, bulky, heavy line transformers. Additionally, they minimize the size of the passive filter and increase the durability of electrical devices.
However, multilevel converters have only been utilised in very particular applications, mainly due to the structural limitations, high cost and complexity of the multilevel converter system and control. New developments in the fields of power semiconductor switches and processors will favor the multilevel converters for many other fields of application. The main application for the multilevel converter presented in this work is the front-end power converter in renewable energy systems. Diode-clamped and cascade converters are the most common type of multilevel converters widely used in different renewable energy system applications. However, some drawbacks – such as capacitor voltage imbalance, number of components, and complexity of the control system – still exist, and these are investigated in the framework of this thesis. Various simulations using software simulation tools are undertaken and are used to study different cases. The feasibility of the developments is underlined with a series of experimental results. This thesis is divided into two main sections. The first section focuses on solving the capacitor voltage imbalance for a wide range of applications, and on decreasing the complexity of the control strategy on the inverter side. The idea of using sharing switches at the output structure of the DC-DC front-end converters is proposed to balance the series DC link capacitors. A new family of multioutput DC-DC converters is proposed for renewable energy systems connected to the DC link voltage of diode-clamped converters. The main objective of this type of converter is the sharing of the total output voltage into several series voltage levels using sharing switches. This solves the problems associated with capacitor voltage imbalance in diode-clamped multilevel converters. 
These converters adjust the variable and unregulated DC voltage generated by renewable energy systems (such as PV) to the desired series multiple voltage levels at the inverter DC side. A multi-output boost (MOB) converter, with one inductor and series output voltages, is presented. This converter is suitable for renewable energy systems based on diode-clamped converters because it boosts the low output voltage and provides the series capacitors at the output side. A simple control strategy using cross voltage control with an internal current loop is presented to obtain the desired voltage levels at the output. The proposed topology and control strategy are validated by simulation and hardware results. Using the idea of voltage sharing switches, the circuit structures of different topologies of multi-output DC-DC converters – or multi-output voltage sharing (MOVS) converters – have been proposed. In order to verify the feasibility of these topologies and their applications, steady state and dynamic analyses have been carried out. Simulation and experiments using the proposed control strategy have verified the mathematical analysis. The second part of this thesis addresses the second problem of multilevel converters: the need to improve their quality with minimum cost and complexity. This is related to utilising asymmetrical multilevel topologies instead of conventional multilevel converters; this can increase the quality of output waveforms with a minimum number of components. It also allows for a reduction in the cost and complexity of systems while maintaining the same output quality, or for an increase in the quality while maintaining the same cost and complexity. Therefore, the asymmetrical configuration for two common types of multilevel converters – diode-clamped and cascade converters – is investigated.
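As a rough numerical sketch of the front-end role described above: an ideal boost stage steps the unregulated PV voltage up to the DC-link voltage, and the sharing switches divide that link into balanced series capacitor voltages. The relation Vout = Vin / (1 - D) is the textbook continuous-conduction boost equation, and the specific voltages below are hypothetical, not design values from the thesis:

```python
def boost_duty(v_in: float, v_out: float) -> float:
    """Duty cycle of an ideal boost converter in continuous conduction mode."""
    if not 0.0 < v_in < v_out:
        raise ValueError("boost requires 0 < v_in < v_out")
    return 1.0 - v_in / v_out

def series_link_voltages(v_out: float, n_levels: int) -> list:
    """Balanced series DC-link capacitor voltages set by the sharing switches."""
    return [v_out / n_levels] * n_levels

v_pv, v_link = 100.0, 400.0      # unregulated PV voltage -> regulated DC link
d = boost_duty(v_pv, v_link)
caps = series_link_voltages(v_link, n_levels=2)
print(f"duty cycle D = {d:.2f}, capacitor voltages = {caps}")
```

Keeping the series capacitor voltages equal by construction is what removes the voltage-imbalance burden from the diode-clamped inverter's own control.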
Also, as well as addressing the maximisation of the output voltage resolution, some technical issues – such as adjacent switching vectors – should be taken into account in asymmetrical multilevel configurations to keep the total harmonic distortion (THD) and switching losses to a minimum. Thus, the asymmetrical diode-clamped converter is proposed. An appropriate asymmetrical DC link arrangement is presented for four-level diode-clamped converters by keeping adjacent switching vectors. In this way, five-level inverter performance is achieved for the same level of complexity as the four-level inverter. Dealing with the capacitor voltage imbalance problem in asymmetrical diode-clamped converters has inspired the proposal of two different DC-DC topologies with a suitable control strategy. A Triple-Output Boost (TOB) converter and a Boost 3-Output Voltage Sharing (Boost-3OVS) converter connected to the four-level diode-clamped converter are proposed to arrange the proposed asymmetrical DC link for high modulation indices and unity power factor. Cascade converters have shown their abilities and strengths in medium and high power applications. Using asymmetrical H-bridge inverters, more voltage levels can be generated in the output voltage with the same number of components as the symmetrical converters. The concept of cascading multilevel H-bridge cells is used to propose a fifteen-level cascade inverter using a four-level symmetrical H-bridge diode-clamped converter, cascaded with classical two-level H-bridge inverters. A DC voltage ratio of cells is presented to obtain the maximum number of voltage levels in the output voltage, with adjacent switching vectors between all possible voltage levels; this can minimize the switching losses. This structure can save five isolated DC sources and twelve switches in comparison to conventional cascade converters with series two-level H-bridge inverters.
To increase the quality of the presented hybrid topology with a minimum number of components, a new cascade inverter is verified by cascading an asymmetrical four-level H-bridge diode-clamped inverter. An inverter with nineteen-level performance was achieved. This synthesizes more voltage levels with lower voltage and current THD than a symmetrical diode-clamped inverter with the same configuration and equivalent number of power components. Two different predictive current control methods for the selection of switching states are proposed to minimise either the losses or the voltage THD of hybrid converters. High voltage spikes at switching instants in the experimental results, together with investigation of the diode-clamped inverter structure, raised another problem associated with high-level, high-voltage multilevel converters. Power switching components with fast switching, combined with hard-switched converters, produce high di/dt during turn-off. Thus, the stray inductance of interconnections becomes an important issue, raising overvoltage and EMI issues correlated with the number of components. A planar busbar is a good candidate to reduce interconnection inductance in high power inverters compared with cables. The effect of different transient current loops on the physical busbar structure of high-voltage, high-level diode-clamped converters is highlighted. Design considerations for a proper planar busbar are also presented to optimise the overall design of diode-clamped converters.
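The link between level count and output quality that runs through this abstract can be illustrated with an idealised staircase waveform. The sketch below quantises one sine period to a given number of levels and measures the ratio of non-fundamental to fundamental spectral content; it ignores PWM, switching-angle optimisation and real device behaviour, so the numbers are only indicative of the trend:

```python
import numpy as np

def staircase(levels: int, n: int = 4096) -> np.ndarray:
    """One sine period quantised to the nearest of `levels` equal voltage steps."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ref = np.sin(t)
    vals = np.linspace(-1.0, 1.0, levels)           # available output levels
    idx = np.argmin(np.abs(ref[:, None] - vals[None, :]), axis=1)
    return vals[idx]

def thd(signal: np.ndarray) -> float:
    """Ratio of all non-fundamental spectral content to the fundamental."""
    spec = np.abs(np.fft.rfft(signal))
    fundamental = spec[1]        # one full period in the window -> bin 1
    return np.sqrt(np.sum(spec[2:] ** 2)) / fundamental

# More levels -> smaller steps -> less distortion relative to the fundamental.
for levels in (2, 5, 15, 19):
    print(f"{levels:2d} levels: distortion ratio = {thd(staircase(levels)):.3f}")
```

A two-level output is simply a square wave, while the fifteen- and nineteen-level staircases track the sine closely, which is why the cascade topologies above pursue more levels per component.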

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Following Youngjohn, Lees-Haley, and Binder's (1999) comment on Johnson and Lesniak-Karpiak's (1997) study that warnings lead to more subtle malingering, researchers have sought to better understand warning effects. However, such studies have been largely atheoretical and may have confounded warning and coaching. This study examined the effect on malingering of a warning that was based on criminological-sociological concepts derived from the rational choice model of deterrence theory. A total of 78 participants were randomly assigned to a control group, an unwarned simulator group, or one of two warned simulator groups. The warning groups comprised low- and high-level conditions depending on warning intensity. Simulator participants received no coaching about how to fake tests. Outcome variables were scores derived from the Test of Memory Malingering and Wechsler Memory Scale-III. When the rate of malingering was compared across the four groups, a high-level warning effect was found such that warned participants were significantly less likely to exaggerate than unwarned simulators. In an exploratory follow-up analysis, the warned groups were divided into those who reported malingering and those who did not report malingering, and the performance of these groups was compared to that of unwarned simulators and controls. Using this approach, results showed that participants who were deterred from malingering by warning performed no worse than controls. However, on a small number of tests, self-reported malingerers in the low-level warning group appeared less impaired than unwarned simulators. This pattern was not observed in the high-level warning condition. Although cautious interpretation of findings is necessitated by the exploratory nature of some analyses, overall results suggest that using a carefully designed warning may be useful for reducing the rate of malingering. 
The combination of some noteworthy effect sizes, despite low power and the small size of some groups, suggests that further investigation of warning effects is needed to determine their influence more fully.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

This article presents the design and implementation of a trusted sensor node that provides Internet-grade security at low system cost. We describe trustedFleck, which uses a commodity Trusted Platform Module (TPM) chip to extend the capabilities of a standard wireless sensor node to provide security services such as message integrity, confidentiality, authenticity, and system integrity based on RSA public-key and XTEA-based symmetric-key cryptography. In addition, trustedFleck provides secure storage of private keys and platform configuration registers (PCRs) to store system configurations and detect code tampering. We analyze system performance using metrics that are important for wireless sensor network (WSN) applications, such as computation time, memory size, energy consumption and cost. Our results show that trustedFleck significantly outperforms previous approaches (e.g., TinyECC) in terms of these metrics while providing stronger security levels. Finally, we describe a number of examples, built on trustedFleck, of symmetric key management, secure RPC, secure software update, and remote attestation.
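XTEA, the symmetric cipher mentioned above, is small enough to sketch in full. The following is the textbook XTEA block function (64-bit block as two 32-bit words, 128-bit key as four words, 32 rounds), not the trustedFleck implementation; key storage, modes of operation and TPM integration are omitted:

```python
DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF  # keep arithmetic in 32 bits

def xtea_encrypt(block, key, rounds=32):
    """Encrypt one 64-bit block (v0, v1) with a 128-bit key (k0..k3)."""
    v0, v1 = block
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(block, key, rounds=32):
    """Invert xtea_encrypt by running the round schedule backwards."""
    v0, v1 = block
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1

key = (0x00010203, 0x04050607, 0x08090A0B, 0x0C0D0E0F)   # hypothetical key
plain = (0x12345678, 0x9ABCDEF0)
cipher = xtea_encrypt(plain, key)
print(hex(cipher[0]), hex(cipher[1]))
```

XTEA's appeal on sensor nodes is exactly what this sketch shows: the whole cipher is a few shifts, XORs and additions, with no tables or large state, so it fits the computation-time and memory budgets the article measures.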