Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with applications spanning a vast practical domain such as security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and optimal camera configuration determination. Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems that are discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be developed to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path against locally generated descriptions using a probability maximisation process. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types etc. of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
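As an illustration of the ground-plane mapping described above, the following Python sketch estimates a camera's image-to-world homography from a handful of robot observations and then uses it to localise a new image point. The correspondences and values are hypothetical, and this is only one plausible way to compute such a mapping, not the dissertation's actual calibration procedure.

import numpy as np

def estimate_homography(img_pts, world_pts):
    """DLT estimate of H so that world ~ H @ [u, v, 1]."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def image_to_world(H, u, v):
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Hypothetical correspondences: robot seen at image pixels (u, v) while
# broadcasting its global ground-plane position (x, y) in metres.
img_pts = [(102, 310), (250, 305), (400, 330), (180, 420), (350, 440)]
world_pts = [(1.0, 4.0), (2.5, 4.1), (4.0, 3.8), (1.8, 2.5), (3.6, 2.2)]

H = estimate_homography(img_pts, world_pts)
print("object at pixel (300, 380) ->", image_to_world(H, 300, 380))

In a deployment, each camera would accumulate such correspondences as the robot passes through its field of view and refine its homography accordingly.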
Abstract:
Near work may play an important role in the development of myopia in the younger population. The prevalence of myopia has also been found to be higher in occupations that involve substantial near work tasks, for example in microscopists and textile workers. When near work is performed, it typically involves accommodation, convergence and downward gaze. A number of previous studies have examined the effects of accommodation and convergence on changes in the optics and biometrics of the eye in primary gaze. However, little is known about the influence of accommodation on the eye in downward gaze. This thesis is primarily concerned with investigating the changes in the eye during near work in downward gaze under natural viewing conditions. To measure wavefront aberrations in downward gaze under natural viewing conditions, we modified a commercial Shack-Hartmann wavefront sensor by adding a relay lens system to allow on-axis ocular aberration measurements in primary gaze and downward gaze, with binocular fixation. Measurements with the modified wavefront sensor in primary and downward gaze were validated against a conventional aberrometer using both a model eye and 9 human subjects. We then conducted an experiment to investigate changes in ocular aberrations associated with accommodation in downward gaze over 10 mins in groups of both myopes (n = 14) and emmetropes (n = 12) using the modified Shack-Hartmann wavefront sensor. During the distance accommodation task, small but significant changes in refractive power (myopic shift) and higher order aberrations were observed in downward gaze compared to primary gaze. Accommodation caused greater changes in higher order aberrations (in particular coma and spherical aberration) in downward gaze than primary gaze, and there was evidence that the changes in certain aberrations with accommodation over time were different in downward gaze compared to primary gaze. There were no obvious systematic differences in higher order aberrations between refractive error groups during accommodation or downward gaze for fixed pupils. However, myopes exhibited a significantly greater change in higher order aberrations (in particular spherical aberration) than emmetropes for natural pupils after 10 mins of a near task (5 D accommodation) in downward gaze. These findings indicated that ocular aberrations change from primary to downward gaze, particularly with accommodation. To understand the mechanism underlying these changes in greater detail, we then extended this work to examine the characteristics of the corneal optics, internal optics, anterior biometrics and axial length of the eye during a near task, in downward gaze, over 10 mins. Twenty young adult subjects (10 emmetropes and 10 myopes) participated in this study. To measure corneal topography and ocular biometrics in downward gaze, a rotating Scheimpflug camera and an optical biometer were inclined on a custom built, height and tilt adjustable table. We found that both corneal optics and internal optics change with downward gaze, resulting in a myopic shift (~0.10 D) in the spherical power of the eye.
The changes in corneal optics appear to be due to eyelid pressure on the anterior surface of the cornea, whereas the changes in the internal optics (an increase in axial length and a decrease in anterior chamber depth) may be associated with movement of the crystalline lens, under the action of gravity, and the influence of altered biomechanical forces from the extraocular muscles on the globe with downward gaze. Changes in axial length with accommodation were significantly greater in downward gaze than primary gaze (p < 0.05), indicating an increased effect of the mechanical forces from the ciliary muscle and extraocular muscles. A subsequent study was conducted to investigate the changes in anterior biometrics, axial length and choroidal thickness in nine cardinal gaze directions under the actions of the extraocular muscles. Ocular biometry measurements were obtained from 30 young adults (10 emmetropes, 10 low myopes and 10 moderate myopes) through a rotating prism with 15° deviation, along the foveal axis, using a non-contact optical biometer in each of nine different cardinal directions of gaze, over 5 mins. There was a significant influence of gaze angle and time on axial length (both p < 0.001), with the greatest axial elongation (+18 ± 8 μm) occurring with infero-nasal gaze (p < 0.001) and a slight decrease in axial length in superior gaze (−12 ± 17 μm) compared with primary gaze (p < 0.001). There was a significant correlation between refractive error (spherical equivalent refraction) and the mean change in axial length in the infero-nasal gaze direction (Pearson's R² = 0.71, p < 0.001). To further investigate the relative effect of gravity and extraocular muscle force on the axial length, we measured axial length in 15° and 25° downward gaze with the biometer inclined on a tilting table that allowed gaze shifts to occur with either full head turn but no eye turn (reflecting the effect of gravity), or full eye turn with no head turn (reflecting the effect of extraocular muscle forces). We observed a significant axial elongation in 15° and 25° downward gaze in the full eye turn condition. However, axial length did not change significantly in downward gaze over 5 mins (p > 0.05) in the full head turn condition. The elongation of the axial length in downward gaze appears to be due to the influence of the extraocular muscles, since the effect was not present when head turn was used instead of eye turn. The findings of these experiments collectively show the dynamic characteristics of the optics and biometrics of the eye in downward gaze during a near task, over time. There were small but significant differences between myopic and emmetropic eyes in both the optical and biomechanical changes associated with shifts of gaze direction. These differences between myopes and emmetropes could arise as a consequence of excessive eye growth associated with myopia. However, the potentially additive effects of repeated or long lasting near work activities employing infero-nasal gaze could also act to promote elongation of the eye due to optical and/or biomechanical stimuli.
Abstract:
Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent: it is therefore difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived using base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated by applying it to text-dependent speaker verification using Hidden Markov Model based digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validation of the derived expressions for error estimates is evaluated on test data. The performance of the sequential method is further demonstrated to depend on the order of the combination of digits (instances) and the nature of repetitive attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criteria. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. The tuning of the parameters - the number of instances and samples - serves both the security and user convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
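To make the role of the two tuning parameters concrete, the sketch below composes overall false-accept and false-reject rates from hypothetical base classifier rates under the independence assumption, for a scheme in which all N instances must accept and any one of M attempts suffices at each instance. It illustrates the kind of analytical expressions referred to above; the dissertation's exact expressions, and their correlation corrections, are not reproduced here.

def fused_error_rates(far, frr, n_instances, m_samples):
    # Within one instance, an impostor is accepted if any of M attempts
    # is accepted; a client is rejected only if all M attempts fail.
    far_stage = 1.0 - (1.0 - far) ** m_samples
    frr_stage = frr ** m_samples
    # Across N instances, overall acceptance requires every stage to accept.
    far_total = far_stage ** n_instances
    frr_total = 1.0 - (1.0 - frr_stage) ** n_instances
    return far_total, frr_total

# Hypothetical base rates: 5% false accepts, 8% false rejects per digit.
for n, m in [(1, 1), (3, 1), (3, 2), (5, 2)]:
    far, frr = fused_error_rates(0.05, 0.08, n, m)
    print(f"N={n}, M={m}: FAR={far:.4%}, FRR={frr:.4%}")

Increasing the number of instances drives false accepts down at the cost of more false rejects, while increasing the number of attempts per instance does the opposite, which is the controlled trade-off described above.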
Abstract:
Stream ciphers are common cryptographic algorithms used to protect the confidentiality of frame-based communications like mobile phone conversations and Internet traffic. Stream ciphers are ideal cryptographic algorithms to encrypt these types of traffic as they have the potential to encrypt them quickly and securely, and have low error propagation. The main objective of this thesis is to determine whether structural features of keystream generators affect the security provided by stream ciphers. These structural features pertain to the state-update and output functions used in keystream generators. Using linear sequences as keystream to encrypt messages is known to be insecure. Modern keystream generators use nonlinear sequences as keystream. The nonlinearity can be introduced through a keystream generator's state-update function, output function, or both. The first contribution of this thesis relates to nonlinear sequences produced by the well-known Trivium stream cipher. Trivium is one of the stream ciphers selected in the final portfolio resulting from a multi-year European project, the ECRYPT project. Trivium's structural simplicity makes it a popular cipher to cryptanalyse, but to date, there are no attacks in the public literature which are faster than exhaustive keysearch. Algebraic analyses are performed on the Trivium stream cipher, which uses a nonlinear state-update and linear output function to produce keystream. Two algebraic investigations are performed: an examination of the sliding property in the initialisation process, and algebraic analyses of Trivium-like stream ciphers using a combination of the algebraic techniques previously applied separately by Berbain et al. and Raddum. For certain iterations of Trivium's state-update function, we examine the sets of slid pairs, looking particularly to form chains of slid pairs. No chains exist for a small number of iterations. This has implications for the period of keystreams produced by Trivium. Secondly, using our combination of the methods of Berbain et al. and Raddum, we analysed Trivium-like ciphers and improved on previous analysis with regard to forming systems of equations for these ciphers. Using these new systems of equations, we were able to successfully recover the initial state of Bivium-A. The attack complexities for Bivium-B and Trivium were, however, worse than exhaustive keysearch. We also show that the selection of stages which are used as input to the output function and the size of the registers which are used in the construction of the system of equations affect the success of the attack. The second contribution of this thesis is the examination of state convergence. State convergence is an undesirable characteristic in keystream generators for stream ciphers, as it implies that the effective session key size of the stream cipher is smaller than the designers intended. We identify methods which can be used to detect state convergence. As a case study, the Mixer stream cipher, which uses nonlinear state-update and output functions to produce keystream, is analysed. Mixer is found to suffer from state convergence as the state-update function used in its initialisation process is not one-to-one. A discussion of several other stream ciphers which are known to suffer from state convergence is given. From our analysis of these stream ciphers, three mechanisms which can cause state convergence are identified. The effect state convergence can have on stream cipher cryptanalysis is examined.
We show that state convergence can have a positive effect if the goal of the attacker is to recover the initial state of the keystream generator. The third contribution of this thesis is the examination of the distributions of bit patterns in the sequences produced by nonlinear filter generators (NLFGs) and linearly filtered nonlinear feedback shift registers. We show that the selection of stages used as input to a keystream generator's output function can affect the distribution of bit patterns in sequences produced by these keystream generators, and that the effect differs for nonlinear filter generators and linearly filtered nonlinear feedback shift registers. In the case of NLFGs, the keystream sequences produced when the output functions take inputs from consecutive register stages are less uniform than sequences produced by NLFGs whose output functions take inputs from unevenly spaced register stages. The opposite is true for keystream sequences produced by linearly filtered nonlinear feedback shift registers.
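The following toy Python sketch illustrates the kind of comparison made in the third contribution: a small nonlinear filter generator is run with two different choices of filter taps (consecutive versus unevenly spaced stages) and the resulting 3-bit output patterns are tabulated. The register size, feedback taps and filter function are toy choices made for illustration only, not those of any cipher analysed in the thesis.

from collections import Counter

def lfsr_states(init_state, feedback_taps, length):
    """Yield successive register states of a Fibonacci LFSR."""
    state = list(init_state)
    for _ in range(length):
        yield state[:]
        fb = 0
        for t in feedback_taps:
            fb ^= state[t]
        state = [fb] + state[:-1]

def nlfg_keystream(filter_taps, length=4096, n=16):
    init = [1] + [0] * (n - 1)                # non-zero initial state
    feedback = [10, 12, 13, 15]               # taps of a maximal-length 16-bit LFSR
    ks = []
    for s in lfsr_states(init, feedback, length):
        a, b, c, d = (s[t] for t in filter_taps)
        ks.append(a ^ (b & c) ^ (c & d) ^ d)  # toy nonlinear filter function
    return ks

def pattern_counts(bits, width=3):
    return Counter(tuple(bits[i:i + width]) for i in range(len(bits) - width))

consecutive = pattern_counts(nlfg_keystream((4, 5, 6, 7)))
spread = pattern_counts(nlfg_keystream((1, 5, 10, 14)))
print("consecutive taps:", sorted(consecutive.items()))
print("spread taps:     ", sorted(spread.items()))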
Abstract:
Speaker attribution is the task of annotating a spoken audio archive based on speaker identities. This can be achieved using speaker diarization and speaker linking. In our previous work, we proposed an efficient attribution system, using complete-linkage clustering, for conducting attribution of large sets of two-speaker telephone data. In this paper, we build on our proposed approach to achieve a robust system, applicable to multiple recording domains. To do this, we first extend the diarization module of our system to accommodate multi-speaker (>2) recordings. We achieve this through using a robust cross-likelihood ratio (CLR) threshold stopping criterion for clustering, as opposed to the original stopping criterion of two speakers used for telephone data. We evaluate this baseline diarization module across a dataset of Australian broadcast news recordings, showing a significant lack of diarization accuracy without previous knowledge of the true number of speakers within a recording. We thus propose applying an additional pass of complete-linkage clustering to the diarization module, demonstrating an absolute improvement of 20% in diarization error rate (DER). We then evaluate our proposed multi-domain attribution system across the broadcast news data, demonstrating achievable attribution error rates (AER) as low as 17%.
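A minimal sketch of the clustering mechanism described above: complete-linkage agglomerative clustering stopped by a distance threshold rather than by a known speaker count. The pairwise distances stand in for scores derived from cross-likelihood ratios between segments; the matrix and threshold below are hypothetical.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical symmetric distance matrix between 5 speech segments
# (smaller = more likely to be the same speaker).
D = np.array([
    [0.0, 0.2, 0.9, 0.8, 0.3],
    [0.2, 0.0, 0.8, 0.9, 0.4],
    [0.9, 0.8, 0.0, 0.3, 0.7],
    [0.8, 0.9, 0.3, 0.0, 0.8],
    [0.3, 0.4, 0.7, 0.8, 0.0],
])

Z = linkage(squareform(D), method="complete")

# Stop merging once the complete-linkage distance exceeds the threshold,
# rather than assuming a known number of speakers.
labels = fcluster(Z, t=0.5, criterion="distance")
print("cluster label per segment:", labels)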
Abstract:
This thesis developed semi-parametric regression models for estimating the spatio-temporal distribution of outdoor airborne ultrafine particle number concentration (PNC). The models developed incorporate multivariate penalised splines and random walks and autoregressive errors in order to estimate non-linear functions of space, time and other covariates. The models were applied to data from the "Ultrafine Particles from Traffic Emissions and Child" project in Brisbane, Australia, and to longitudinal measurements of air quality in Helsinki, Finland. The spline and random walk aspects of the models reveal how the daily trend in PNC changes over the year in Helsinki and the similarities and differences in the daily and weekly trends across multiple primary schools in Brisbane. Midday peaks in PNC in Brisbane locations are attributed to new particle formation events at the Port of Brisbane and Brisbane Airport.
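A minimal sketch of the penalised-spline component of such models: a smooth daily trend is estimated by penalising second differences of B-spline coefficients. The data, knots and smoothing parameter below are hypothetical, and the full models additionally include multivariate splines, random walks and autoregressive error terms that are not shown.

import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)

# Hypothetical diurnal PNC-like signal: hour of day vs. log-concentration.
hour = np.sort(rng.uniform(0, 24, 200))
y = (9 + 0.8 * np.sin(2 * np.pi * hour / 24)
     + 0.5 * np.exp(-((hour - 8) / 1.5) ** 2)
     + rng.normal(0, 0.2, hour.size))

# Cubic B-spline basis on an evenly spaced, clamped knot grid.
k = 3
interior = np.linspace(0, 24, 13)
knots = np.r_[[0] * k, interior, [24] * k]
n_basis = len(knots) - k - 1
B = np.column_stack([
    BSpline(knots, np.eye(n_basis)[j], k)(hour) for j in range(n_basis)
])

# Penalised least squares: penalise second differences of the coefficients.
lam = 5.0
D = np.diff(np.eye(n_basis), n=2, axis=0)
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
print("fitted trend at hours 0, 6, 12, 18:",
      np.interp([0, 6, 12, 18], hour, fit).round(2))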
Abstract:
Recent road safety statistics show that the decades-long decreasing trend in fatalities is stopping and stagnating. Statistics further show that crashes are mostly driven by human error, compared to other factors such as environmental conditions and mechanical defects. Within human error, the dominant error source is perceptive errors, which represent about 50% of the total. The next two sources are interpretation and evaluation, which together with perception account for more than 75% of human error related crashes. These statistics show that allowing drivers to perceive and understand their environment better, or supplementing them when they are clearly at fault, is a path to better assessment of road risk and, as a consequence, to further decreasing fatalities. To answer this problem, currently deployed driving assistance systems combine more and more information from diverse sources (sensors) to enhance the driver's perception of their environment. However, because of inherent limitations in range and field of view, these systems' perception of their environment remains largely limited to a small interest zone around a single vehicle. Such limitations can be overcome by increasing the interest zone through a cooperative process. Cooperative Systems (CS), a specific subset of Intelligent Transportation Systems (ITS), aim at compensating for local systems' limitations by associating embedded information technology and inter-vehicular communication technology (IVC). With CS, information sources are no longer limited to a single vehicle. From this distribution arises the concept of extended or augmented perception. Augmented perception allows extending an actor's perceptive horizon beyond its "natural" limits, not only by fusing information from multiple in-vehicle sensors but also by incorporating information obtained from remote sensors. The end result of an augmented perception and data fusion chain is known as an augmented map. It is a repository where any relevant information about objects in the environment, and the environment itself, can be stored in a layered architecture. This thesis aims at demonstrating that augmented perception has better performance than non-cooperative approaches, and that it can be used to successfully identify road risk. We found it was necessary to evaluate the performance of augmented perception, in order to obtain a better knowledge of its limitations. Indeed, while many promising results have already been obtained, the feasibility of building an augmented map from exchanged local perception information and, then, using this information beneficially for road users has not been thoroughly assessed yet. Neither have the limitations of augmented perception and its underlying technologies. Most notably, many questions remain unanswered as to IVC performance and its ability to deliver appropriate quality of service to support life-saving critical systems. This is especially true as the road environment is a complex, highly variable setting where many sources of imperfections and errors exist, not limited to IVC. We provide at first a discussion of these limitations and a performance model built to incorporate them, created from empirical data collected on test tracks. Our results are more pessimistic than the existing literature, suggesting that IVC limitations have been underestimated. Then, we develop a new CS-applications simulation architecture.
This architecture is used to obtain new results on the safety benefits of a cooperative safety application (EEBL), and then to support further study of augmented perception. At first, we confirm earlier results in terms of the decrease in the number of crashes, but raise doubts about the benefits in terms of crash severity. In the next step, we implement an augmented perception architecture tasked with creating an augmented map. Our approach is aimed at providing a generalist architecture that can use many different types of sensors to create the map, and which is not limited to any specific application. The data association problem is tackled with a Multiple Hypothesis Tracking (MHT) approach based on belief theory. Then, augmented and single-vehicle perceptions are compared in a reference driving scenario for risk assessment, taking into account the IVC limitations obtained earlier; we show their impact on the augmented map's performance. Our results show that augmented perception performs better than non-cooperative approaches, almost tripling the advance warning time before a crash. IVC limitations appear to have no significant effect on this performance, although this might be valid only for our specific scenario. Eventually, we propose a new approach using augmented perception to identify road risk through a surrogate: near-miss events. A CS-based approach is designed and validated to detect near-miss events, and then compared to a non-cooperative approach based on vehicles equipped with local sensors only. The cooperative approach shows a significant improvement in the number of events that can be detected, especially at higher rates of system deployment.
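As a simple illustration of using shared state for near-miss detection, the sketch below computes a time-to-collision (TTC) from an ego vehicle's own state and a remote vehicle's state received over IVC, and flags a near-miss when the TTC falls below a threshold. TTC is used here only as a stand-in surrogate; the thesis's actual near-miss criteria, thresholds and augmented-map structure are not reproduced.

from dataclasses import dataclass

@dataclass
class VehicleState:
    position_m: float   # longitudinal position along the road (m)
    speed_mps: float    # speed (m/s)

def time_to_collision(follower: VehicleState, leader: VehicleState) -> float:
    """TTC in seconds; infinity if the gap is not closing."""
    gap = leader.position_m - follower.position_m
    closing_speed = follower.speed_mps - leader.speed_mps
    if gap <= 0 or closing_speed <= 0:
        return float("inf")
    return gap / closing_speed

def is_near_miss(follower, leader, ttc_threshold_s=1.5):
    return time_to_collision(follower, leader) < ttc_threshold_s

# Hypothetical situation: remote vehicle braking hard 20 m ahead.
ego = VehicleState(position_m=0.0, speed_mps=27.8)      # ~100 km/h
remote = VehicleState(position_m=20.0, speed_mps=12.0)  # state received over IVC
print("TTC:", round(time_to_collision(ego, remote), 2), "s,",
      "near miss" if is_near_miss(ego, remote) else "no event")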
Abstract:
Every year a number of pedestrians are struck by trains, resulting in death and serious injury. While much research has been conducted on train-vehicle collisions, very little is currently known about the aetiology of train-pedestrian collisions. To date, scant research has been undertaken to investigate the demographics of rule breakers, the frequency of deliberate violation versus error making and the influence of the classic deterrence approach on subsequent behaviours. Aim: This study aimed to identify pedestrians' self-reported reasons for engaging in violations at crossings, the frequency and nature of rule breaking, and whether the threat of sanctions influences such events. Method: A questionnaire was administered to 511 participants of all ages. Results: Analysis revealed that pedestrians (particularly younger groups) were more likely to commit deliberate violations rather than make crossing errors (e.g., mistakes). The most frequent reasons given for deliberate violations were that participants were running late and did not want to miss their train, or that they believed the gate was taking too long to open and so might be malfunctioning. In regard to classical deterrence, an examination of the perceived threat of being apprehended and fined for a crossing violation revealed that participants reported the highest mean scores for swiftness of punishment, which suggests they were generally aware that they would receive an "on the spot" fine. However, the overall mean scores for certainty and severity of sanctions (for violating the rules) indicate that the participants did not perceive the certainty and severity of sanctions as very high. This paper will further discuss the research findings in regard to the development of interventions designed to improve pedestrian crossing safety.
Abstract:
The current state of knowledge in relation to first flush does not provide a clear understanding of the role of rainfall and catchment characteristics in influencing this phenomenon. This is attributed to inconsistent findings from research studies due to the unsatisfactory selection of first flush indicators and how first flush is defined. The research study discussed in this thesis provides the outcomes of a comprehensive analysis of the influence of rainfall and catchment characteristics on first flush behaviour in residential catchments. Two sets of first flush indicators are introduced in this study. These indicators were selected so that they explain, in a systematic manner, the characteristics associated with first flush. Stormwater samples and rainfall-runoff data were collected and recorded from stormwater monitoring stations established at three urban catchments at Coomera Waters, Gold Coast, Australia. In addition, historical data were also used to support the data analysis. Three water quality parameters were analysed, namely, total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The data analyses were primarily undertaken using multi criteria decision making methods, PROMETHEE and GAIA. Based on the data obtained, the pollutant load distribution curve (LV) was determined for the individual rainfall events and pollutant types. Accordingly, two sets of first flush indicators were derived from the curve, namely, the cumulative load wash-off for every 10% of runoff volume interval (interval first flush indicators or LV) from the beginning of the event and the actual pollutant load wash-off during a 10% increment in runoff volume (section first flush indicators or P). First flush behaviour showed significant variation with pollutant types. TSS and TP showed consistent first flush behaviour. However, the dissolved fraction of TN showed significant differences to TSS and TP first flush, while particulate TN showed similarities. Wash-off of TSS, TP and particulate TN during the first 10% of the runoff volume showed no influence from the corresponding rainfall intensity. This was attributed to the wash-off of weakly adhered solids on the catchment surface, referred to as the "short term pollutants" or "weakly adhered solids" load. However, wash-off after 10% of the runoff volume showed dependency on the rainfall intensity. This is attributed to the wash-off of strongly adhered solids being exposed when the weakly adhered solids diminish. The wash-off process was also found to depend on rainfall depth at the end part of the event, as the strongly adhered solids are loosened due to the impact of rainfall in the earlier part of the event. Events with high intensity rainfall bursts after 70% of the runoff volume did not demonstrate first flush behaviour. This suggests that rainfall pattern plays a critical role in the occurrence of first flush. Rainfall intensity (with respect to the rest of the event) that produces 10% to 20% of the runoff volume plays an important role in defining the magnitude of the first flush. Events can demonstrate a high magnitude first flush when the rainfall intensity occurring between 10% and 20% of the runoff volume is comparatively high, while low rainfall intensities during this period produce a low magnitude first flush. For events with first flush, the phenomenon is clearly visible up to 40% of the runoff volume.
This contradicts the common definition that first flush only exists if, for example, 80% of the pollutant mass is transported in the first 30% of the runoff volume. First flush behaviour for TN is different compared to TSS and TP. Apart from rainfall characteristics, the composition and the availability of TN on the catchment also play an important role in first flush. The analysis confirmed that events with low rainfall intensity can produce a high magnitude first flush for the dissolved fraction of TN, while high rainfall intensity produces a low magnitude dissolved TN first flush. This is attributed to the source limiting behaviour of dissolved TN wash-off, where there is high wash-off during the initial part of a rainfall event irrespective of the intensity. However, for particulate TN, the influence of rainfall intensity on first flush characteristics is similar to TSS and TP. The data analysis also confirmed that first flush can occur as a high magnitude first flush, a low magnitude first flush, or be non-existent. Investigation of the influence of catchment characteristics on first flush found that the key factors that influence the phenomenon are the location of the pollutant source, the spatial distribution of the pervious and impervious surfaces in the catchment, the drainage network layout and the slope of the catchment. This confirms that the first flush phenomenon cannot be evaluated based on a single or a limited set of parameters, as a number of catchment characteristics should be taken into account. Catchments where the pollutant source is located close to the outlet, with a high fraction of road surfaces, short travel time to the outlet and steep slopes, can produce a high wash-off load during the first 50% of the runoff volume. Rainfall characteristics have a comparatively dominant impact on the wash-off process compared to the catchment characteristics. In addition, the pollutant characteristics should also be taken into account in designing stormwater treatment systems due to the different wash-off behaviour. Analysis outcomes confirmed that there is a high TSS load during the first 20% of the runoff volume, followed by TN, which can extend up to 30% of the runoff volume. In contrast, a high TP load can exist during the initial and the end part of a rainfall event. This is related to the composition of TP available for wash-off.
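The two indicator sets defined above can be computed directly from an event's runoff and concentration time series, as in the following Python sketch. The event data are synthetic and for illustration only.

import numpy as np

def first_flush_indicators(flow_Ls, conc_mgL, dt_s=60):
    """Return (interval indicators LV, section indicators P) as fractions of
    total load, evaluated at each 10% increment of runoff volume."""
    vol = np.cumsum(flow_Ls) * dt_s                 # cumulative runoff (L)
    load = np.cumsum(flow_Ls * conc_mgL) * dt_s     # cumulative load (mg)
    v_frac, l_frac = vol / vol[-1], load / load[-1]
    deciles = np.arange(0.1, 1.01, 0.1)
    LV = np.interp(deciles, v_frac, l_frac)         # cumulative load at each decile
    P = np.diff(np.r_[0.0, LV])                     # load within each 10% increment
    return LV, P

# Hypothetical event: flow rises then recedes; TSS concentration peaks early.
t = np.arange(0, 120)
flow = 5 + 40 * np.exp(-((t - 30) / 20.0) ** 2)
tss = 20 + 180 * np.exp(-t / 25.0)
LV, P = first_flush_indicators(flow, tss)
print("cumulative load at 10%..100% of runoff:", LV.round(2))
print("load in each 10% runoff increment:     ", P.round(2))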
Abstract:
Individual variability in the acquisition, consolidation and extinction of conditioned fear potentially contributes to the development of fear pathology, including posttraumatic stress disorder (PTSD). Pavlovian fear conditioning is a key tool for the study of fundamental aspects of fear learning. Here, we used selected mouse lines of High and Low Pavlovian conditioned fear created from an advanced intercrossed line (AIL) in order to begin to identify the cellular basis of phenotypic divergence in Pavlovian fear conditioning. We investigated whether phosphorylated MAPK (p44/42 ERK/MAPK), a protein kinase required in the amygdala for the acquisition and consolidation of Pavlovian fear memory, is differentially expressed following Pavlovian fear learning in the High and Low fear lines. We found that following Pavlovian auditory fear conditioning, High and Low line mice differ in the number of pMAPK-expressing neurons in the dorsal subnucleus of the lateral amygdala (LAd). In contrast, this difference was not detected in the ventral medial (LAvm) or ventral lateral (LAvl) amygdala subnuclei, or in control animals. We propose that this apparent increase in plasticity at a known locus of fear memory acquisition and consolidation relates to intrinsic differences between the two fear phenotypes. These data provide important insights into the micro-network mechanisms encoding phenotypic differences in fear. Understanding the circuit-level cellular and molecular mechanisms that underlie individual variability in fear learning is critical for the development of effective treatments for fear-related illnesses such as PTSD.
Abstract:
The assessment of choroidal thickness from optical coherence tomography (OCT) images of the human choroid is an important clinical and research task, since it provides valuable information regarding the eye’s normal anatomy and physiology, and changes associated with various eye diseases and the development of refractive error. Due to the time consuming and subjective nature of manual image analysis, there is a need for the development of reliable objective automated methods of image segmentation to derive choroidal thickness measures. However, the detection of the two boundaries which delineate the choroid is a complicated and challenging task, in particular the detection of the outer choroidal boundary, due to a number of issues including: (i) the vascular ocular tissue is non-uniform and rich in non-homogeneous features, and (ii) the boundary can have a low contrast. In this paper, an automatic segmentation technique based on graph-search theory is presented to segment the inner choroidal boundary (ICB) and the outer choroidal boundary (OCB) to obtain the choroid thickness profile from OCT images. Before the segmentation, the B-scan is pre-processed to enhance the two boundaries of interest and to minimize the artifacts produced by surrounding features. The algorithm to detect the ICB is based on a simple edge filter and a directional weighted map penalty, while the algorithm to detect the OCB is based on OCT image enhancement and a dual brightness probability gradient. The method was tested on a large data set of images from a pediatric (1083 B-scans) and an adult (90 B-scans) population, which were previously manually segmented by an experienced observer. The results demonstrate the proposed method provides robust detection of the boundaries of interest and is a useful tool to extract clinical data.
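A simplified stand-in for the graph-search step is sketched below: a boundary is recovered as the minimum-cost path crossing a cost image from left to right, moving at most one pixel up or down per column. The cost here is simply an inverted vertical gradient of a synthetic B-scan; the paper's actual edge filters, directional weighted map penalty and dual brightness probability gradient are not reproduced.

import numpy as np

def min_cost_boundary(cost):
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = acc[lo:hi, c - 1]
            j = int(np.argmin(prev))
            acc[r, c] += prev[j]
            back[r, c] = lo + j
    # Trace back from the cheapest end point in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return np.array(path[::-1])          # boundary row index per column

# Synthetic "B-scan": dark-to-bright step whose depth drifts across columns.
rows, cols = 60, 80
depth = (30 + 8 * np.sin(np.linspace(0, 3, cols))).astype(int)
img = np.zeros((rows, cols))
for c in range(cols):
    img[depth[c]:, c] = 1.0
img += np.random.default_rng(1).normal(0, 0.05, img.shape)

grad = np.abs(np.diff(img, axis=0, prepend=img[:1]))   # vertical edge strength
boundary = min_cost_boundary(1.0 - grad / grad.max())
print("mean |detected - true| rows:", np.mean(np.abs(boundary - depth)).round(2))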
Abstract:
The 2 hour game jam was performed as part of the State Library of Queensland 'Garage Gamer' series of events, summer 2013, at the SLQ exhibition. An aspect of the exhibition was the series of 'Level Up' game nights. We hosted the first of these, under the auspices of brIGDA, Game On. It was a party, but the focal point of the event was a live streamed 2 hour game jam. Game jams have become popular amongst the game development and design community in recent years, particularly with the growth of the Global Game Jam, a yearly event which brings thousands of game makers together across different sites in different countries. Other established jams take place on-line, for example the Ludum Dare challenge which has been running since 2002. Other challenges follow the same model in more intimate circumstances, and it is now common to find institutions and groups holding their own small local game making jams. There are variations around the format, some jams are more competitive than others for example, but a common aspect is the creation of an intense creative crucible centred around team work and 'accelerated game development'. Works (games) produced during these intense events often display more experimental qualities than those undertaken as commercial projects. In part this is because the typical jam is started with a conceptual design brief, perhaps a single word, or in the case of the specific game jam described in this paper, three words. Teams have to envision the challenge key word/s as a game design using whatever skills and technologies they can, and produce a finished working game in the time given. Game jams thus provide design researchers with extraordinary fodder, and recent years have also seen a number of projects which seek to illuminate the design process as seen in these events. For example, Gaydos, Harris and Martinez discuss the opportunity of the jam to expose students to principles of design process and design spaces (2011). Rouse muses on the game jam 'as radical practice' and a 'corrective to game creation as it is normally practiced'. His observations about his own experience in a jam emphasise the same artistic endeavour foregrounded earlier, where the experience is about creation that is divorced from the instrumental motivations of commercial game design (Rouse 2011) and where the focus is on process over product. Other participants remark on the social milieu of the event as a critical factor and the collaborative opportunity as a rich site to engage participants in design processes (Shin et al, 2012). Shin et al are particularly interested in the notion of the site of the process and the ramifications of participants being in the same location. They applaud the more localised event where there is an emphasis on local participation and collaboration. For other commentators, it is specifically the social experience in the place of the jam that is the most important aspect (see Keogh 2011), not the material site but rather the physical embodied experience of 'being there' and being part of the event. Participants talk about game jams they have attended in a similar manner to those observations made by Dourish, where the experience is layered on top of the physical space of the event (Dourish 2006). It is as if the event has taken on qualities of place, where we find echoes of Tuan's description of a particular site having an aura of history that makes it a very different place, redolent and evocative (Tuan 1977).
The 2 hour game jam held during the SLQ Garage Gamer program was all about social experience.
Abstract:
IEEE 802.11p is the new standard for Inter-Vehicular Communications (IVC) using the 5.9 GHz frequency band, as part of the DSRC framework; it will enable applications based on Cooperative Systems. Simulation is widely used to estimate or verify the potential benefits of such cooperative applications, notably in terms of safety for drivers. We have developed a performance model for 802.11p that can be used by simulations of cooperative applications (e.g. collision avoidance) without requiring intricate models of the whole IVC stack. Instead, it provides a straightforward yet realistic model of IVC performance. Our model uses data from extensive field trials to infer the correlation between speed, distance and performance metrics such as maximum range, latency and frame loss. Then, we improve this model to limit the number of profiles that have to be generated when there are more than a few emitter-receiver pairs in a given location. Our model generates realistic performance for rural or suburban environments among small groups of IVC-equipped vehicles and road side units.
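The kind of lightweight interface such a performance model can expose to an application-level simulation is sketched below: given the transmitter-receiver distance (and, in the full model, relative speed and environment), it returns an interpolated frame-loss probability and latency and samples whether a broadcast frame arrives. The profile values are hypothetical placeholders, not the measured ones from the field trials.

import random
import numpy as np

# Hypothetical profile: distance (m) -> (frame loss probability, latency in ms).
DIST = np.array([0, 100, 200, 300, 400, 500])
LOSS = np.array([0.02, 0.05, 0.15, 0.35, 0.70, 1.00])
LAT_MS = np.array([1.0, 1.2, 1.5, 2.5, 4.0, 6.0])

def transmit(distance_m, rng=random.random):
    """Return (delivered, latency_ms) for a single broadcast frame."""
    loss = float(np.interp(distance_m, DIST, LOSS))
    latency = float(np.interp(distance_m, DIST, LAT_MS))
    return (rng() > loss), latency

random.seed(2)
for d in (50, 250, 450):
    ok, lat = transmit(d)
    print(f"{d:3d} m: delivered={ok}, latency={lat:.1f} ms")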
Abstract:
Despite the existence of air quality guidelines in Australia and New Zealand, the concentrations of particulate matter have exceeded these guidelines on several occasions. To identify the sources of particulate matter, examine the contributions of the sources to the air quality in specific areas and estimate the most likely locations of the sources, a growing number of source apportionment studies have been conducted. This paper provides an overview of the locations of the studies and the salient features of the results obtained, and offers some perspectives for the improvement of future receptor modelling of air quality in these countries. The review revealed that, because of its advantages over alternative models, Positive Matrix Factorisation (PMF) was the most commonly applied model in the studies. Although there were differences in the sources identified in the studies, some general trends were observed. While biomass burning was a common problem in both countries, the characteristics of this source varied from one location to another. In New Zealand, domestic heating was the highest contributor to particle levels on days when the guidelines were exceeded. On the other hand, forest back-burning was a concern in Brisbane, while marine aerosol was a major source in most studies. Secondary sulphate, traffic emissions, industrial emissions and re-suspended soil were also identified as important sources. Some unique species, for example volatile organic compounds, and particle size distributions were incorporated into some of the studies, with results that have significant ramifications for the improvement of air quality. Overall, the application of source apportionment models provided useful information that can assist the design of epidemiological studies and refine air pollution reduction strategies in Australia and New Zealand.
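For readers unfamiliar with the technique, the sketch below shows a deliberately simplified, uncertainty-weighted non-negative factorisation in the spirit of PMF: a species-by-sample concentration matrix X is split into source contributions G and source profiles F (X ≈ G F), with uncertain measurements down-weighted. Real PMF implementations (such as EPA PMF) use more elaborate algorithms and rotational controls; the data and uncertainties here are synthetic.

import numpy as np

def weighted_nmf(X, sigma, n_factors, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    W = 1.0 / sigma**2                       # per-element weights from uncertainties
    n, m = X.shape
    G = rng.random((n, n_factors)) + 0.1     # source contributions (samples x factors)
    F = rng.random((n_factors, m)) + 0.1     # source profiles (factors x species)
    eps = 1e-12
    for _ in range(n_iter):
        GF = G @ F
        G *= ((W * X) @ F.T) / ((W * GF) @ F.T + eps)
        GF = G @ F
        F *= (G.T @ (W * X)) / (G.T @ (W * GF) + eps)
    return G, F

# Synthetic example: two "sources" mixed into 50 samples of 6 species.
rng = np.random.default_rng(1)
true_F = np.array([[5, 1, 0.2, 3, 0.1, 0.5],     # e.g. a traffic-like profile
                   [0.2, 4, 2, 0.1, 3, 1]])      # e.g. a biomass-burning-like profile
true_G = rng.gamma(2.0, 1.0, (50, 2))
X = np.clip(true_G @ true_F + rng.normal(0, 0.1, (50, 6)), 1e-6, None)
sigma = 0.1 + 0.05 * X                           # hypothetical measurement uncertainties

G, F = weighted_nmf(X, sigma, n_factors=2)
print("recovered profiles (rows, unnormalised):\n", F.round(2))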
Abstract:
Lean body mass (LBM) and muscle mass remain difficult to quantify in large epidemiological studies due to the non-availability of inexpensive methods. We therefore developed anthropometric prediction equations to estimate LBM and appendicular lean soft tissue (ALST) using dual energy X-ray absorptiometry (DXA) as a reference method. Healthy volunteers (n = 2220; 36% females; age 18-79 y) representing a wide range of body mass index (14-44 kg/m2) participated in this study. Their LBM, including ALST, was assessed by DXA along with anthropometric measurements. The sample was divided into prediction (60%) and validation (40%) sets. In the prediction set, a number of prediction models were constructed using the DXA-measured LBM and ALST estimates as dependent variables and a combination of anthropometric indices as independent variables. These equations were cross-validated in the validation set. Simple equations using age, height and weight explained > 90% of the variation in LBM and ALST in both men and women. Additional variables (hip and limb circumferences and the sum of skinfold thicknesses) increased the explained variation by 5-8% in the fully adjusted models predicting LBM and ALST. More complex equations using all the above anthropometric variables could predict the DXA-measured LBM and ALST accurately, as indicated by a low standard error of the estimate (LBM: 1.47 kg and 1.63 kg for men and women, respectively), as well as good agreement by Bland-Altman analyses. These equations could be a valuable tool in large epidemiological studies assessing these body compartments in Indians and other population groups with similar body composition.
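The general workflow described above can be summarised with the following sketch on synthetic data: a simple prediction equation for LBM from age, height, weight and sex is fitted on a 60% prediction set and evaluated on the remaining 40% using the standard error of the estimate and Bland-Altman statistics. The data and coefficients are synthetic, not the study's equations.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
sex = rng.integers(0, 2, n)                       # 1 = male, 0 = female
age = rng.uniform(18, 79, n)
height = rng.normal(160 + 12 * sex, 7, n)         # cm
weight = rng.normal(60 + 12 * sex, 10, n)         # kg
# Synthetic "DXA" LBM loosely tied to the predictors.
lbm = 0.35 * weight + 0.25 * height + 7 * sex - 0.05 * age - 15 + rng.normal(0, 1.5, n)

X = np.column_stack([np.ones(n), age, height, weight, sex])
idx = rng.permutation(n)
fit_idx, val_idx = idx[:600], idx[600:]           # 60% prediction / 40% validation split

coef, *_ = np.linalg.lstsq(X[fit_idx], lbm[fit_idx], rcond=None)
pred = X[val_idx] @ coef
resid = lbm[val_idx] - pred

see = np.sqrt(np.sum(resid**2) / (len(val_idx) - X.shape[1]))  # standard error of estimate
bias = resid.mean()
loa = 1.96 * resid.std(ddof=1)                                 # Bland-Altman limits of agreement
print(f"SEE = {see:.2f} kg; Bland-Altman bias = {bias:.2f} kg, "
      f"limits of agreement = +/-{loa:.2f} kg")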