374 results for Inter-hemispheric dynamic
Abstract:
The cascading appearance-based (CAB) feature extraction technique has established itself as the state of the art in extracting dynamic visual speech features for speech recognition. In this paper, we focus on investigating the effectiveness of this technique for the related speaker verification application. By investigating the speaker verification ability of each stage of the cascade, we demonstrate that the same steps taken to reduce static speaker and environmental information for the visual speech recognition application also provide similar improvements for visual speaker recognition. A further study compares synchronous HMM (SHMM) based fusion of CAB visual features and traditional perceptual linear predictive (PLP) acoustic features, showing that the higher complexity inherent in the SHMM approach does not appear to provide any improvement in the final audio-visual speaker verification system over simpler utterance-level score fusion.
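The utterance-level score fusion that the abstract favours over SHMM fusion can be sketched as a weighted sum of per-modality verification scores; the weight and threshold below are illustrative placeholders, not values from the paper, and would normally be tuned on a development set:

```python
def fused_score(audio_score, visual_score, w_audio=0.7):
    # Linear utterance-level fusion of acoustic and visual scores.
    # w_audio is a hypothetical weight; in practice it is tuned on held-out data.
    return w_audio * audio_score + (1.0 - w_audio) * visual_score

def accept(audio_score, visual_score, threshold=0.5, w_audio=0.7):
    # Accept the claimed identity when the fused score clears the threshold.
    return fused_score(audio_score, visual_score, w_audio) >= threshold
```

The appeal of this scheme is that each modality's classifier runs independently and only two scalars are combined per utterance, avoiding the joint state-space modelling that the SHMM approach requires.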
Abstract:
There are several noninvasive techniques for assessing the kinetics of the tear film, but no comparative studies have been conducted to evaluate their efficacies. Our aim is to test and compare techniques based on high-speed videokeratoscopy (HSV), dynamic wavefront sensing (DWS), and lateral shearing interferometry (LSI). Algorithms are developed to estimate the tear film build-up time (TBLD) and the average tear film surface quality in the stable phase of the interblink interval (TFSQAv). Moderate but significant correlations are found between TBLD measured with LSI and DWS based on vertical coma (Pearson's r2=0.34, p<0.01) and higher-order rms (r2=0.31, p<0.01), as well as between TFSQAv measured with LSI and HSV (r2=0.35, p<0.01), and between LSI and DWS based on the rms fit error (r2=0.40, p<0.01). No significant correlation is found between HSV and DWS. All three techniques estimate tear film build-up time to be below 2.5 sec, and they achieve a remarkably close median value of 0.7 sec. HSV appears to be the most precise method for measuring tear film surface quality. LSI appears to be the most sensitive method for analyzing tear film build-up.
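The reported correlations are squared Pearson coefficients between per-subject estimates from two instruments. A minimal sketch of how such an r2 might be computed; the sample build-up times are invented for illustration, not data from the study:

```python
import numpy as np

def pearson_r2(x, y):
    """Squared Pearson correlation between two measurement series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]  # off-diagonal entry is Pearson's r
    return r * r

# Hypothetical build-up times (seconds) from two instruments on the same subjects.
lsi_tbld = [0.6, 0.8, 1.2, 0.5, 2.0, 0.7]
dws_tbld = [0.7, 0.9, 1.0, 0.6, 1.8, 0.9]
print(pearson_r2(lsi_tbld, dws_tbld))
```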
Abstract:
This study explores the three-dimensional nonlinear dynamic responses of typical tall buildings with and without setbacks under blast loading. These 20-storey reinforced concrete buildings have been designed for normal (dead, live and wind) loads. The influence of the setbacks on the lateral load response due to blasts, in terms of peak deflections, accelerations, inter-storey drift and bending moments at critical locations (including hinge formation), was investigated. Structural response predictions were performed with a commercially available three-dimensional finite element analysis programme using non-linear direct integration time history analyses. Results obtained for buildings with different setbacks were compared and conclusions drawn. The comparisons revealed that buildings with setbacks that shield the tower above the setback level from blast loading show considerably better response in terms of peak displacement and inter-storey drift than buildings without setbacks. Rotational accelerations were found to depend on the periods of the rotational modes. Abrupt changes in moments and shears are experienced near the levels of the setbacks. Typical twenty-storey tall buildings with shear walls and frames that are designed for normal loads alone perform reasonably well, without catastrophic collapse, when subjected to a blast equivalent to 500 kg of TNT at a standoff distance of 10 m.
Abstract:
The purpose of this study is to contribute to the cross-disciplinary body of literature on identity and organisational culture. This study empirically investigated the Hatch and Schultz (2002) Organisational Identity Dynamics (OID) model to examine linkages between identity, image, and organisational culture. This study used processes defined in the OID model as a theoretical frame by which to understand the relationships between actual and espoused identity manifestations across visual identity, corporate identity, and organisational identity. The linking processes of impressing, mirroring, reflecting, and expressing were discussed at three unique levels in the organisation. The overarching research question, “How does the organisational identity dynamics process manifest itself in practice at different levels within an organisation?”, was used as a means of providing empirical grounding for the previously theoretical OID model. Case study analysis was utilised to provide exploratory data across the organisational groups of Level A - Senior Marketing and Corporate Communications Management, Level B - Marketing and Corporate Communications Staff, and Level C - Non-Marketing Managers and Employees. Data was collected via 15 in-depth interviews, with documentary analysis used as a supporting mechanism to provide triangulation in analysis. Data was analysed against the impressing, mirroring, reflecting, and expressing constructs, with specific criteria developed from the literature to provide a detailed analysis of each process. Conclusions revealed marked differences in the ways in which OID processes occurred across different levels, with implications for the ways in which VI, CI, and OI interact to develop holistic identity across organisational levels.
Implications for theory detail the need to understand and utilise cultural understanding in identity programs as well as the value in developing identity communications which represent an actual rather than an espoused position.
Abstract:
The genetic structure of rice tungro bacilliform virus (RTBV) populations within and between growing sites was analyzed in a collection of natural field isolates from different rice varieties grown in eight tungro-endemic sites of the Philippines. Total DNA extracts from 345 isolates were digested with EcoRV restriction enzyme and hybridized with a full-length probe of RTBV, a procedure shown in preliminary experiments to be capable of revealing high levels of polymorphism in RTBV field isolates. In the total population, 17 distinct EcoRV-based genome profiles (genotypes) were identified and used as indicators of virus diversity. Distinct sets of genotypes occurred in Isabela and North Cotabato provinces, suggesting a geographic isolation of virus populations. However, among the sites in each province, there were few significant differences in the genotype compositions of virus populations. The number of genotypes detected at a site varied from two to nine, with a few genotypes dominating. In general, the isolates at a site persisted from season to season, indicating genetic stability of the local virus population. Over the sampling time, IRRI rice varieties, which carry green leafhopper resistance genes, supported virus populations similar to those supported by other varieties, indicating that the variety of the host exerted no apparent selection pressure. Insect transmission experiments on selected RTBV field isolates showed that dramatic shifts in genotype and phenotype distributions can occur in response to host/environmental shifts.
Abstract:
Ocean processes are dynamic and complex events that occur on multiple different spatial and temporal scales. To obtain a synoptic view of such events, ocean scientists focus on the collection of long-term time series data sets. Generally, these time series measurements are continually provided in real or near-real time by fixed sensors, e.g., buoys and moorings. In recent years, an increase in the utilization of mobile sensor platforms, e.g., Autonomous Underwater Vehicles, has been seen to enable dynamic acquisition of time series data sets. However, these mobile assets are not utilized to their full capabilities, generally only performing repeated transects or user-defined patrolling loops. Here, we provide an extension to repeated patrolling of a designated area. Our algorithms provide the ability to adapt a standard mission to increase information gain in areas of greater scientific interest. By implementing a velocity control optimization along the predefined path, we are able to increase or decrease spatiotemporal sampling resolution to satisfy the sampling requirements necessary to properly resolve an oceanic phenomenon. We present a path planning algorithm that defines a sampling path, which is optimized for repeatability. This is followed by the derivation of a velocity controller that defines how the vehicle traverses the given path. The application of these tools is motivated by an ongoing research effort to understand the oceanic region off the coast of Los Angeles, California. The computed paths are implemented with the computed velocities onto autonomous vehicles for data collection during sea trials. Results from this data collection are presented and compared for analysis of the proposed technique.
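A velocity controller of the kind described, slowing the vehicle where scientific interest is high to densify spatiotemporal sampling while respecting a mission time budget, might be sketched as follows. The inverse-interest speed law, the speed limits and the budget-rescaling step are assumptions for illustration, not the authors' formulation:

```python
import numpy as np

def speeds_for_interest(seg_lengths, interest, total_time, v_min=0.2, v_max=2.0):
    """Allocate a speed per path segment: slower (denser sampling) where
    scientific interest is high, faster elsewhere, keeping the total
    traversal time close to the requested budget (limits permitting)."""
    seg_lengths = np.asarray(seg_lengths, dtype=float)
    interest = np.asarray(interest, dtype=float)
    # Nominal speed inversely proportional to interest, clipped to vehicle limits.
    v = np.clip(1.0 / interest, v_min, v_max)
    # Rescale so the traversal time sum(L/v) matches the budget, then re-clip.
    scale = np.sum(seg_lengths / v) / total_time
    return np.clip(v * scale, v_min, v_max)
```

For example, a three-segment path whose middle segment is four times as interesting gets a proportionally lower speed there, so the vehicle spends more of its fixed time budget sampling that region.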
Abstract:
This paper presents Multi-Step A* (MSA*), a search algorithm based on A* for multi-objective 4D vehicle motion planning (three spatial and one time dimension). The research is principally motivated by the need for offline and online motion planning for autonomous Unmanned Aerial Vehicles (UAVs). For UAVs operating in large, dynamic and uncertain 4D environments, the motion plan consists of a sequence of connected linear tracks (or trajectory segments). The track angle and velocity are important parameters that are often restricted by assumptions and grid geometry in conventional motion planners. Many existing planners also fail to incorporate multiple decision criteria and constraints such as wind, fuel, dynamic obstacles and the rules of the air. It is shown that MSA* finds a cost optimal solution using variable length, angle and velocity trajectory segments. These segments are approximated with a grid based cell sequence that provides an inherent tolerance to uncertainty. Computational efficiency is achieved by using variable successor operators to create a multi-resolution, memory efficient lattice sampling structure. Simulation studies on the UAV flight planning problem show that MSA* meets the time constraints of online replanning and finds paths of equivalent cost but in a quarter of the time (on average) of vector neighbourhood based A*.
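MSA* builds on the classic A* search loop. As a point of reference, a minimal 4-connected grid A* with a Manhattan heuristic is sketched below; MSA* replaces the fixed unit-step neighbourhood used here with variable-length, variable-angle, variable-velocity successor operators over a 4D lattice, which this sketch does not attempt:

```python
import heapq
from itertools import count

def a_star(grid, start, goal):
    """Minimal grid A*; grid cells are 0 (free) or 1 (blocked)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tick = count()  # tie-breaker so the heap never compares nodes/parents
    open_set = [(h(start), next(tick), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue  # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:  # reconstruct path by walking parents back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get(nbr, float("inf")):
                    g_best[nbr] = ng
                    heapq.heappush(open_set, (ng + h(nbr), next(tick), ng, nbr, node))
    return None  # goal unreachable
```

The fixed grid neighbourhood is exactly the restriction on track angle and velocity that the abstract criticises; MSA*'s variable successor operators lift it while keeping the same best-first expansion structure.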
Abstract:
An experimental programme in 2007 used three air suspended heavy vehicles travelling over typical urban roads to determine whether dynamic axle-to-chassis forces could be reduced by using larger-than-standard diameter longitudinal air lines. This paper presents methodology, interim analysis and partial results from that programme. Alterations to dynamic measures derived from axle-to-chassis forces for the case of standard-sized longitudinal air lines vs. the test case where larger longitudinal air lines were fitted are presented and discussed. This leads to conclusions regarding the possibility that dynamic loadings between heavy vehicle suspensions and chassis may be reduced by fitting larger longitudinal air lines to air-suspended heavy vehicles. Reductions in the shock and vibration loads to heavy vehicle suspension components could lead to lighter and more economical chassis and suspensions. This could therefore lead to reduced tare and increased payloads without an increase in gross vehicle mass.
Abstract:
The paper provides an assessment of the performance of commercial Real Time Kinematic (RTK) systems over longer than recommended inter-station distances. The experiments were set up to test and analyse solutions from the i-MAX, MAX and VRS systems operated over three triangle-shaped network cells, with average inter-station distances of 69 km, 118 km and 166 km. The performance characteristics appraised included initialisation success rate, initialisation time, RTK position accuracy and availability, ambiguity resolution risk and RTK integrity risk, in order to provide a wider perspective on the performance of the systems under test.

The results showed that the performance of all network RTK solutions assessed was affected to a similar degree by the increase in inter-station distance. The MAX solution achieved the highest initialisation success rate, 96.6% on average, albeit with a longer initialisation time. The two VRS approaches achieved a lower initialisation success rate of 80% over the large triangle. In terms of RTK positioning accuracy after successful initialisation, the results indicated good agreement between the actual error growth, in both horizontal and vertical components, and the accuracy specified by the manufacturers in RMS and parts-per-million (ppm) terms.

Additionally, the VRS approaches performed better than MAX and i-MAX when tested on the standard triangle network with a mean inter-station distance of 69 km. However, as the inter-station distance increases, the network RTK software may fail to generate VRS corrections and instead fall back to the nearest single-base RTK (or RAW) mode. The position uncertainty occasionally exceeded 2 metres, showing that the RTK rover software was using an incorrectly fixed ambiguity solution to estimate the rover position rather than automatically dropping back to an ambiguity-float solution.
Results identified that the risk of incorrectly resolving ambiguities reached 18%, 20%, 13% and 25% for i-MAX, MAX, Leica VRS and Trimble VRS respectively when operating over the large triangle network. Additionally, the Coordinate Quality indicator values given by the Leica GX1230 GG rover receiver tended to be over-optimistic and did not reliably flag incorrectly fixed integer ambiguity solutions. In summary, this independent assessment has identified problems and failures that can occur in all of the systems tested, especially when they are pushed beyond their recommended limits. While such failures are expected, they offer useful insights into where users should be wary and how manufacturers might improve their products. The results also demonstrate that integrity monitoring of RTK solutions is necessary for precision applications, and thus deserves serious attention from researchers and system providers.
Abstract:
Segmentation of novel or dynamic objects in a scene, often referred to as background subtraction or foreground segmentation, is critical for robust high-level computer vision applications such as object tracking, object classification and recognition. However, automatic real-time segmentation for robotics still poses challenges including global illumination changes, shadows, inter-reflections, colour similarity of foreground to background, and cluttered backgrounds. This paper introduces depth cues provided by structure from motion (SFM) for interactive segmentation to alleviate some of these challenges. In this paper, two prevailing interactive segmentation algorithms are compared: Lazysnapping [Li et al., 2004] and Grabcut [Rother et al., 2004], both based on graph-cut optimisation [Boykov and Jolly, 2001]. The algorithms are extended to include depth cues rather than colour only as in the original papers. Results show that interactive segmentation based on colour and depth cues enhances segmentation performance, with a lower error with respect to ground truth.
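Extending a graph-cut segmenter with depth amounts to mixing a depth term into the pairwise smoothness weight between neighbouring pixels. The function below is an illustrative sketch of such a combined weight; the exponential contrast form follows the colour-only terms in the cited papers, but the particular mixing scheme and parameter names here are assumptions:

```python
import numpy as np

def edge_weight(color_a, color_b, depth_a, depth_b,
                beta_c=1.0, beta_d=1.0, w_d=0.5):
    """Pairwise smoothness weight mixing colour and depth contrast.
    High weight = similar pixels, expensive to cut; a depth discontinuity
    lowers the weight, encouraging a cut at the object boundary."""
    dc = np.sum((np.asarray(color_a, float) - np.asarray(color_b, float)) ** 2)
    dd = (float(depth_a) - float(depth_b)) ** 2
    # Convex combination of colour and depth contrast terms.
    return (1.0 - w_d) * np.exp(-beta_c * dc) + w_d * np.exp(-beta_d * dd)
```

With w_d = 0 this reduces to the colour-only weight of the original algorithms, so the depth cue can be introduced gradually and disabled where SFM depth is unreliable.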
Abstract:
Luxury is a quality that is difficult to define, as the historical concept of luxury appears to be both dynamic and culturally specific. The everyday definition explains a ‘luxury’ in relation to a necessity: a luxury (product or service) is defined as something that consumers want rather than need. However, the growth of global markets has seen a boom in what are now referred to as ‘luxury brands’. This branding of products as luxury has resulted in a change in the way consumers understand luxury goods and services. In their attempt to characterize a luxury brand, Fionda & Moore, in their article “The anatomy of a Luxury Brand”, summarize a range of critical conditions that are in addition to product branding “... including product and design attributes of quality, craftsmanship and innovative, creative and unique products” (Fionda & Moore, 2009). For the purposes of discussing fashion design, however, quality and craftsmanship are inseparable, while creativity and innovation exist under different conditions. The terms ‘creative’ and ‘innovative’ are often used interchangeably and are connected with most descriptions of the design process, defining ‘design’ and ‘fashion’ in many cases. Christian Marxt and Fredrik Hacklin identify this condition in their paper “Design, product development, innovation: all the same in the end?” (Marxt & Hacklin, 2005) and suggest that design communities should be aware that the distinction between these terms, whilst once quite definitive, is narrowing to the point where they will mean the same thing. In relation to theory building in the discipline this could pose significant problems. Brett Richards (2003) identifies innovation as different from creativity in that innovation aims to transform and implement rather than simply explore and invent.
Considering this distinction, particularly in relation to luxury branding, may affect the way in which design can contribute to changing how luxury fashion goods are perceived in a polarised fashion market, namely by suggesting that ‘luxury’ is what consumers need rather than the ‘pile it high, sell it cheap’ fashion that the current market dynamic would indicate they want. This paper attempts to explore the role of innovation as a key contributing factor in luxury concepts, in particular the relationship between innovation and creativity, the conditions which enable innovation, the role of craftsmanship in innovation, and design innovation in relation to luxury fashion products. An argument is presented that technological innovation can be demonstrated to be a common factor in the development of luxury fashion products, and that the connection between designer and maker will play an important role in the development of luxury fashion goods for a sustainable fashion industry.
Abstract:
This chapter considers how teachers can make a difference to the kinds of literacy young people take up. Increasingly, researchers and policy-makers see literacy as an ensemble of socio-cultural situated practices rather than as a unitary skill. Accordingly, the differences in what young people come to do with literacy, in and out of school, confront us more directly. If literacy development involves assembling dynamic repertoires of practices, it is crucial to consider what different groups of children growing up and going to school in different places have access to and make investments in over time; the kinds of literate communities from which some are excluded or included; and how educators make a difference to the kinds of literate trajectories and identities young people put together.
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no “gold standard” test is currently available to assess the tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and is the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to the scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filtering and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact-lens-wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural blinking and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circles pattern into a quasi-straight-lines image from which a block statistics value is extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has given some insight into the dynamics during this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series have been reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order so that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality in the future.
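The polar-transform block-statistics metric described in this abstract might be sketched as follows. The grid sizes, nearest-neighbour sampling and the particular block statistic (mean per-block standard deviation) are illustrative choices, not the thesis implementation:

```python
import numpy as np

def polar_unwrap(img, center, n_r=32, n_theta=64):
    """Resample an image of concentric rings onto an (r, theta) grid so the
    rings become quasi-straight lines; nearest-neighbour sampling for brevity."""
    cy, cx = center
    r_max = min(cy, cx, img.shape[0] - 1 - cy, img.shape[1] - 1 - cx)
    r = np.linspace(1, r_max, n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]

def block_roughness(polar, block=8):
    """Mean per-block standard deviation over the unwrapped image: low for a
    regular ring pattern, higher when the tear film disturbs the pattern."""
    h = (polar.shape[0] // block) * block
    w = (polar.shape[1] // block) * block
    blocks = polar[:h, :w].reshape(h // block, block, w // block, block)
    return float(blocks.std(axis=(1, 3)).mean())
```

On a clean synthetic ring image the unwrapped rows are nearly constant along theta, so the block statistic stays low; disturbing the pattern (e.g. with noise) raises it, which is the behaviour a TFSQ metric needs.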
Abstract:
In this contribution, a stability analysis for a dynamic voltage restorer (DVR) connected to a weak ac system containing a dynamic load is presented using continuation techniques and bifurcation theory. The system dynamics are explored through the continuation of periodic solutions of the associated dynamic equations. The switching process in the DVR converter is taken into account to trace the stability regions through a suitable mathematical representation of the DVR converter. The stability regions in the Thevenin equivalent plane are computed. In addition, the stability regions in the control-gains space, as well as the contour lines for different Floquet multipliers, are computed. Moreover, the DVR converter model employed in this contribution avoids the need to develop the very complicated iterative-map approaches used in conventional bifurcation analysis of converters. The continuation method and the DVR model can take into account dynamic and nonlinear loads and any network topology, since the analysis is carried out directly from the state-space equations. The bifurcation approach is shown to be both computationally efficient and robust, since it eliminates the need for numerically critical and long-lasting transient simulations.
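The stability of a periodic solution is read off from its Floquet multipliers: the eigenvalues of the monodromy matrix, obtained by integrating the variational equation over one period. A small numerical sketch, with an RK4 integrator and a constant-coefficient damped oscillator as a check case (the example system is illustrative, not the DVR model of the paper):

```python
import numpy as np

def monodromy(A_of_t, T, n_steps=2000):
    """Integrate the variational equation X' = A(t) X over one period [0, T]
    with classical RK4, starting from the identity; the result is the
    monodromy matrix of the periodic orbit."""
    dim = A_of_t(0.0).shape[0]
    X = np.eye(dim)
    h = T / n_steps
    for k in range(n_steps):
        t = k * h
        k1 = A_of_t(t) @ X
        k2 = A_of_t(t + h / 2) @ (X + h / 2 * k1)
        k3 = A_of_t(t + h / 2) @ (X + h / 2 * k2)
        k4 = A_of_t(t + h) @ (X + h * k3)
        X = X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return X

def floquet_multipliers(A_of_t, T):
    """Eigenvalues of the monodromy matrix; the orbit is asymptotically
    stable when all multipliers lie strictly inside the unit circle."""
    return np.linalg.eigvals(monodromy(A_of_t, T))
```

For a constant A the multipliers are exactly exp(lambda * T) for the eigenvalues lambda of A, which gives a convenient correctness check before applying the same machinery to a genuinely time-periodic switched-converter model.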