82 results for High-frequency data


Relevance:

100.00%

Publisher:

Abstract:

As financial markets have become increasingly integrated internationally, the topic of volatility transmission across these markets has become more important. This thesis investigates how the volatility patterns of the world's main financial centres differ across foreign exchange, equity, and bond markets.

Relevance:

100.00%

Publisher:

Abstract:

Map-matching algorithms that utilise road segment connectivity along with other data (i.e. position, speed and heading) in the process of map-matching are normally suitable for high-frequency (1 Hz or higher) positioning data from GPS. When such map-matching algorithms are applied to low-frequency data (such as data from a fleet of private cars, buses, light-duty vehicles or smartphones), their performance drops to around 70% in terms of correct link identification, especially in urban and suburban road networks. This level of performance may be insufficient for some real-time Intelligent Transport System (ITS) applications and services, such as estimating link travel time and speed from low-frequency GPS data. Therefore, this paper develops a new weight-based shortest-path and vehicle-trajectory-aided map-matching (stMM) algorithm that enhances the map-matching of low-frequency positioning data on a road map. The well-known A* search algorithm is employed to derive the shortest path between two points while taking into account both link connectivity and turn restrictions at junctions. In the developed stMM algorithm, two additional weights related to the shortest path and the vehicle trajectory are considered: one shortest-path-based weight compares the distance along the shortest path with the distance along the vehicle trajectory, while the other is associated with the heading difference of the vehicle trajectory. The developed stMM algorithm is tested using a series of real-world datasets of varying frequencies (i.e. 1 s, 5 s, 30 s and 60 s sampling intervals). A high-accuracy integrated navigation system (a high-grade inertial navigation system and a carrier-phase GPS receiver) is used to measure the accuracy of the developed algorithm. The results suggest that the algorithm identifies 98.9% of the links correctly for 30 s GPS data. Omitting the information from the shortest path and vehicle trajectory, the accuracy of the algorithm reduces to about 73% in terms of correct link identification. The algorithm can process on average 50 positioning fixes per second, making it suitable for real-time ITS applications and services.
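As a rough illustration of the two additional weights described above, the Python sketch below scores a candidate link by (a) how closely the shortest-path distance between two consecutive fixes agrees with the distance travelled along the vehicle trajectory, and (b) the heading difference between the candidate link and the trajectory. The function names, weighting coefficients and the way these are combined with conventional proximity and heading weights are illustrative assumptions, not taken from the paper.

```python
import math

def shortest_path_weight(d_shortest, d_trajectory, eps=1e-9):
    """Weight favouring candidate links whose shortest-path distance between
    two consecutive fixes agrees with the distance travelled along the
    vehicle trajectory (a ratio close to 1 scores highest)."""
    return min(d_shortest, d_trajectory) / max(d_shortest, d_trajectory, eps)

def heading_weight(link_bearing_deg, trajectory_bearing_deg):
    """Weight based on the absolute heading difference between the candidate
    link and the vehicle trajectory (0 deg -> 1.0, 180 deg -> 0.0)."""
    diff = abs(link_bearing_deg - trajectory_bearing_deg) % 360.0
    diff = min(diff, 360.0 - diff)
    return math.cos(math.radians(diff) / 2.0) ** 2

def total_weight(w_proximity, w_heading_gps, d_shortest, d_trajectory,
                 link_bearing, traj_bearing, coeffs=(0.3, 0.2, 0.3, 0.2)):
    """Combine two conventional weights (proximity, GPS heading) with the two
    additional weights sketched here, using illustrative coefficients."""
    w_sp = shortest_path_weight(d_shortest, d_trajectory)
    w_hd = heading_weight(link_bearing, traj_bearing)
    a, b, c, d = coeffs
    return a * w_proximity + b * w_heading_gps + c * w_sp + d * w_hd

# Example: candidate link whose shortest-path distance (410 m) is close to the
# distance along the GPS trajectory (432 m) and roughly aligned with it.
print(total_weight(0.8, 0.7, 410.0, 432.0, 45.0, 52.0))
```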

Relevance:

100.00%

Publisher:

Abstract:

Most information retrieval (IR) models treat the presence of a term within a document as an indication that the document is somehow "about" that term; they do not take into account cases where a term is explicitly negated. Medical data, by its nature, contains a high frequency of negated terms - e.g. "review of systems showed no chest pain or shortness of breath". This paper presents a study of the effects of negation on information retrieval. We present a number of experiments to determine whether negation has a significant negative effect on IR performance and whether language models that take negation into account might improve performance. We use a collection of real medical records as our test corpus. Our findings are that negation has some effect on system performance, but this will likely be confined to domains such as medical data where negation is prevalent.
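The abstract does not describe its negation handling in detail; the sketch below shows one simple way an IR pipeline could separate affirmed from negated term occurrences, using a hypothetical cue-plus-window rule loosely inspired by NegEx-style heuristics. The cue list and window size are assumptions for illustration only.

```python
import re
from collections import Counter

NEGATION_CUES = {"no", "not", "without", "denies", "denied", "negative"}
WINDOW = 6  # number of tokens after a cue treated as negated (illustrative)

def count_terms(text):
    """Split token occurrences into affirmed and negated counts using a
    simple cue-plus-window rule."""
    tokens = re.findall(r"[a-z]+", text.lower())
    affirmed, negated = Counter(), Counter()
    negated_until = -1
    for i, tok in enumerate(tokens):
        if tok in NEGATION_CUES:
            negated_until = i + WINDOW
            continue
        (negated if i <= negated_until else affirmed)[tok] += 1
    return affirmed, negated

affirmed, negated = count_terms(
    "Review of systems showed no chest pain or shortness of breath. "
    "Patient reports mild headache.")
print(negated["breath"], affirmed["headache"])  # 1 1
```

A retrieval model could then index the two counters separately, or down-weight negated occurrences when estimating a document language model.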

Relevance:

100.00%

Publisher:

Abstract:

Acoustic emission (AE) is the phenomenon in which high-frequency stress waves are generated by the rapid release of energy within a material from sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface and subsequently analysing the recorded signals to gather information such as the nature and location of the source. It is one of several diagnostic techniques currently used for structural health monitoring (SHM) of civil infrastructure such as bridges. Its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity, but several challenges still exist. Owing to the high sampling rate required for data capture, a large amount of data is generated during AE testing. This is further complicated by the presence of a number of spurious sources that can produce AE signals which may mask the desired signals. Hence, an effective data analysis strategy is needed to achieve source discrimination; this also becomes important for long-term monitoring applications in order to avoid massive data overload. Analysis of the frequency content of recorded AE signals, together with the use of pattern recognition algorithms, is among the advanced and promising data analysis approaches for source discrimination. This paper explores the use of various signal processing tools for the analysis of experimental data, with the overall aim of finding an improved method for source identification and discrimination, with particular focus on the monitoring of steel bridges.
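One common way to summarise the frequency content of recorded AE waveforms for pattern recognition is to compute the fraction of spectral energy falling in a few frequency bands. The sketch below does this with a Welch periodogram; the band edges, sampling rate and synthetic burst are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import welch

def band_energy_features(signal, fs,
                         bands=((0, 100e3), (100e3, 300e3), (300e3, 500e3))):
    """Fraction of spectral energy in each frequency band of an AE waveform;
    the band edges here are illustrative only."""
    f, psd = welch(signal, fs=fs, nperseg=min(1024, len(signal)))
    total = psd.sum()
    return np.array([psd[(f >= lo) & (f < hi)].sum() / total
                     for lo, hi in bands])

# Synthetic example: a decaying 150 kHz burst sampled at 2 MHz.
fs = 2e6
t = np.arange(0, 1e-3, 1 / fs)
burst = np.exp(-5e3 * t) * np.sin(2 * np.pi * 150e3 * t)
print(band_energy_features(burst, fs))
```

Feature vectors of this kind could then be passed to a clustering or classification algorithm for source discrimination.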

Relevance:

100.00%

Publisher:

Abstract:

Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone of the face recognition community due to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content, while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known techniques: wavelets, Gabor/Log-Gabor filters, and the discrete cosine transform. Experimentation illustrates that frequency-domain partitioning prior to dimensionality reduction increases the information available for classification and greatly improves face recognition performance for both eigen-face and fisher-face approaches.
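A minimal sketch of frequency-domain partitioning prior to dimensionality reduction, using the 2-D DCT as the decomposition and PCA as the eigen-style subspace step. The split size, component counts and random stand-in images are assumptions for illustration; the paper's actual partitioning of wavelet, Gabor/Log-Gabor and DCT coefficients is not reproduced here.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

def dct_band_features(images, split=16):
    """Partition each face image's 2-D DCT into a low-frequency block
    (top-left split x split coefficients) and the remaining high-frequency
    coefficients, then reduce each partition separately with PCA."""
    low, high = [], []
    for img in images:
        c = dctn(img, norm='ortho')
        mask = np.zeros_like(c, dtype=bool)
        mask[:split, :split] = True
        low.append(c[mask])
        high.append(c[~mask])
    low, high = np.array(low), np.array(high)
    n = len(images)
    feats_low = PCA(n_components=min(8, n)).fit_transform(low)
    feats_high = PCA(n_components=min(8, n)).fit_transform(high)
    return np.hstack([feats_low, feats_high])

faces = np.random.rand(20, 64, 64)     # stand-in for aligned face images
print(dct_band_features(faces).shape)  # (20, 16)
```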

Relevance:

100.00%

Publisher:

Abstract:

Structural health monitoring (SHM) refers to the procedures used to assess the condition of structures so that their performance can be monitored and any damage detected early. Early detection of damage and appropriate retrofitting will help prevent failure of the structure, save money spent on maintenance or replacement, and ensure that the structure operates safely and efficiently throughout its intended life. Although visual inspection and other techniques, such as vibration-based methods, are available for SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option and its use is increasing. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves with sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate the source, its passive nature (no external energy needs to be supplied; the energy released by the damage source itself is utilised) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of its attractive features. In spite of these advantages, challenges remain in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of the AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the times of arrival and velocities of the AE signals recorded by a number of sensors. Complications arise, however, because AE waves can travel in a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study has proposed and tested the use of time-frequency analysis tools, such as the short-time Fourier transform, to identify the modes and the use of the velocities of these modes to achieve very accurate results. Further, this study has explored the possibility of reducing the number of sensors needed for data capture by using the velocities of the modes captured by a single sensor for source localization. A major problem in the practical use of the AE technique is the presence of AE sources other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence discrimination of signals to identify their sources is very important. This work developed a model that uses different signal processing tools, such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), to accurately differentiate signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications.
Although different damage quantification methods have been proposed for the AE technique, not all have achieved universal acceptance or been shown to be suitable for all situations. The b-value analysis, which involves studying the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis) were investigated for their suitability for damage quantification in ductile materials such as steel. They were found to give encouraging results for the analysis of laboratory data, thereby extending the possibility of their use for real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
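For the source-location step described above, the simplest case is a one-dimensional (zonal) location between two sensors once the wave mode, and hence its velocity, has been identified (for example from a short-time Fourier transform). A minimal sketch, with illustrative numbers only:

```python
def linear_source_location(dt, sensor_spacing, mode_velocity):
    """1-D AE source location between two sensors from the arrival-time
    difference dt = t_sensor1 - t_sensor2 and the velocity of the identified
    wave mode. Returns the distance of the source from sensor 1."""
    return 0.5 * (sensor_spacing + mode_velocity * dt)

# Illustrative numbers: sensors 2 m apart, a mode travelling at ~5300 m/s,
# signal reaching sensor 1 150 microseconds before sensor 2.
x = linear_source_location(dt=-150e-6, sensor_spacing=2.0, mode_velocity=5300.0)
print(round(x, 3), "m from sensor 1")
```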

Relevance:

100.00%

Publisher:

Abstract:

We employed a Hidden Markov Model (HMM) algorithm in loss of heterozygosity (LOH) analysis of high-density single nucleotide polymorphism (SNP) array data from the Non-Hodgkin’s lymphoma (NHL) entities follicular lymphoma (FL) and diffuse large B-cell lymphoma (DLBCL). This revealed a high frequency of LOH over the chromosomal region 11p11.2, containing the gene encoding the protein tyrosine phosphatase receptor type J (PTPRJ). Although PTPRJ regulates components of key survival pathways in B-cells (i.e., BCR, MAPK, and PI3K signaling), its role in B-cell development is poorly understood. LOH of PTPRJ has been described in several types of cancer but not in any hematological malignancy. Interestingly, FL cases with LOH exhibited down-regulation of PTPRJ; in contrast, no significant variation in expression was observed in DLBCLs. In addition, sequence screening in exons 5 and 13 of PTPRJ identified the G973A (rs2270993), T1054C (rs2270992), A1182C (rs1566734), and G2971C (rs4752904) coding SNPs (cSNPs). The A1182 allele was significantly more frequent in FLs and in NHLs with LOH. Significant over-representation of the C1054 (rs2270992) and C2971 (rs4752904) alleles was also observed in LOH cases. A haplotype analysis also revealed a significantly lower frequency of haplotype GTCG in NHL cases, but it was only detected in cases with retention. Conversely, haplotype GCAC was over-represented in cases with LOH. Altogether, these results indicate that inactivation of PTPRJ may be a common lymphomagenic mechanism in these NHL subtypes and that haplotypes in the PTPRJ gene may play a role in susceptibility to NHL by affecting the activation of PTPRJ in these B-cell lymphomas.
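As a generic illustration of how an HMM can flag LOH regions from SNP array data, the sketch below runs Viterbi decoding over a sequence of heterozygous/homozygous calls using a two-state (retention/LOH) model. All probabilities are invented for illustration and are not those of the algorithm used in the study.

```python
import numpy as np

# Hidden states: 0 = retention, 1 = LOH. Observations per SNP:
# 0 = homozygous call, 1 = heterozygous call. Probabilities are illustrative.
log_start = np.log([0.95, 0.05])
log_trans = np.log([[0.99, 0.01],
                    [0.05, 0.95]])
log_emit = np.log([[0.65, 0.35],    # retention: P(hom), P(het)
                   [0.99, 0.01]])   # LOH: het calls are rare (genotyping error)

def viterbi(obs):
    """Most likely retention/LOH state sequence for a vector of SNP calls."""
    n = len(obs)
    v = np.zeros((n, 2))
    ptr = np.zeros((n, 2), dtype=int)
    v[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, n):
        for s in range(2):
            scores = v[t - 1] + log_trans[:, s]
            ptr[t, s] = np.argmax(scores)
            v[t, s] = scores[ptr[t, s]] + log_emit[s, obs[t]]
    path = [int(np.argmax(v[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(ptr[t, path[-1]])
    return path[::-1]

# A long run of homozygous calls is decoded as a candidate LOH region (state 1).
snp_calls = [1, 0, 1, 1] + [0] * 24 + [1, 1, 0, 1]
print(viterbi(snp_calls))
```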

Relevance:

100.00%

Publisher:

Abstract:

In estuaries and natural water channels, the estimation of velocity and dispersion coefficients is critical to the understanding of scalar transport and mixing. Such estimates are rarely available experimentally at sub-tidal time scales in shallow water channels, where high-frequency sampling is required to capture their spatio-temporal variation. This study estimates Lagrangian integral scales and autocorrelation curves, which are key parameters for obtaining velocity fluctuations and dispersion coefficients, and their spatio-temporal variability from deployments of Lagrangian drifters sampled at 10 Hz for a 4-hour period. The power spectral densities of the velocities between 0.0001 and 0.8 Hz were well fitted by the -5/3 slope predicted by Kolmogorov’s similarity hypothesis within the inertial subrange, and were similar to the Eulerian power spectra previously observed within the estuary. The results showed that large velocity fluctuations determine the magnitude of the integral time scale, TL. Overlapping of short segments improved the stability of the estimate of TL by taking advantage of the redundant data included in the autocorrelation function. The integral time scales were about 20 s and varied by up to a factor of 8. These results are essential inputs for spatial binning of velocities, Lagrangian stochastic modelling and single-particle analysis of the tidal estuary.
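A minimal sketch of how an integral time scale can be estimated from a drifter velocity record: integrate the autocorrelation of the velocity fluctuations up to its first zero crossing, and average over overlapping segments for stability. The segment length, overlap and synthetic record below are illustrative assumptions, not the study's settings.

```python
import numpy as np

def integral_time_scale(u, fs):
    """Integral time scale: integrate the autocorrelation of the velocity
    fluctuations up to its first zero crossing."""
    up = u - np.mean(u)
    acf = np.correlate(up, up, mode='full')[len(up) - 1:]
    acf /= acf[0]
    zero = np.argmax(acf <= 0) if np.any(acf <= 0) else len(acf)
    return acf[:zero].sum() / fs   # rectangle-rule integration

def segment_averaged_TL(u, fs, seg_len_s=200.0, overlap=0.5):
    """Average T_L over overlapping segments (50% overlap by default) to
    stabilise the estimate."""
    n = int(seg_len_s * fs)
    step = max(1, int(n * (1.0 - overlap)))
    estimates = [integral_time_scale(u[i:i + n], fs)
                 for i in range(0, len(u) - n + 1, step)]
    return float(np.mean(estimates)), estimates

# Synthetic 10 Hz velocity record (1 hour) with a 20 s correlation time.
fs, n = 10.0, 3600 * 10
rng = np.random.default_rng(0)
alpha = np.exp(-1.0 / (20.0 * fs))   # AR(1) with a 20 s decay time
noise = rng.normal(size=n)
u = np.zeros(n)
for i in range(1, n):
    u[i] = alpha * u[i - 1] + noise[i]
print(segment_averaged_TL(u, fs)[0])
```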

Relevance:

100.00%

Publisher:

Abstract:

Word frequency (WF) and strength effects are two important phenomena associated with episodic memory. The former refers to the superior hit-rate (HR) for low-frequency (LF) compared to high-frequency (HF) words in recognition memory, while the latter describes the incremental effect(s) upon HRs associated with repeating an item at study. Using the "subsequent memory" method with event-related fMRI, we tested the attention-at-encoding (AE) [M. Glanzer, J.K. Adams, The mirror effect in recognition memory: data and theory, J. Exp. Psychol.: Learn Mem. Cogn. 16 (1990) 5-16] explanation of the WF effect. In addition to investigating encoding strength, we addressed whether study involves accessing prior representations of repeated items via the same mechanism as that at test [J.L. McClelland, M. Chappell, Familiarity breeds differentiation: a subjective-likelihood approach to the effects of experience in recognition memory, Psychol. Rev. 105 (1998) 724-760], entailing recollection [K.J. Malmberg, J.E. Holden, R.M. Shiffrin, Modeling the effects of repetitions, similarity, and normative word frequency on judgments of frequency and recognition memory, J. Exp. Psychol.: Learn Mem. Cogn. 30 (2004) 319-331], and whether less processing effort is entailed in encoding each repetition [M. Cary, L.M. Reder, A dual-process account of the list-length and strength-based mirror effects in recognition, J. Mem. Lang. 49 (2003) 231-248]. The increased BOLD responses observed in the left inferior prefrontal cortex (LIPC) for the WF effect provide support for an AE account. Less effort does appear to be required for encoding each repetition of an item, as reduced BOLD responses were observed in the LIPC and left lateral temporal cortex; both regions demonstrated increased responses in the conventional subsequent memory analysis. At test, a left lateral parietal BOLD response was observed for studied versus unstudied items, while only medial parietal activity was observed for repeated items at study, indicating that accessing prior representations at encoding does not necessarily occur via the same mechanism as that at test, and is unlikely to involve a conscious recall-like process such as recollection. This information may prove useful for constraining cognitive theories of episodic memory.

Relevance:

100.00%

Publisher:

Abstract:

The emergence of multiple satellite navigation systems, including BDS, Galileo, modernized GPS, and GLONASS, brings great opportunities and challenges for precise point positioning (PPP). We study the contributions of various GNSS combinations to PPP performance based on undifferenced or raw observations, in which the signal delays and ionospheric delays must be considered. A priori ionospheric knowledge, such as regional or global corrections, strengthens the estimation of the ionospheric delay parameters. The undifferenced models are generally more suitable for single-, dual-, or multi-frequency data processing for single or combined GNSS constellations. Another advantage over ionospheric-free PPP models is that undifferenced models avoid noise amplification by linear combinations. Extensive performance evaluations are conducted with multi-GNSS data sets collected from 105 MGEX stations in July 2014. Dual-frequency PPP results from each single constellation show that the convergence time of the undifferenced PPP solution is usually shorter than that of the ionospheric-free PPP solution, while the positioning accuracy of undifferenced PPP shows more improvement for the GLONASS system. In addition, the GLONASS undifferenced PPP results demonstrate performance advantages in high-latitude areas, while this impact is less obvious in the GPS/GLONASS combined configuration. The results also indicate that the BDS GEO satellites have negative impacts on undifferenced PPP performance given the current “poor” orbit and clock knowledge of the GEO satellites. More generally, the multi-GNSS undifferenced PPP results show improvements in convergence time of more than 60% for both the single- and dual-frequency PPP results, while the positioning accuracy after convergence shows no significant improvement for the dual-frequency PPP solutions, but an improvement of about 25% on average for the single-frequency PPP solutions.
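For reference, undifferenced (uncombined) PPP is usually written with the slant ionospheric delay kept as an estimated parameter rather than eliminated by the ionospheric-free combination. In generic notation (not necessarily that of the paper), the code and phase observations of receiver r to satellite s on frequency j are commonly modelled as:

```latex
\begin{aligned}
P_{r,j}^{s} &= \rho_{r}^{s} + c\,(\mathrm{d}t_{r} - \mathrm{d}t^{s}) + T_{r}^{s}
              + \mu_{j}\, I_{r,1}^{s} + d_{r,j} - d_{j}^{s} + \varepsilon_{P,j}, \\
L_{r,j}^{s} &= \rho_{r}^{s} + c\,(\mathrm{d}t_{r} - \mathrm{d}t^{s}) + T_{r}^{s}
              - \mu_{j}\, I_{r,1}^{s} + \lambda_{j} N_{r,j}^{s} + \varepsilon_{L,j},
\qquad \mu_{j} = \frac{f_{1}^{2}}{f_{j}^{2}}
\end{aligned}
```

Here ρ is the geometric range, dt_r and dt^s the receiver and satellite clock offsets, T the tropospheric delay, I the slant ionospheric delay on the first frequency scaled by μ_j, d the code biases, λ_j N the carrier-phase ambiguity term, and ε the measurement noise; the opposite signs on the ionospheric term for code and phase are what allow I to be estimated directly instead of being eliminated.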

Relevance:

90.00%

Publisher:

Abstract:

This thesis reports on the investigations, simulations and analyses of novel power electronics topologies and control strategies. The research is financed by an Australian Research Council (ARC) Linkage (07-09) grant. Therefore, in addition to developing original research and contributing to the available knowledge of power electronics, it also contributes to the design of a DC-DC converter for specific application to the auxiliary power supply in electric trains. Specifically, in this regard, it contributes to the design of a 7.5 kW DC-DC converter for the industrial partner (Schaffler and Associates Ltd) who supported this project. As the thesis is formatted as a ‘thesis by publication’, the contents are organized around published papers. The research has resulted in eleven papers, including seven peer-reviewed and published conference papers, one published journal paper, two journal papers accepted for publication and one submitted journal paper (provisionally accepted subject to a few changes). In this research, several novel DC-DC converter topologies are introduced, analysed, and tested. The common feature of all the topologies devised is their ‘current circulating’ switching state, which allows them to store some energy in the inductor as extra inductor current. The stored energy may be applied to enhance the performance of the converter in the event of load current or input voltage disturbances. In addition, when the load current alternates, the ability to store energy allows the converter to perform satisfactorily despite frequent and large variations in load current. In this research, the capability of current storage has been utilised to design topologies for specific applications, and the resulting enhancement of the performance of the considered applications has been illustrated. The simplest DC-DC converter topology that has a ‘current circulating’ switching state is the Positive Buck-Boost (PBB) converter (also known as the non-inverting Buck-Boost converter). Usually, the PBB converter operates as a Buck or a Boost converter in applications with widely varying input voltage or output reference voltage. For example, in electric railways (the application of our industrial partner), the overhead line voltage varies between 1000 VDC and 500 VDC, while the required regulated voltage is 600 VDC. In the course of this research, our industrial partner (Schaffler and Associates Ltd) industrialized a PBB converter (the ‘Mudo converter’) operating at 7.5 kW. Programming the onboard DSP and testing the PBB converter at experimental and nominal power and voltage levels was part of this research program. In the earlier stages of this research, the advantages and drawbacks of utilization of the ‘current circulating’ switching state in the positive Buck-Boost converter were investigated. In brief, the advantages were found to be robustness against input voltage and load current disturbances, and the drawback was extra conduction and switching loss. Although robustness against disturbances is desirable for many applications, the cost in energy loss must be minimized for the PBB converter to be an attractive option. In later stages of this research, two novel control strategies for different applications were devised to minimise the extra energy loss while fully utilising the advantages of the positive Buck-Boost converter.
The first strategy is the Smart Load Controller (SLC), for applications with pre-knowledge or predictability of input voltage and/or load current disturbances. A convenient example of such applications is electric/hybrid cars, where a master controller commands all changes in loads and voltage sources. The master controller therefore has pre-knowledge of the load and input voltage disturbances, so it can apply the SLC strategy to utilize the robustness of the PBB converter. Another strategy, aimed at minimising energy loss and maximising robustness in the face of disturbances, was developed to cover applications with unexpected disturbances. This strategy is named Dynamic Hysteresis Band (DHB), and it manipulates the hysteresis band height after the occurrence of a disturbance to reduce the dynamics of the output voltage. When no disturbance has occurred, the PBB converter works with minimum inductor current and minimum energy loss. New topologies based on the PBB converter have been introduced to address input voltage disturbances for different onboard applications. The research shows that the performance of symmetrical/asymmetrical multi-level diode-clamped inverters, DC networks, and linear-assisted RF amplifiers may be enhanced by the utilization of topologies based on the PBB converter. Multi-level diode-clamped inverters have the problem of DC-link voltage balancing when the power factor of their load approaches unity. This research has shown that this problem may be solved with a suitable multi-output DC-DC converter supplying the DC-link capacitors. Furthermore, multi-level diode-clamped inverters supplied with asymmetrical DC-link voltages may improve the quality of the load voltage and reduce the level of Electromagnetic Interference (EMI). Mathematical analyses and experiments on supplying symmetrical and asymmetrical multi-level inverters by specifically designed multi-output DC-DC converters have been reported in two journal papers. Another application in which system performance can be improved by utilization of the ‘current circulating’ switching state is linear-assisted RF amplifiers in communication receivers. The concept of ‘linear-assisted’ amplification is to divide the signal into two frequency domains: the low-frequency domain, which should be amplified by a switching circuit, and the high-frequency domain, which should be amplified by a linear amplifier. The objective is to minimize the overall power loss. This research suggests using the current storage capacity of a PBB-based converter to increase its bandwidth and to extend the domain handled by the switching converter. The PBB converter addresses the industrial demand for a DC-DC converter for the auxiliary power supply of a typical electric train. However, testing of the industrial prototype of the PBB converter revealed some voltage and current spikes caused by switching. To mitigate this problem without significantly increasing the switching loss, the idea of Active Gate Signalling (AGS) is presented. AGS proposes a smart gate driver that selectively controls the switching process to reduce voltage/current spikes without an unacceptable reduction in switching efficiency.
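To make the buck/boost operating range concrete, the sketch below computes ideal steady-state duty cycles for a positive (non-inverting) buck-boost converter regulating 600 VDC from an input swinging between the 500 VDC and 1000 VDC quoted above. It covers only the standard buck and boost modes; the extra ‘current circulating’ switching state that is central to the thesis is not modelled, and the numbers and mode-selection rule are illustrative assumptions.

```python
def pbb_duty_cycles(v_in, v_ref=600.0, d_max=0.95):
    """Ideal steady-state duty cycles of a positive (non-inverting) buck-boost
    converter: v_out = v_in * d1 / (1 - d2). Run as a plain buck (d2 = 0) when
    v_in >= v_ref and as a plain boost (d1 = 1) when v_in < v_ref."""
    if v_in >= v_ref:            # buck mode
        d1, d2 = v_ref / v_in, 0.0
    else:                        # boost mode
        d1, d2 = 1.0, min(1.0 - v_in / v_ref, d_max)
    return d1, d2

# Overhead-line voltage swinging between the extremes quoted in the abstract.
for v_in in (1000.0, 750.0, 600.0, 500.0):
    d1, d2 = pbb_duty_cycles(v_in)
    print(f"Vin={v_in:6.1f} V  d1={d1:.2f}  d2={d2:.2f}  "
          f"Vout={v_in * d1 / (1 - d2):.1f} V")
```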

Relevance:

90.00%

Publisher:

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.sec-1). Speed was slightly underestimated on the curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m). Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with its reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long-distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners’ individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections.
Group-level speed was highly predicted using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners’ speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, as gauged by a low root mean square error across subsections and gradients. Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall times. This suggests that for some runners the strategy of varying speed systematically to account for gradients and transitions may benefit race performance on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners’ speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption.
Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners’ range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to further improve the effectiveness of the suggested strategy.
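Of the two GPS speed estimates compared in the first study, the change-in-position method can be illustrated with a short sketch: successive fixes are converted to distances with the haversine formula and divided by the time step (the Doppler-shift speed, by contrast, is provided directly by the receiver). The coordinates and sampling pattern below are invented for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, r_earth=6371000.0):
    """Great-circle distance in metres between two GPS fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_earth * math.asin(math.sqrt(a))

def speeds_from_positions(fixes):
    """Speed (m/s) from the change in GPS position over time, for a list of
    (time_s, lat_deg, lon_deg) fixes sampled at e.g. 1 Hz."""
    return [haversine_m(la0, lo0, la1, lo1) / (t1 - t0)
            for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:])]

# Illustrative 1 Hz fixes from a runner moving roughly north at ~3.3 m/s.
fixes = [(0.0, -27.47710, 153.02800),
         (1.0, -27.47707, 153.02800),
         (2.0, -27.47704, 153.02801)]
print([round(v, 2) for v in speeds_from_positions(fixes)])
```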

Relevance:

90.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In the lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the outermost shell of the lattice, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of the reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of the reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
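The thesis estimates the generalized Gaussian shape parameter with a least-squares fit on a nonlinear function of that parameter; a simpler, standard alternative that illustrates the same modelling step is moment matching on the ratio (E|x|)² / E[x²]. The sketch below uses that ratio and is not the thesis's estimator.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

def ggd_ratio(beta):
    """M(beta) = Gamma(2/beta)^2 / (Gamma(1/beta) * Gamma(3/beta)),
    the moment ratio (E|x|)^2 / E[x^2] of a generalized Gaussian."""
    return np.exp(2 * gammaln(2 / beta) - gammaln(1 / beta) - gammaln(3 / beta))

def estimate_ggd_shape(coeffs):
    """Moment-matching estimate of the GGD shape parameter for a wavelet
    subband (an alternative to the least-squares fit described above)."""
    c = np.asarray(coeffs, dtype=float)
    r = np.mean(np.abs(c)) ** 2 / np.mean(c ** 2)
    return brentq(lambda b: ggd_ratio(b) - r, 0.05, 10.0)

# Sanity check: Laplacian-distributed coefficients should give beta close to 1.
rng = np.random.default_rng(1)
print(round(estimate_ggd_shape(rng.laplace(scale=2.0, size=200000)), 2))
```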

Relevance:

90.00%

Publisher:

Abstract:

Basic competencies in assessing and treating substance use disorders should be core to the training of any clinical psychologist, because of the high frequency of risky or problematic substance use in the community and its high co-occurrence with other problems. Skills in establishing trust and a therapeutic alliance are particularly important in addiction, given the stigma and potential for legal sanctions that surround it. The knowledge and skills of all clinical practitioners should be sufficient to allow valid screening and diagnosis of substance use disorders, accurate estimation of consumption and a basic functional analysis. Practitioners should also be able to undertake brief interventions, including motivational interviews, and appropriately apply generic interventions such as problem solving or goal setting to addiction. Furthermore, clinical psychologists should have an understanding of the nature, evidence base and indications for biochemical assays, pharmacotherapies and other medical treatments, and of the ways these can be integrated with psychological practice. Specialists in addiction should have more sophisticated competencies in each of these areas. They need to have a detailed understanding of current addiction theories and of basic and applied research, be able to undertake and report on a detailed psychological assessment, and display expert competence in addiction treatment. These skills should include an ability to assess and manage complex or co-occurring problems, to adapt interventions to the needs of different groups, and to assist people who have not responded to basic treatments. They should also be able to provide consultation to others, undertake evaluations of their practice, and monitor and evaluate emerging research data in the field.

Relevance:

90.00%

Publisher:

Abstract:

This paper assesses the capacity of high-frequency ultrasonic waves to detect changes in the proteoglycan (PG) content of articular cartilage. Fifty cartilage-on-bone samples were exposed to ultrasonic waves via an ultrasound transducer at a frequency of 20 MHz. Histology and ImageJ processing were conducted to determine the PG content of the specimens. The ratios of the reflected signals from the surface and the osteochondral junction (OCJ) were determined from the experimental data. The initial results show an inconsistency in the capacity of ultrasound to distinguish samples with severe proteoglycan loss (i.e. >90% PG loss) from normal intact samples. This lack of clear distinction was also observed for samples with less than 60% depletion, while there was a clear differentiation between the normal intact samples and those with 55-70% PG loss.
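A rough sketch of how a surface-to-OCJ reflection ratio might be extracted from a single A-scan: take the signal envelope, find the two strongest well-separated echoes, and divide their amplitudes. The sampling rate, thresholds and synthetic echoes below are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def echo_amplitude_ratio(a_scan, fs, min_separation_us=0.5):
    """Ratio of the (later) osteochondral-junction echo amplitude to the
    (earlier) surface echo amplitude in a single A-scan, using the signal
    envelope and the two largest well-separated peaks."""
    env = np.abs(hilbert(a_scan))
    dist = int(min_separation_us * 1e-6 * fs)
    peaks, props = find_peaks(env, height=0.05 * env.max(), distance=dist)
    if len(peaks) < 2:
        raise ValueError("could not find two echoes")
    strongest = peaks[np.argsort(props["peak_heights"])[-2:]]
    surface, ocj = np.sort(strongest)   # the earlier peak is the surface echo
    return env[ocj] / env[surface]

# Synthetic A-scan sampled at 100 MHz: surface echo at 1 us, weaker echo at 3 us.
fs = 100e6
t = np.arange(0, 5e-6, 1 / fs)
pulse = lambda t0, a: a * np.exp(-((t - t0) / 0.1e-6) ** 2) * np.sin(2 * np.pi * 20e6 * (t - t0))
a_scan = pulse(1e-6, 1.0) + pulse(3e-6, 0.35) \
         + 0.01 * np.random.default_rng(2).normal(size=t.size)
print(round(echo_amplitude_ratio(a_scan, fs), 2))
```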