936 results for swd: High Dynamic Range


Relevance: 100.00%

Abstract:

A high-definition video quality metric built from full-reference ratios. Video Quality Assessment (VQA) is one of the greatest unsolved challenges in the multimedia environment. Video quality has an enormous impact on the end user's (consumer's) perception of services based on the delivery of multimedia content, and it is therefore a key factor in the assessment of the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis underlying the measurement system. The most important are those that employ psychovisual models aimed at reproducing the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach, in which the quality computation is based on the extraction and comparison of intrinsic image features. Despite the advances achieved in this field in recent years, research on video quality metrics, whether operating with the full reference available (full-reference models), with part of it (reduced-reference models), or without it (no-reference models), still has a long way to go and many goals to reach. Among these, the measurement of high-definition signals, especially the very high quality signals used in the early stages of the value chain, is of special interest because of its influence on the final quality of the service, and no reliable measurement models currently exist for it.

This doctoral thesis presents a full-reference quality measurement model that we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic image features: the Fidelity Ratio, computed by means of the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image through local contrast filtering; the Sharpness Ratio, derived from the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity definition of the Haralick texture statistics set. PARMENIA is novel in its use of mathematical morphology and Haralick statistics as the basis of a quality metric, since these techniques have traditionally been tied to remote sensing and object segmentation. In addition, the formulation of the metric as a weighted set of ratios is equally novel, since it draws both on structural-similarity models and on more classical ones based on the perceptibility of the error produced by compression-related degradation of the signal. PARMENIA achieves a very high correlation with the MOS scores obtained from the subjective user tests carried out for its validation. The selected working corpus comes from internationally validated sets of sequences, so that the reported results are of the highest possible quality and rigor.

The methodology followed consisted of generating a set of test sequences of different qualities by encoding with different quantization steps, obtaining subjective scores for them through subjective quality tests (based on ITU-R Recommendation BT.500), and validating the metric by computing the correlation of PARMENIA with these subjective values, quantified through the Pearson correlation coefficient. Once the ratios had been validated and their influence on the final measure optimized, confirming its high correlation with perception, a second evaluation was carried out on sequences from the HDTV Test Dataset 1 of the Video Quality Experts Group (VQEG), the results of which show its clear advantages.

Abstract: Visual quality assessment has so far been one of the most intriguing challenges in the media environment. The progressive evolution towards higher resolutions together with increasing quality requirements (e.g. high definition and better image quality) calls for redefined quality measurement models. Given the growing interest in multimedia service delivery, perceptual quality measurement has become a very active area of research. First, this work introduces a classification of objective video quality metrics based on their underlying methodologies and approaches, to sum up the state of the art. This doctoral thesis then describes an enhanced solution for full-reference objective quality measurement based on mathematical morphology, texture features and visual similarity information, which provides a normalized metric that we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), with scores highly correlated with MOS.
The PARMENIA metric is based on the pooling of different quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and Haralick's contrast and homogeneity texture features. The metric's performance is excellent, and it improves on the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at very high bit rates whose quality is currently transparent to quality metrics. PARMENIA introduces a degree of novelty with respect to other working metrics: on the one hand, it exploits structural information variation to build the metric's kernel, but complements the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity-based approaches. We would like to point out that PARMENIA is the only metric built upon full-reference ratios, and the only one using mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it yields results with a wide dynamic range that allows measuring the quality of high-definition sequences from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5-6 Mbps) and even streaming rates (1-2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high quality HD materials. All these results come from a validation carried out on internationally validated datasets on which subjective tests based on the ITU-R BT.500 methodology had been performed. The Pearson correlation coefficient has been calculated to verify the accuracy of PARMENIA and its reliability.
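The two computational steps named above — pooling the four ratios into one score, and validating that score against MOS via the Pearson coefficient — can be sketched as follows. The weight values are hypothetical placeholders, since the abstract does not disclose the optimized weighting:

```python
import math

def parmenia_score(fidelity, similarity, sharpness, complexity,
                   weights=(0.4, 0.3, 0.2, 0.1)):
    # Weighted pooling of the four quality ratios into a single score.
    # These weights are illustrative; the thesis tunes them against MOS.
    ratios = (fidelity, similarity, sharpness, complexity)
    return sum(w * r for w, r in zip(weights, ratios))

def pearson(xs, ys):
    # Pearson correlation coefficient, the validation measure used in the thesis.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A metric scoring perfectly in line with subjective opinion would give a Pearson coefficient of 1.0 over the test corpus.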

Abstract:

The need to develop techniques for predicting the vibro-acoustic response of space structures has gained importance in recent years. Current numerical techniques can reliably predict the vibro-acoustic behaviour of systems with high or low modal densities. However, these two ranges do not always overlap, which makes it necessary to develop specific methods for the intermediate range, known as medium modal density. It is on this range, also known as the mid-frequency range, that this doctoral thesis focuses, given the lack of specific methods for computing the vibro-acoustic response there. For the structures studied in this work, the low and high modal density ranges correspond, in general, to the low and high frequency ranges, respectively. The numerical methods for obtaining the vibro-acoustic response in those frequency ranges are well established: deterministic techniques such as the Finite Element Method are used in the low frequency range, whereas statistical techniques such as Statistical Energy Analysis are more commonly used in the high frequency range. In the mid-frequency range neither of these numerical methods can be used with sufficient accuracy and, as a consequence, in the absence of more specific proposals, hybrid methods have been developed that combine low and high frequency methods, each attempting to compensate for the deficiencies of the other in this middle range. This work proposes two different solutions to the mid-frequency problem. The first, called SHFL (Subsystem based High Frequency Limit procedure), is a multi-hybrid procedure in which each substructure of the full system is modelled with a different numerical technique, depending on the frequency range under study.

To this end, the concept of the high frequency limit of a substructure is introduced, marking the limit above which that substructure has a modal density high enough to be modelled using Statistical Energy Analysis. If the analysis frequency is below the substructure's high frequency limit, the substructure is modelled using Finite Elements. With this method the mid-frequency range can be defined precisely, as the interval between the lowest and the highest of the high frequency limits of the substructures making up the full system. The results obtained by applying this method show an improvement in the continuity of the vibro-acoustic response, with a smooth transition between the low and high frequency ranges. The second proposed method is called HS-CMS (Hybrid Substructuring method based on Component Mode Synthesis). It is based on classifying the modal basis of the substructures into sets of global modes (affecting the whole system or several of its parts) and local modes (affecting a single substructure), using a Component Mode Synthesis method. In this way the modes of the full system can be located spatially and the system's behaviour studied from the point of view of the substructures. The concept of the high frequency limit of a substructure is again used to perform the global/local classification of its modes. From this classification the global equations of motion are derived, governed by the global modes, in which the influence of the set of local modes is introduced through corrections to those equations (to their dynamic stiffness matrix and force vector). The local equations are solved using Statistical Energy Analysis.

The latter, however, is a hybrid model in which the additional power contributed by the presence of the global modes is introduced. The method has been tested for computing the response of structures subjected to both structural and acoustic loads. Both methods were first tested on simple structures to establish the basis and hypotheses of their application. They were then applied to space structures, such as satellites and antenna reflectors, showing good results, as concluded from the comparison of the simulations with experimental data measured in both structural and acoustic tests. This work opens a broad field of research from which accurate and efficient methodologies can be derived to reproduce the vibro-acoustic behaviour of systems in the mid-frequency range.

Abstract: Over the last years an increasing need for novel prediction techniques for the vibro-acoustic analysis of space structures has arisen. Current numerical techniques are able to predict with sufficient accuracy the vibro-acoustic behaviour of systems with low or high modal densities. However, space structures are, in general, very complex, and they present a range of frequencies in which mixed behaviour exists. In such cases the full system is composed of some substructures with low modal density, while others present high modal density. This frequency range is known as the mid-frequency range, and developing methods to accurately describe the vibro-acoustic response in this range is the scope of this dissertation. For the structures under study, the aforementioned low and high modal densities correspond to the low and high frequency ranges, respectively.
For the low frequency range, deterministic techniques such as the Finite Element Method (FEM) are used while, for the high frequency range, statistical techniques such as Statistical Energy Analysis (SEA) are considered more appropriate. In the mid-frequency range, where mixed vibro-acoustic behaviour is expected, neither of these numerical methods can be used with a sufficient confidence level. As a consequence, an undetermined gap between the low and high frequencies usually appears in the vibro-acoustic response function. This dissertation proposes two different solutions to the mid-frequency range problem. The first, named the Subsystem based High Frequency Limit (SHFL) procedure, is a multi-hybrid procedure in which each substructure of the full system is modelled with the appropriate modelling technique, depending on the frequency of study. With this purpose, the concept of the high frequency limit of a substructure is introduced, marking the limit above which a substructure has enough modal density to be modelled by SEA. For a given analysis frequency, if it is lower than the high frequency limit of the substructure, the substructure is modelled through FEM; if it is higher, the substructure is modelled by SEA. The procedure leads to the set of hybrid models required to cover the mid-frequency range, which is defined as the frequency range between the lowest substructure high frequency limit and the highest one. Using this procedure, the mid-frequency range can be defined precisely and, as a consequence, an improvement in the continuity of the vibro-acoustic response function is achieved, closing the undetermined gap between the low and high frequency ranges. The second proposed mid-frequency solution is the Hybrid Substructuring method based on Component Mode Synthesis (HS-CMS).

The method adopts a partition scheme based on classifying the system modal basis into global and local sets of modes. This classification is performed using Component Mode Synthesis, in particular a Craig-Bampton transformation, in order to express the system modal basis in terms of the modal bases associated with each substructure. Each substructure modal basis is then classified into a global and a local set, the first associated with long-wavelength motion and the second with short-wavelength motion. The high frequency limit of each substructure is used as the frequency frontier between the two sets of modes. From this classification, the equations of motion associated with the global modes are derived, which include the interaction of local modes by means of corrections to the dynamic stiffness matrix and the force vector of the global problem. The local equations of motion are solved through SEA, where again interactions with global modes are included through an additional input power into the SEA model. The method has been tested for the calculation of the response function of structures subjected to structural and acoustic loads. Both methods were first tested on simple structures to establish their basis and main characteristics. They were also verified on space structures, such as satellites and antenna reflectors, providing good results, as concluded from the comparison with experimental results obtained in both acoustic and structural load tests. This dissertation opens a wide field of research through which further studies could be performed to obtain efficient and accurate methodologies that appropriately reproduce the vibro-acoustic behaviour of complex systems in the mid-frequency range.
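The SHFL model-selection rule described above (FEM below a substructure's high frequency limit, SEA above it, with the mid-frequency band spanning the lowest to the highest limit) can be sketched as follows. The substructure names and limit values are hypothetical examples, not data from the dissertation:

```python
def select_models(substructures, analysis_freq_hz):
    # Assign FEM below a substructure's high frequency limit, SEA above it.
    return {name: ("SEA" if analysis_freq_hz > hf_limit else "FEM")
            for name, hf_limit in substructures.items()}

# Hypothetical high frequency limits (Hz) for three substructures.
panels = {"solar_panel": 250.0, "central_tube": 900.0, "reflector": 400.0}

# The mid-frequency range spans the lowest to the highest high frequency limit.
mid_band = (min(panels.values()), max(panels.values()))
```

Sweeping the analysis frequency across `mid_band` produces the family of hybrid FEM/SEA models the procedure requires.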

Abstract:

Optical fibre strain sensors using Fibre Bragg Gratings (FBGs) are poised to play a major role in structural health monitoring in a variety of applications from aerospace to civil engineering. At the heart of the technology is the optoelectronic instrumentation required to convert optical signals into measurands. Users are demanding compact, lightweight, rugged and low-cost solutions. This paper describes the development of a new device based on a blazed FBG and a CCD array that can potentially meet the above demands. We have shown that this very low cost technique may be used to interrogate a WDM array of sensor gratings with highly accurate and highly repeatable results unaffected by the polarisation state of the radiation. In this paper, we present results showing that sensors may be interrogated with an RMS error of 1.7 pm, drift below 0.12 pm and a dynamic range of up to 65 nm.
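The picometre-level wavelength figures above translate directly into strain resolution. The abstract does not state the conversion the paper uses; a standard first-order FBG strain relation, with an assumed photo-elastic coefficient for silica fibre, looks like this:

```python
def strain_from_shift(delta_lambda_nm, bragg_lambda_nm, p_e=0.22):
    # First-order FBG relation: delta_lambda / lambda = (1 - p_e) * strain,
    # where p_e ~ 0.22 is a typical photo-elastic coefficient for silica fibre
    # (an assumed value; the paper does not state the one it uses).
    return delta_lambda_nm / (bragg_lambda_nm * (1.0 - p_e))

# Under these assumptions, a 1.7 pm RMS wavelength error at a 1550 nm Bragg
# wavelength corresponds to roughly 1.4 microstrain of RMS strain uncertainty.
rms_strain = strain_from_shift(1.7e-3, 1550.0)
```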

Abstract:

This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs perturbation-based calibration, which exploits the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm2, while the estimated area of the calibration circuits is 0.03 mm2. The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS and implemented by Pingli Huang. This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12-mm2 silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm2 and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
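For context, an ideal SAR conversion is a per-bit binary search against DAC-generated thresholds; the calibrations above exist because capacitor mismatch in a real DAC perturbs exactly those thresholds. A minimal sketch of the ideal loop (an illustration, not the dissertation's circuit):

```python
def sar_convert(vin, vref, n_bits=12):
    # Ideal SAR successive-approximation loop: one bit decision per cycle.
    # Capacitor mismatch would shift the tested thresholds; the digital
    # background calibrations described above estimate and remove that error.
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)                 # tentatively set this bit
        if vin >= vref * trial / (1 << n_bits):   # comparator decision
            code = trial                          # keep the bit
    return code
```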

Abstract:

Noise and vibration in complex ship structures are becoming a prominent issue for the shipbuilding industry and ship companies, due to the constant demand for faster ships of lighter weight and the industry's stringent noise and vibration regulations. In order to retain the full benefit of building faster ships without compromising too much on ride comfort and safety, noise and vibration control needs to be implemented. Due to the complexity of ship structures, the coupling of different wave types and multiple wave propagation paths, active control of global hull modes is difficult to implement and very expensive. Traditional passive control, such as adding damping materials, is only effective in the high frequency range. However, the most severe damage to ship structures is caused by large structural deformation of hull structures and high dynamic stress concentration at low frequencies. Most of the discomfort and fatigue of passengers and crew onboard ships is also due to low frequency noise and vibration. Innovative approaches are therefore required to attenuate noise and vibration at low frequencies. This book was developed from several specialized research topics on vibration and vibration control of ship structures, mostly from the author's own PhD work at the University of Western Australia. The book aims to provide a better understanding of the vibration characteristics of ribbed plate structures and plate/plate coupled structures, and of the mechanisms governing wave propagation and attenuation in periodic and irregular ribbed structures as well as in complex ship structures. The book is designed to be a reference for ship builders, vibro-acoustic engineers and researchers. The author also hopes that it can stimulate more exciting future work in this area of research, and it is his humble desire that it be of some use to those who purchase it. This book is divided into eight chapters.

Each chapter focuses on providing a solution to a particular vibration problem of ship structures. A brief summary of each chapter is given in the general introduction. The chapters are interdependent, forming an integrated volume on the subject of vibration and vibration control of ship structures and the like. I am indebted to many people for the completion of this work. In particular, I would like to thank Professor J. Pan, Dr N.H. Farag, Dr K. Sum and many others from the University of Western Australia for useful advice and help during my time at the University and beyond. I would also like to thank my wife, Miaoling Wang, and my children, Anita, Sophia and Angela Lin, for their sacrifice and continuing support in making this work possible. Financial support from the Australian Research Council, the Australian Defense Science and Technology Organization and Strategic Marine Pty Ltd in Western Australia is gratefully acknowledged.

Abstract:

The rapid growth of mobile telephone use, satellite services, and now the wireless Internet and WLANs is generating tremendous changes in telecommunications and networking. As indoor wireless communications become more prevalent, modeling indoor radio wave propagation in populated environments is a topic of significant interest. Wireless MIMO communication exploits phenomena such as multipath propagation to increase data throughput and range, or to reduce bit error rates, rather than attempting to eliminate the effects of multipath propagation as traditional SISO communication systems seek to do. The MIMO approach can yield significant gains in both link and network capacity, with no additional transmit power or bandwidth consumption compared to conventional single-array diversity methods. When MIMO and OFDM systems are combined and deployed in a suitably rich scattering environment, such as indoors, a significant capacity gain can be observed thanks to the assured multipath propagation. Channel variations can occur as a result of the movement of personnel, industrial machinery, vehicles and other equipment within the indoor environment. The time-varying effects on the propagation channel in populated indoor environments depend on the pedestrian traffic conditions and the particular type of environment considered. A systematic measurement campaign to study pedestrian movement effects in indoor MIMO-OFDM channels has not yet been fully undertaken. Measuring channel variations caused by the relative positioning of pedestrians is essential in the study of indoor MIMO-OFDM broadband wireless networks. Theoretically, due to high multipath scattering, an increase in MIMO-OFDM channel capacity is expected when pedestrians are present.

However, measurements indicate that some reduction in channel capacity can be observed as the number of pedestrians approaches 10, owing to a reduction in multipath as more human bodies absorb the wireless signals. This dissertation presents a systematic characterization of the effects of pedestrians in indoor MIMO-OFDM channels. Measurement results, using the MIMO-OFDM channel sounder developed at the CSIRO ICT Centre, have been validated by a customized geometric-optics-based ray tracing simulation. Based on measured and simulated MIMO-OFDM channel capacity and capacity dynamic range, an improved deterministic model for MIMO-OFDM channels in indoor populated environments is presented. The model can be used for the design and analysis of future WLANs to be deployed in indoor environments. The results obtained show that, under deterministic conditions in both the Fixed SNR and Fixed Tx cases, the channel capacity dynamic range rose with the number of pedestrians as well as with the number of antenna combinations. In random scenarios with 10 pedestrians, an increase in channel capacity of up to 0.89 bits/sec/Hz for Fixed SNR and up to 1.52 bits/sec/Hz for Fixed Tx has been recorded compared with the one-pedestrian scenario. In addition, a maximum increase in average channel capacity of 49% has been measured when 4 antenna elements are used instead of 2. The highest measured average capacity, 11.75 bits/sec/Hz, corresponds to the 4x4 array with 10 pedestrians moving randomly. Moreover, the spread between the highest and lowest values of the dynamic range is larger for Fixed Tx (predicted 5.5 bits/sec/Hz, measured 1.5 bits/sec/Hz) than for the Fixed SNR criterion (predicted 1.5 bits/sec/Hz, measured 0.7 bits/sec/Hz). This has been confirmed by both measurements and simulations with 1 to 5, 7 and 10 pedestrians.
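The capacity figures quoted above (in bits/sec/Hz) come from the standard MIMO channel capacity formula, C = log2 det(I + (SNR/Nt) H Hᴴ), evaluated over the measured channel matrices. A minimal 2x2 sketch follows; the channel matrices used below are hypothetical examples, not measured data:

```python
import math

def mimo_capacity_2x2(h, snr_linear):
    # Capacity of a 2x2 MIMO channel with equal power per transmit antenna:
    #   C = log2 det(I + (SNR/Nt) * H * H^H)   [bits/s/Hz],  Nt = 2.
    # h is a 2x2 matrix of (possibly complex) channel gains.
    nt = 2
    # G = (SNR/Nt) * H * H^H
    g = [[sum(h[i][k] * h[j][k].conjugate() for k in range(2)) * snr_linear / nt
          for j in range(2)] for i in range(2)]
    # det(I + G) for a 2x2 matrix
    det = (1 + g[0][0]) * (1 + g[1][1]) - g[0][1] * g[1][0]
    return math.log2(abs(det))
```

Richer scattering (a better-conditioned H) raises the determinant and hence the capacity, which is why pedestrian-induced multipath can increase throughput until body absorption dominates.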

Abstract:

Prostate cancer (CaP) is the second leading cause of cancer-related deaths in North American males and the most common newly diagnosed cancer in men worldwide. Biomarkers are widely used both for early detection and in prognostic tests for cancer. The current, commonly used biomarker for CaP is serum prostate specific antigen (PSA). However, the specificity of this biomarker is low, as its serum level is increased not only in CaP but also in various other diseases, and varies with age and even body mass index. Human body fluids provide an excellent resource for the discovery of biomarkers, with the advantage over tissue/biopsy samples of ease of access, due to the less invasive nature of collection. However, their analysis presents challenges in terms of variability and validation. Blood and urine are the two human body fluids most commonly used for CaP research, but their proteomic analyses are limited by the large dynamic range of protein abundance, which makes detection of low-abundance proteins difficult, and, in the case of urine, by the high salt concentration. To overcome these challenges, different techniques for the removal of high-abundance proteins and the enrichment of low-abundance proteins are used; their applications and limitations are discussed in this review. A number of innovative proteomic techniques have improved the detection of biomarkers. They include two-dimensional difference gel electrophoresis (2D-DIGE), quantitative mass spectrometry (MS) and functional proteomic studies, i.e., investigating the association of post-translational modifications (PTMs) such as phosphorylation, glycosylation and protein degradation. The recent development of quantitative MS techniques such as stable isotope labeling with amino acids in cell culture (SILAC), isobaric tags for relative and absolute quantitation (iTRAQ) and multiple reaction monitoring (MRM) has allowed proteomic researchers to quantitatively compare data from different samples. 2D-DIGE has greatly improved the statistical power of classical 2D gel analysis by introducing an internal control. This chapter aims to review novel CaP biomarkers and to discuss current trends in biomarker research from two angles: the source of biomarkers (particularly human body fluids such as blood and urine), and emerging proteomic approaches for biomarker research.

Abstract:

Located at the intersection of two vulnerable groups in the contemporary labour market, young people who migrate as refugees during adolescence face a unique constellation of opportunities and challenges that shape their employment trajectories. Yet the tendency for research to focus on the early years of refugee settlement means that we have an inadequate understanding of the factors that mediate their employment decisions, experiences and outcomes. Based on interviews with 51 young people, this article explores how aspirations, responsibilities, family, education and networks are understood to influence the employment trajectories of adolescent refugee migrants. While this article draws attention to the complex and dynamic range of challenges and constraints that these young people negotiate in the pursuit of satisfying and sustainable employment, what also emerges is an optimistic and determined cohort who, even as they at times unsuccessfully prepare for and navigate the labour market, maintain high hopes for a better life.

Abstract:

Positron emission tomography (PET) is an imaging technique in which radioactive positron-emitting tracers are used to study biochemical and physiological functions in humans and in animal experiments. The use of PET imaging has increased rapidly in recent years, as have special requirements in the fields of neurology and oncology for the development of syntheses for new, more specific and selective radiotracers. Synthesis development and automation are necessary when high amounts of radioactivity are needed for multiple PET studies. In addition, preclinical studies using experimental animal models are necessary for evaluating the suitability of new PET tracers for humans. For purification and analysing the labelled end-product, an effective radioanalytical method combined with an optimal radioactivity detection technique is of great importance. In this study, a fluorine-18 labelling synthesis method for two tracers was developed and optimized, and the usefulness of these tracers for possible prospective human studies was evaluated. N-(3-[18F]fluoropropyl)-2β-carbomethoxy-3β-(4-fluorophenyl)nortropane ([18F]β-CFT-FP) is a candidate PET tracer for the dopamine transporter (DAT), and 1H-1-(3-[18F]fluoro-2-hydroxypropyl)-2-nitroimidazole ([18F]FMISO) is a well-known hypoxia marker for hypoxic but viable cells in tumours. The methodological aim of this thesis was to evaluate the status of thin-layer chromatography (TLC) combined with proper radioactivity detection measurement systems as a radioanalytical method. Three different detection methods of radioactivity were compared: radioactivity scanning, film autoradiography, and digital photostimulated luminescence (PSL) autoradiography. The fluorine-18 labelling synthesis for [18F]β-CFT-FP was developed and carbon-11 labelled [11C]β-CFT-FP was used to study the specificity of β-CFT-FP for the DAT sites in human post-mortem brain slices. 
These in vitro studies showed that β-CFT-FP binds to the caudate-putamen, an area rich in DAT. The synthesis of fluorine-18 labelled [18F]FMISO was optimized, and the tracer was prepared using an automated system with good and reproducible yields. In preclinical studies, the effect of the radiation sensitizer estramustine phosphate on radiation treatment and on the uptake of [18F]FMISO was evaluated, with results of great importance for later human studies. The methodological part of this thesis showed that radioTLC is the method of choice when combined with an appropriate radioactivity detection technique. Digital PSL autoradiography proved the most appropriate compared with the radioactivity scanning and film autoradiography methods. The very high sensitivity, good resolution, and wide dynamic range of digital PSL autoradiography are its advantages in the detection of β-emitting radiolabelled substances.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The prognosis of patients with glioblastoma, the most malignant adult glial brain tumor, remains poor in spite of advances in treatment procedures, including surgical resection, irradiation and chemotherapy. The genetic heterogeneity of glioblastoma warrants extensive studies in order to gain a thorough understanding of the biology of this tumor. While there have been several studies of global transcript profiling of glioma, with the identification of gene signatures for diagnosis and disease management, translation to the clinic has yet to happen. Serum biomarkers have the potential to revolutionize the process of cancer diagnosis, grading, prognostication and treatment-response monitoring. Besides the advantage that serum can be obtained through a less invasive procedure, it contains molecules whose concentrations span an extraordinary dynamic range of ten orders of magnitude. While conventional methods, such as 2DE, have been in use for many years, the ability to identify proteins through mass spectrometry techniques such as MALDI-TOF led to an explosion of interest in proteomics. Relatively new high-throughput proteomics methods such as SELDI-TOF and protein microarrays are expected to hasten the process of serum biomarker discovery. This review highlights recent advances in proteomics platforms for discovering serum biomarkers and the current status of glioma serum markers. We aim to provide the principles and potential of the latest proteomic approaches and their applications in the biomarker discovery process. Besides providing a comprehensive list of available serum biomarkers of glioma, we also propose how these markers could revolutionize the clinical management of glioma patients.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

We present a low-power gas sensor system on a CMOS platform consisting of a micromachined polysilicon microheater, a temperature controller circuit, a resistance readout circuit and an SnO2 transducer film. The design criteria for the different building blocks of the system are elaborated. The microheaters are optimized for temperature uniformity as well as static and dynamic response. The electrical equivalent model of the microheater is derived by extracting its thermal and mechanical poles through extensive laser Doppler vibrometer measurements. The temperature controller and readout circuit are realized in 130 nm CMOS technology. The temperature controller re-uses the heater as a temperature sensor and controls the duty cycle of the waveform driving the gate of the power MOSFET that supplies the heater current. The readout circuit, with subthreshold operation of the MOSFETs, is based on resistance-to-time-period conversion followed by a frequency-to-digital converter. Subthreshold operation of the MOSFETs, coupled with a sub-ranging technique, achieves ultra-low power consumption with more than five orders of magnitude of dynamic range. The RF-sputtered SnO2 film is optimized for its microstructure to achieve high sensitivity for sensing LPG.
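The resistance-to-time-period readout described above can be pictured with a small sketch. All component values, gate windows and the oscillator constant below are illustrative assumptions, not figures from the paper; the point is how sub-ranging (switching the counting gate) keeps the digital count usable across roughly five decades of transducer resistance.

```python
# Hypothetical sketch of a resistance-to-time-period readout with
# sub-ranging. C_REF, K and the gate windows are assumed values.

C_REF = 10e-12   # assumed reference capacitor, 10 pF

def period_from_resistance(r_ohm: float) -> float:
    """Oscillation period of an idealized RC relaxation stage, T = K*R*C."""
    K = 1.4      # ~2*ln(2), a symmetric relaxation oscillator (assumed)
    return K * r_ohm * C_REF

def digitize(period_s: float, gate_s: float) -> int:
    """Frequency-to-digital step: count oscillation cycles in a gate window."""
    return int(gate_s / period_s)

def read(r_ohm: float) -> tuple[int, float]:
    """Sub-ranging: use a longer gate for slow (high-resistance) oscillations."""
    t = period_from_resistance(r_ohm)
    gate = 1e-3 if t < 1e-4 else 1e-1   # assumed two-range scheme
    return digitize(t, gate), gate

for r in (1e3, 1e5, 1e8):   # five decades of SnO2 film resistance
    count, gate = read(r)
    print(f"R = {r:.0e} ohm -> count = {count}, gate = {gate*1e3:.0f} ms")
```

Without the gate switch, the count at the high-resistance end would collapse to single digits; with it, every range yields a count with usable resolution.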

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The structure and operation of CdTe, CdZnTe and Si pixel detectors based on crystalline semiconductors, bump bonding and CMOS technology, developed mainly at Oy Simage Ltd. and Oy Ajat Ltd., Finland, for X- and gamma-ray imaging are presented. This detector technology evolved from the development of Si strip detectors at the Finnish Research Institute for High Energy Physics (SEFT), which later merged with other physics research units to form the Helsinki Institute of Physics (HIP). General issues of X-ray imaging are discussed, such as the benefits of direct conversion of X-rays to signal charge compared with the indirect method, and the pros and cons of photon counting vs. charge integration. A novel design of Si and CdTe pixel detectors and the analysis of their imaging performance in terms of SNR, MTF, DQE and dynamic range are presented in detail. The analysis shows that directly converting crystalline semiconductor pixel detectors operated in charge-integration mode can be used for X-ray imaging very close to the theoretical performance limits in terms of efficiency and resolution. Examples of the application of the developed imaging technology to dental intraoral, panoramic and real-time X-ray imaging are given. A CdTe photon-counting gamma imager is introduced. A physical model to calculate the photopeak efficiency of photon-counting CdTe pixel detectors is developed and described in detail. Simulation results indicate that the charge-sharing phenomenon, due to diffusion of signal charge carriers, limits the pixel size of photon-counting detectors to about 250 μm. Radiation-hardness issues related to gamma- and X-ray imaging detectors are discussed.
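Of the figures of merit listed above, DQE ties the others together: at zero spatial frequency it is the ratio of output to input signal-to-noise ratio squared, with the input SNR set by Poisson photon statistics. A minimal sketch of that definition, using made-up example numbers rather than the thesis's measurements:

```python
import math

def dqe(n_photons: float, snr_out: float) -> float:
    """Detective quantum efficiency at zero spatial frequency:
    DQE = SNR_out^2 / SNR_in^2, with Poisson-limited input SNR."""
    snr_in = math.sqrt(n_photons)   # sigma = sqrt(N) for Poisson arrivals
    return (snr_out / snr_in) ** 2

# e.g. 10,000 incident photons and a measured output SNR of 80:
print(round(dqe(10_000, 80.0), 2))   # -> 0.64
```

An ideal detector would preserve the input SNR of sqrt(10,000) = 100 and reach DQE = 1; here the detector's added noise lowers the output SNR to 80, so only 64% of the incident quanta are effectively used.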

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This paper focuses on a new high-frequency (HF) link dc-to-three-phase-ac power converter. The lowest switching-device count among HF link dc-to-three-phase-ac converters, improved power density due to the absence of devices with bidirectional voltage-blocking capability, simple commutation requirements, and isolation between input and output are the integral features of this topology. The commutation process of the converter requires zero portions in the link voltage, which cause a nonlinear distortion in the output three-phase voltages. A mathematical analysis is carried out to investigate the problem, and a suitable compensation of the modulating signal is proposed for different types of carrier. Along with the modified modulator structure, a synchronously rotating reference-frame-based control scheme is adopted for the three-phase ac side in order to achieve high dynamic performance. The effectiveness of the proposed scheme has been investigated and verified through computer simulations and experimental results with a 1-kVA prototype.
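One way to picture the compensation idea, as a simplification of the paper's carrier-specific analysis rather than its exact derivation: if the link voltage is held at zero for a fraction d_z of each carrier period to allow commutation, the synthesized average voltage is scaled by (1 - d_z), so pre-scaling the modulating signal by 1/(1 - d_z) restores the intended fundamental, provided the result stays within the linear modulation range.

```python
def compensate(m: float, d_z: float) -> float:
    """Pre-scale modulating signal m to offset zero-voltage commutation gaps.

    d_z is the fraction of the carrier period where the link voltage is
    forced to zero (illustrative parameter; the paper derives a separate
    correction for each carrier type).
    """
    assert 0.0 <= d_z < 1.0
    m_comp = m / (1.0 - d_z)
    assert abs(m_comp) <= 1.0, "compensated signal left the linear range"
    return m_comp

# e.g. 5% of the carrier period lost to commutation:
print(round(compensate(0.90, 0.05), 4))   # -> 0.9474
```

The second assertion captures the practical limit: compensation eats into modulation headroom, so a converter operating near m = 1 cannot fully cancel the zero-portion distortion this way.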

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Field emission from carbon nanotubes (CNTs) in the form of arrays or thin films gives rise to several strongly correlated processes of electromechanical interaction and degradation. Such processes are mainly due to (1) electron-phonon interaction, (2) an electromechanical force field leading to stretching of the CNTs, and (3) ballistic-transport-induced thermal spikes, coupled with high dynamic stress, leading to degradation of emission performance at the device scale. Fairly detailed physics-based models of CNTs considering aspects (1) and (2) above have already been developed by these authors, and numerical results indicate good agreement with experimental results. What is missing in such a system-level modeling approach is the incorporation of structural defects, vacancies and charge impurities. This is a practical and important problem, since degradation of field emission performance is indeed observed in experimental I-V curves. What is not clear from these experiments is whether such degradation in the I-V response is due to dynamic reorientation of the CNTs, to the defects, or to both effects combined. Non-equilibrium Green's function based simulations using a tight-binding Hamiltonian for a single CNT segment reveal localization of the carrier density at various locations along the CNT. About an 11% decrease in the drive current, with a steady difference in the drain current over the 0.2-0.4 V gate-voltage range, was reported in the literature when a negative charge impurity was introduced at various locations of the CNT over a length of ~20 nm. In the context of field emission from CNT tips, a simplistic estimate of defects has been introduced as a correction factor in the Fowler-Nordheim formula. However, a more detailed physics-based treatment is required, while at the same time device-scale simulation is necessary. The novelty of our present approach is the following.
We employ a concept of effective stiffness degradation for segments of CNTs, attributable to structural defects, and subsequently incorporate vacancy defects and charge-impurity effects in the Green's function based approach. The field-emission current-voltage characteristics of a vertically aligned CNT array on a Cu-Cr substrate are then simulated using a detailed nonlinear mechanistic model of CNTs coupled with quantum hydrodynamics. An array of 10 vertically aligned CNTs, each 12 μm long, is considered for the device-scale analysis. Defect regions are introduced randomly over the CNT length. The results show a decrease in the longitudinal strain due to defects. Contrary to the expected influence of purely mechanical degradation, this result indicates that charge impurities, and hence weaker transport, can lead to a different electromechanical force field, which ultimately can reduce the strain. However, there can be significant fluctuation in this strain field due to electron-phonon coupling. The effect of such fluctuations (with defects) is clearly evident in the field-emission current history. The average current also decreases significantly due to such defects.
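The Fowler-Nordheim relation with a multiplicative correction factor, the simplistic defect treatment mentioned above, can be sketched as follows. The FN coefficients are the standard textbook values; the applied field, field-enhancement factor, work function and defect factor are illustrative assumptions, not parameters from this work.

```python
import math

A_FN = 1.54e-6   # A eV V^-2, standard Fowler-Nordheim pre-factor
B_FN = 6.83e9    # eV^-3/2 V m^-1, standard FN exponent coefficient

def fn_current_density(e_applied: float, beta: float, phi_ev: float,
                       defect_factor: float = 1.0) -> float:
    """Fowler-Nordheim current density (A/m^2) with a defect correction.

    e_applied     : macroscopic applied field (V/m)
    beta          : geometric field enhancement at the CNT tip (assumed)
    phi_ev        : work function in eV (~5 eV is typical for CNTs)
    defect_factor : < 1 scales emission down for a defective emitter
    """
    f_local = beta * e_applied   # local field at the emitting tip
    return (defect_factor * A_FN * f_local**2 / phi_ev
            * math.exp(-B_FN * phi_ev**1.5 / f_local))

# Pristine vs. defective tip at the same applied field (assumed values):
j_ok = fn_current_density(5e6, beta=1000.0, phi_ev=5.0)
j_bad = fn_current_density(5e6, beta=1000.0, phi_ev=5.0, defect_factor=0.89)
print(f"emission degradation: {100 * (1 - j_bad / j_ok):.0f}%")
```

Because the correction enters only as a constant multiplier, it rescales the whole I-V curve without changing its field dependence; this is precisely why the text argues that a more detailed physics-based treatment of defects is needed.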