938 results for planets and satellites: fundamental parameters


Relevance:

100.00%

Publisher:

Abstract:

The emissions, filtration and oxidation characteristics of a diesel oxidation catalyst (DOC) and a catalyzed particulate filter (CPF) in a Johnson Matthey catalyzed continuously regenerating trap (CCRT®) were studied by using computational models. Experimental data needed to calibrate the models were obtained by characterization experiments with raw exhaust sampling from a Cummins ISM 2002 engine with variable geometry turbocharging (VGT) and programmed exhaust gas recirculation (EGR). The experiments were performed at 20, 40, 60 and 75% of full load (1120 Nm) at rated speed (2100 rpm), with and without the DOC upstream of the CPF. This was done to study the effect of temperature and CPF-inlet NO2 concentrations on particulate matter oxidation in the CCRT®. A previously developed computational model was used to determine the kinetic parameters describing the oxidation characteristics of HCs, CO and NO in the DOC and the pressure drop across it. The model was calibrated at five temperatures in the range of 280–465 °C and exhaust volumetric flow rates of 0.447–0.843 act-m3/sec. The downstream HCs, CO and NO concentrations were predicted by the DOC model to within ±3 ppm. The HCs and CO oxidation kinetics in the temperature range of 280–465 °C and the exhaust volumetric flow rate range of 0.447–0.843 act-m3/sec can be represented by one 'apparent' activation energy and pre-exponential factor. The NO oxidation kinetics in the same temperature and exhaust flow rate range can be represented by 'apparent' activation energies and pre-exponential factors in two regimes. The DOC pressure drop was always predicted within 0.5 kPa by the model. The MTU 1-D 2-layer CPF model was enhanced in several ways to better model the performance of the CCRT®. A model to simulate the oxidation of particulate inside the filter wall was developed.
A particulate cake layer filtration model which describes particle filtration in terms of more fundamental parameters was developed and coupled to the wall oxidation model. To better model the particulate oxidation kinetics, a model taking into account the NO2 produced in the washcoat of the CPF was developed. The overall 1-D 2-layer model can be used to predict the pressure drop of the exhaust gas across the filter, the evolution of particulate mass inside the filter, the particulate mass oxidized, the filtration efficiency and the particle number distribution downstream of the CPF. The model was used to better understand the internal performance of the CCRT®, by determining the components of the total pressure drop across the filter, by classifying the total particulate matter into layer I, layer II and the filter wall, and by the means of oxidation, i.e., by O2, by NO2 entering the filter and by NO2 produced in the filter. The CPF model was calibrated at four temperatures in the range of 280–465 °C and exhaust volumetric flow rates of 0.447–0.843 act-m3/sec, in CPF-only and CCRT® (DOC+CPF) configurations. The clean filter wall permeability was determined to be 2.00E-13 m2, which is in agreement with values in the literature for cordierite filters. The particulate packing density in the filter wall had values between 2.92 and 3.95 kg/m3 for all the loads. The mean pore size of the catalyst-loaded filter wall was found to be 11.0 µm. The particulate cake packing densities and permeabilities ranged from 131 to 134 kg/m3 and from 0.42E-14 to 2.00E-14 m2, respectively, and are in agreement with the Peclet number correlations in the literature. Particulate cake layer porosities determined from the particulate cake layer filtration model decreased with load from 0.841 to 0.814, which is about 0.1 lower than values from experiments and from more complex discrete particle simulations in the literature. The thickness of layer I was kept constant at 20 µm.
The model kinetics in the CPF-only and CCRT® configurations showed that no 'catalyst effect' with O2 was present. The kinetic parameters for the NO2-assisted oxidation of particulate in the CPF were determined from the simulation of transient temperature-programmed oxidation data in the literature. It was determined that the thermal and NO2 kinetic parameters do not change with temperature, exhaust flow rate or NO2 concentrations. However, different kinetic parameters are used for particulate oxidation in the wall and on the wall. Model results showed that oxidation of particulate in the pores of the filter wall can cause disproportionate decreases in the filter pressure drop with respect to particulate mass. The wall oxidation model and the particulate cake filtration model were developed to model the sudden and rapid decreases in pressure drop across the CPF. The combined particulate cake and wall filtration models result in higher particulate filtration efficiencies than the wall filtration model alone, with overall filtration efficiencies of 98-99% being predicted by the model. The pre-exponential factors for oxidation by NO2 did not change with temperature or NO2 concentrations because of the NO2 wall production model. In both CPF-only and CCRT® configurations, the model showed NO2 and layer I to be the dominant means and the dominant physical location of particulate oxidation, respectively. However, at temperatures of 280 °C, NO2 is not a significant oxidizer of particulate matter, which is in agreement with studies in the literature. The model showed that 8.6 and 81.6% of the CPF-inlet particulate matter was oxidized after 5 hours at 20 and 75% load in the CCRT® configuration. In the CPF-only configuration at the same loads, the model showed that after 5 hours, 4.4 and 64.8% of the inlet particulate matter was oxidized.
The increase in NO2 concentrations across the DOC contributes significantly to the oxidation of particulate in the CPF and is supplemented by the oxidation of NO to NO2 by the catalyst in the CPF, which further increases the particulate oxidation rates. From the model, it was determined that the catalyst in the CPF modestly increases the particulate oxidation rates, in the range of 4.5–8.3%, in the CCRT® configuration. Hence, the catalyst loading in the CPF of the CCRT® could possibly be reduced without significantly decreasing particulate oxidation rates, leading to catalyst cost savings and better engine performance due to lower exhaust backpressure.
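The 'apparent' activation energies and pre-exponential factors described above come from an Arrhenius treatment of the oxidation rates. As an illustration only (not the MTU model itself), the following sketch recovers such parameters from rate data by a linear fit of ln k against 1/T; the rate values are synthetic numbers generated for the 280–465 °C range:

```python
import numpy as np

R = 8.314  # J/(mol K), universal gas constant

def fit_arrhenius(T_kelvin, k_obs):
    """Fit an 'apparent' activation energy Ea and pre-exponential
    factor A to rate constants via ln k = ln A - Ea/(R*T)."""
    x = 1.0 / np.asarray(T_kelvin, dtype=float)
    y = np.log(np.asarray(k_obs, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)  # linear least squares
    Ea = -slope * R        # J/mol
    A = np.exp(intercept)  # same units as k
    return Ea, A

# Synthetic example: rates generated with Ea = 80 kJ/mol, A = 1e6
T = np.array([553.0, 593.0, 653.0, 698.0, 738.0])  # ~280-465 C in kelvin
k = 1e6 * np.exp(-80e3 / (R * T))
Ea, A = fit_arrhenius(T, k)
```

Fitting in ln k vs. 1/T coordinates turns the nonlinear Arrhenius law into a one-line linear regression, which is why a single 'apparent' (Ea, A) pair can summarize a whole temperature range.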


BACKGROUND: Volume resuscitation is one of the primary therapeutic goals in hemorrhagic shock, but data on the microcirculatory effects of different colloidal fluid resuscitation regimens are sparse. We investigated sublingual mucosal microcirculatory parameters during hemorrhage and after fluid resuscitation with gelatin, hydroxyethyl starch, or hypertonic saline plus hydroxyethyl starch in pigs. METHODS: To induce hemorrhagic shock, 60% of the calculated blood volume was withdrawn. Microvascular blood flow was assessed by laser Doppler velocimetry. Microcirculatory hemoglobin oxygen saturation was measured with tissue reflectance spectrophotometry, and sidestream dark field imaging was used to visualize the microcirculation and to quantify flow quality. Systemic hemodynamic variables, systemic acid-base and blood gas variables, and lactate measurements were recorded. Measurements were performed at baseline, after hemorrhage, and after fluid resuscitation with a fixed-volume regimen. RESULTS: Systemic hemodynamic parameters returned to or even exceeded baseline values in all three groups after fluid resuscitation, with significantly higher filling pressures and cardiac output values in animals treated with isotonic colloids. Microcirculatory parameters were restored after treatment in gelatin- and hydroxyethyl starch-resuscitated animals; in animals treated with hypertonic saline and hydroxyethyl starch, almost all parameters except microvascular hemoglobin oxygen saturation were restored. DISCUSSION: Hemorrhaged pigs can be hemodynamically stabilized with either isotonic or hypertonic colloidal fluids.
The main finding is an adequate restoration of sublingual microcirculatory blood flow and flow quality in all three study groups; however, only gelatin and hydroxyethyl starch improved microvascular hemoglobin oxygen saturation, indicating an inadequate oxygen supply/demand ratio in the hypertonic group, possibly due to the better restoration of systemic hemodynamics in the isotonic colloidal resuscitated animals.


To study the time course of demineralization and fracture incidence after spinal cord injury (SCI), 100 paraplegic men with complete motor loss were investigated in a cross-sectional study 3 months to 30 years after their traumatic SCI. Fracture history was assessed and verified using patients' files and X-rays. BMD of the lumbar spine (LS), femoral neck (FN), distal forearm (ultradistal part = UDR, 1/3 distal part = 1/3R), distal tibial diaphysis (TDIA), and distal tibial epiphysis (TEPI) was measured using DXA. Stiffness of the calcaneus (QUI.CALC), speed of sound of the tibia (SOS.TIB), and amplitude-dependent SOS across the proximal phalanges (adSOS.PHAL) were measured using quantitative ultrasound (QUS). Z-scores of BMD and QUS were plotted against time-since-injury and compared among four groups of paraplegics stratified according to time-since-injury (<1 year, stratum I; 1-9 years, stratum II; 10-19 years, stratum III; 20-29 years, stratum IV). Biochemical markers of bone turnover (deoxypyridinoline/creatinine (D-pyr/Cr), osteocalcin, alkaline phosphatase) and the main parameters of calcium phosphate metabolism were measured. Fifteen out of 98 paraplegics had sustained a total of 39 fragility fractures within 1,010 patient-years of observation. All recorded fractures were fractures of the lower limbs, the mean time to first fracture being 8.9 +/- 1.4 years. Fracture incidence increased with time-after-SCI, from 1% in the first 12 months to 4.6%/year in paraplegics injured >20 years (p < .01). The overall fracture incidence was 2.2%/year. Compared with nonfractured paraplegics, those with a fracture history had been injured for a longer time (p < .01). Furthermore, they had lower Z-scores at FN, TEPI, and TDIA (p < .01 to < .0001), the largest difference being observed at TDIA. At the lower limbs, BMD decreased with time at all sites (r = .49 to .78, all p < .0001).
At FN and TEPI, bone loss followed a log curve which leveled off between 1 and 3 years after injury. In contrast, Z-scores of TDIA continuously decreased, even beyond 10 years after injury. The LS BMD Z-score increased with time-since-SCI (p < .05). Like DXA, QUS allowed differentiation of the early and rapid trabecular bone loss (QUI.CALC) from the slow and continuous cortical bone loss (SOS.TIB). Biochemical markers reflected a disproportion between highly elevated bone resorption and almost normal bone formation early after injury. Turnover declined following a log curve with time-after-SCI; however, D-pyr/Cr remained elevated in 30% of paraplegics injured >10 years. In paraplegic men, early (trabecular) and persistent (cortical) bone loss occurs at the lower limbs and leads to an increasing fracture incidence with time-after-SCI.
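The bone-loss curve that levels off 1–3 years after injury can be illustrated with a simple plateau model. The sketch below fits a monoexponential decline toward a plateau to synthetic Z-score data; the functional form, time constant and plateau value are illustrative assumptions, not the study's actual fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def plateau_decline(t, z_inf, tau):
    """Z-score declining toward a plateau z_inf with time constant tau
    (years) -- a stand-in for the leveling-off curve in the abstract."""
    return z_inf * (1.0 - np.exp(-t / tau))

# Synthetic Z-score data: plateau at -2.5 reached with tau = 1.5 years
t_years = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 10.0, 20.0])
z = plateau_decline(t_years, -2.5, 1.5)

# Recover the plateau and time constant by nonlinear least squares
(z_inf, tau), _ = curve_fit(plateau_decline, t_years, z, p0=(-1.0, 1.0))
```

A model of this shape captures the trabecular pattern (rapid early loss, then plateau); the continuous cortical loss at TDIA would instead need a term that keeps decreasing with time.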


The contribution of Starlette, Stella, and AJISAI is currently neglected when defining the International Terrestrial Reference Frame, despite long time series of precise SLR observations and a huge amount of available data. The inferior accuracy of the orbits of low orbiting geodetic satellites is the main reason for this neglect. The Analysis Centers of the International Laser Ranging Service (ILRS ACs) are, however, considering the inclusion of low orbiting geodetic satellites in the standard ILRS products, which are currently based on the LAGEOS and Etalon satellites, in place of the sparsely observed and thus virtually negligible Etalons. We process ten years of SLR observations to Starlette, Stella, AJISAI, and LAGEOS and assess the impact of these Low Earth Orbiting (LEO) SLR satellites on the SLR-derived parameters. We study different orbit parameterizations, in particular different arc lengths and the impact of pseudo-stochastic pulses and dynamical orbit parameters on the quality of the solutions. We found that the repeatability of the East and North components of station coordinates, the quality of polar coordinates, and the scale estimates of the reference frame are improved when combining LAGEOS with low orbiting SLR satellites. In the multi-SLR solutions, the scale and the Z component of the geocenter coordinates are less affected by deficiencies in solar radiation pressure modeling than in the LAGEOS-1/2 solutions, due to substantially reduced correlations between the Z geocenter coordinate and empirical orbit parameters. Finally, we found that the standard values of the center-of-mass corrections (CoM) for geodetic LEO satellites are not valid for the currently operating SLR systems. 
The variations of station-dependent differential range biases reach 52 and 25 mm for AJISAI and Starlette/Stella, respectively, which is why estimating station-dependent range biases, or using station-dependent CoM values instead of one value for all SLR stations, is strongly recommended. This clearly indicates that the ILRS effort to produce CoM corrections for each satellite, which are site-specific and depend on the system characteristics at the time of tracking, is very important and needs to be implemented in the SLR data analysis.
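The recommended station-dependent range bias estimation can be sketched in its most reduced form: averaging SLR residuals per station. In the real analysis the biases are estimated inside the full orbit and parameter adjustment; the station IDs and residual values below are made up for illustration:

```python
import numpy as np

def station_range_biases(residuals_mm, station_ids):
    """Estimate a station-dependent differential range bias as the mean
    SLR residual per station -- a simplified stand-in for estimating
    bias parameters in the full least-squares adjustment."""
    residuals_mm = np.asarray(residuals_mm, dtype=float)
    station_ids = np.asarray(station_ids)
    return {s: residuals_mm[station_ids == s].mean()
            for s in np.unique(station_ids)}

# Hypothetical residuals (mm) from two stations tracking AJISAI
res = [51.0, 53.0, 24.0, 26.0, 52.0]
sta = ["7090", "7090", "7840", "7840", "7090"]
biases = station_range_biases(res, sta)
```

A per-station value absorbs the site-specific part of the CoM error that a single satellite-wide correction cannot represent.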


PLATO 2.0 has recently been selected for ESA's M3 launch opportunity (2022/24). Providing accurate key planet parameters (radius, mass, density and age) in statistical numbers, it addresses fundamental questions such as: How do planetary systems form and evolve? Are there other systems with planets like ours, including potentially habitable planets? The PLATO 2.0 instrument consists of 34 small-aperture telescopes (32 with 25 s readout cadence and 2 with 2.5 s cadence) providing a wide field of view (2,232 deg²) and a large photometric magnitude range (4-16 mag). It focuses on bright (4-11 mag) stars in wide fields to detect and characterize planets down to Earth-size by photometric transits, whose masses can then be determined by ground-based radial-velocity follow-up measurements. Asteroseismology will be performed for these bright stars to obtain highly accurate stellar parameters, including masses and ages. The combination of bright targets and asteroseismology results in high accuracy for the bulk planet parameters: 2%, 4-10% and 10% for planet radii, masses and ages, respectively. The planned baseline observing strategy includes two long pointings (2-3 years) to detect and bulk-characterize planets reaching into the habitable zone (HZ) of solar-like stars and an additional step-and-stare phase to cover in total about 50% of the sky. PLATO 2.0 will observe up to 1,000,000 stars and detect and characterize hundreds of small planets, and thousands of planets in the Neptune to gas giant regime out to the HZ. It will therefore provide the first large-scale catalogue of bulk-characterized planets with accurate radii, masses, mean densities and ages. This catalogue will include terrestrial planets at intermediate orbital distances, where surface temperatures are moderate. Coverage of this parameter range with statistical numbers of bulk-characterized planets is unique to PLATO 2.0.
The PLATO 2.0 catalogue will allow us, for example, to:
- complete our knowledge of planet diversity for low-mass objects,
- correlate the planet mean density-orbital distance distribution with predictions from planet formation theories,
- constrain the influence of planet migration and scattering on the architecture of multiple systems, and
- specify how planet and system parameters change with host star characteristics, such as type, metallicity and age.
The catalogue will allow us to study planets and planetary systems at different evolutionary phases. It will further provide a census of small, low-mass planets. This will serve to identify objects which retained their primordial hydrogen atmosphere and, in general, the typical characteristics of planets in such a low-mass, low-density range. Planets detected by PLATO 2.0 will orbit bright stars, and many of them will be targets for future atmospheric spectroscopy. Furthermore, the mission has the potential to detect exomoons, planetary rings, and binary and Trojan planets. The planetary science possible with PLATO 2.0 is complemented by its impact on stellar and galactic science via asteroseismology, as well as by light curves of all kinds of variable stars, together with observations of stellar clusters of different ages. This will allow us to improve stellar models and study stellar activity. A large number of well-determined ages of red giant stars will probe the structure and evolution of our Galaxy. Asteroseismic ages of bright stars at different phases of stellar evolution will allow the calibration of stellar age-rotation relationships. Together with the results of ESA's Gaia mission, the results of PLATO 2.0 will provide a huge legacy to planetary, stellar and galactic science.
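The radius accuracy quoted above rests on the fact that the photometric transit depth is (Rp/Rs)², with the stellar radius supplied by asteroseismology. A minimal sketch of that conversion, ignoring limb darkening:

```python
import math

R_SUN_KM = 695700.0   # IAU nominal solar radius
R_EARTH_KM = 6371.0   # mean Earth radius

def planet_radius_from_depth(depth, stellar_radius_rsun):
    """Planet radius in Earth radii from the photometric transit depth
    depth = (Rp/Rs)^2, ignoring limb darkening."""
    rp_km = math.sqrt(depth) * stellar_radius_rsun * R_SUN_KM
    return rp_km / R_EARTH_KM

# An Earth-size planet transiting a Sun-like star:
depth = (R_EARTH_KM / R_SUN_KM) ** 2   # ~8.4e-5, i.e. ~84 ppm
rp = planet_radius_from_depth(depth, 1.0)
```

Since the relative radius error is half the relative depth error plus the stellar radius error, the ~2% planet radii require both very precise photometry and asteroseismic stellar radii.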


The upconversion quantum yield (UCQY) is one of the most significant parameters of upconverter materials. A high UCQY is essential for a successful integration of upconversion in many applications, such as harvesting of solar radiation. However, little is known about which doping level of the rare-earth ions yields the highest UCQY in the different host lattices and what the underlying causes are. Here, we investigate which Er3+ doping yields the highest UCQY in the host lattices β-NaYF4 and Gd2O2S under 4I15/2 → 4I13/2 excitation. We show for both host lattices that the optimum Er3+ doping is not fixed: it actually decreases as the irradiance of the excitation increases. To find the optimum Er3+ doping for a given irradiance, we determined the peak position of the internal UCQY as a function of the average Er–Er distance. For this purpose, we used a fit to experimental data, where the average Er–Er distance was calculated from the Er3+ doping of the upconverter samples and the lattice parameters of the host materials. We observe optimum average Er–Er distances for the host lattices β-NaYF4 and Gd2O2S that differ by <14% at the same irradiance levels, whereas the optimum Er3+ doping is around 2× higher for β-NaYF4 than for Gd2O2S. Extrapolation to higher irradiances indicates that the optimum average Er–Er distance converges to values around 0.88 and 0.83 nm for β-NaYF4 and Gd2O2S, respectively. Our findings point to a fundamental relationship, and focusing on the average distance between the active rare-earth ions might be a very efficient way to optimize the doping of rare-earth ions with regard to the highest achievable UCQY.
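The average Er–Er distance used as the optimization variable can be estimated from the doping fraction and the density of substitutable cation sites. A simple cube-root-of-volume-per-ion sketch; the site density below is an illustrative placeholder, not the actual value from the β-NaYF4 or Gd2O2S lattice parameters:

```python
def average_er_distance_nm(er_fraction, cation_sites_per_nm3):
    """Average Er-Er distance (nm) from the Er3+ doping fraction and
    the density of substitutable cation sites, estimated as the cube
    root of the volume per Er ion."""
    n_er = er_fraction * cation_sites_per_nm3  # Er ions per nm^3
    return n_er ** (-1.0 / 3.0)

# Hypothetical site density (per nm^3); the real number follows from
# the host lattice unit cell.
sites = 14.0
d = average_er_distance_nm(0.25, sites)   # 25% Er doping
```

Because the distance scales as (doping)^(-1/3), host lattices with different site densities need different doping levels to reach the same optimum Er–Er distance, consistent with the ~2× doping difference reported above.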


Gravity field parameters are usually determined from observations of the GRACE satellite mission together with arc-specific parameters in a generalized orbit determination process. When separating the estimation of gravity field parameters from the determination of the satellites’ orbits, correlations between orbit parameters and gravity field coefficients are ignored and the latter parameters are biased towards the a priori force model. We are thus confronted with a kind of hidden regularization. To decipher the underlying mechanisms, the Celestial Mechanics Approach is complemented by tools to modify the impact of the pseudo-stochastic arc-specific parameters on the normal equations level and to efficiently generate ensembles of solutions. By introducing a time variable a priori model and solving for hourly pseudo-stochastic accelerations, a significant reduction of noisy striping in the monthly solutions can be achieved. Setting up more frequent pseudo-stochastic parameters results in a further reduction of the noise, but also in a notable damping of the observed geophysical signals. To quantify the effect of the a priori model on the monthly solutions, the process of fixing the orbit parameters is replaced by an equivalent introduction of special pseudo-observations, i.e., by explicit regularization. The contribution of the thereby introduced a priori information is determined by a contribution analysis. The presented mechanism is valid universally. It may be used to separate any subset of parameters by pseudo-observations of a special design and to quantify the damage imposed on the solution.
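The equivalence between fixing orbit parameters and explicit regularization by pseudo-observations can be illustrated at the normal-equation level. A toy sketch (not the Celestial Mechanics Approach itself), where zero-valued pseudo-observations of a chosen weight pull a weakly determined parameter toward its a priori value:

```python
import numpy as np

def solve_with_pseudo_obs(N, b, idx, weight):
    """Solve the normal equations N x = b after adding zero-valued
    pseudo-observations of the given weight on the parameters in idx,
    i.e. explicit regularization of that parameter subset."""
    N_reg = N.copy()
    for i in idx:
        N_reg[i, i] += weight   # pseudo-observation x_i = 0
    return np.linalg.solve(N_reg, b)

# Toy system: two well-determined parameters, one weakly determined
N = np.diag([100.0, 100.0, 1e-4])
b = np.array([100.0, 200.0, 1e-4])
x_free = np.linalg.solve(N, b)                 # weak parameter -> 1.0
x_reg = solve_with_pseudo_obs(N, b, [2], 1.0)  # pulled toward a priori 0
```

The well-determined parameters are essentially untouched, while the weak one is damped toward the a priori model, which is the hidden-regularization mechanism the abstract makes explicit.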


Quarks were introduced 50 years ago, opening the road towards our understanding of the elementary constituents of matter and their fundamental interactions. Since then, spectacular progress has been made, with important discoveries that led to the establishment of the Standard Theory, which accurately describes the basic constituents of observable matter, namely quarks and leptons, interacting through the exchange of three fundamental forces: the weak, electromagnetic and strong force. Particle physics is now entering a new era driven by the quest to understand the composition of our Universe, such as the unobservable (dark) matter, the hierarchy of masses and forces, the unification of all fundamental interactions with gravity in a consistent quantum framework, and several other important questions. A candidate theory providing answers to many of these questions is string theory, which replaces the notion of point particles by extended objects, such as closed and open strings. In this short note, I give a brief overview of string unification, describe in particular how quarks and leptons can emerge, and discuss possible predictions for particle physics and cosmology that could test these ideas.



In the last 20 years, directed shark and ray fisheries have increased alarmingly everywhere in the world. For most species, though, no data on growth rate, mortality, fecundity and other life history aspects exist as of now, and management of the fisheries is therefore insufficient. There are also still methodological difficulties in the age determination of elasmobranch fishes, a fact which complicates the investigation of growth parameters. This study tried to identify the best ageing methods and estimate growth parameters for ten skate species of the genus Bathyraja, all occurring in the southwest Atlantic at depths of 50 m and more. A total of 720 samples were collected on board Argentine research vessels between 2003 and 2005. Crystal violet and a new staining method using potassium permanganate, both applied to sagittal sections of vertebral centra, proved to be most effective in enhancing the banding pattern in most of the species. Thorns were also tested, and readings were consistent with the ones made on vertebral sections. Growth parameters could be derived for six species, and estimates could be made for the other four. Growth rate as well as asymptotic length varied between species, with those attaining bigger sizes having lower growth rates. No latitudinal differences in growth rate could be detected, but a comparison with samples from other studies showed that total lengths were always reported to be higher around the Malvinas Islands.
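Growth parameters of the kind estimated here are conventionally obtained by fitting the von Bertalanffy growth function to length-at-age data from vertebral band counts. A sketch with synthetic data; the parameter values are illustrative, not results from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(age, l_inf, k, t0):
    """Von Bertalanffy growth function: length at age, with L_inf the
    asymptotic length, k the growth coefficient and t0 the theoretical
    age at zero length."""
    return l_inf * (1.0 - np.exp(-k * (age - t0)))

# Synthetic ages (years, from band counts) and total lengths (cm)
ages = np.arange(1, 16, dtype=float)
lengths = von_bertalanffy(ages, 90.0, 0.15, -0.5)

# Recover (L_inf, k, t0) by nonlinear least squares
(l_inf, k, t0), _ = curve_fit(von_bertalanffy, ages, lengths,
                              p0=(100.0, 0.1, 0.0))
```

The inverse relationship reported above (larger species, lower growth rates) corresponds to the familiar trade-off between L_inf and k in this model.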


In the linear world, classical microwave circuit design relies on s-parameters, due to their capability to successfully characterize the behavior of any linear circuit. The direct use of s-parameters in measurement systems and in linear simulation tools has facilitated their success and extensive use in the design and characterization of microwave circuits and subsystems. Nevertheless, despite the great success of s-parameters in the microwave community, the main drawback of this formulation is its limitation in predicting the behavior of real non-linear systems. Nowadays, the challenge for microwave designers is the development of an analogous framework that integrates non-linear modeling, large-signal measurement hardware and non-linear simulation environments, in order to extend s-parameter capabilities to the non-linear regime and thus provide the infrastructure for non-linear design and test in a reliable and efficient way. Recently, different attempts to provide this common platform have been introduced, such as the Cardiff approach and the Agilent X-parameters. Hence, this Thesis first analyzes the capability of X-parameters to provide this non-linear design and test framework in a CAD-based oscillator context for microwave transceivers. 
Furthermore, the classical analysis and design of linear microwave transistor-based circuits rests on simple analytical approaches, involving the transistor s-parameters, that quickly provide an analytical solution for the input/output transistor loading conditions needed to meet the design specifications for gain, output power, efficiency or input/output matching, as well as analytically determining fundamental design parameters such as the stability factor or the power gain contours. Hence, the development of similar analytical design tools that extend these small-signal s-parameter capabilities to non-linear applications is a new challenge faced in the present work. Therefore, the development of an analytical design methodology based on load-independent X-parameters, playing for non-linear circuits the role that s-parameters play in linear microwave circuit design, constitutes the core of this Thesis. These analytical non-linear design approaches would significantly improve current large-signal design procedures and considerably reduce the required design time, yielding much more efficient techniques.
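One of the small-signal design parameters mentioned above, the stability factor, is computed directly from the transistor s-parameters. A sketch of the classical Rollett K-factor test; the s-parameter values are illustrative, not measurements of any particular device:

```python
def stability_factor(s11, s12, s21, s22):
    """Rollett stability factor K from small-signal s-parameters.
    K > 1 together with |Delta| < 1 indicates unconditional stability."""
    delta = s11 * s22 - s12 * s21
    K = (1.0 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) \
        / (2.0 * abs(s12 * s21))
    return K, abs(delta)

# Illustrative transistor s-parameters at one frequency
K, mag_delta = stability_factor(0.6 - 0.2j, 0.05 + 0.02j,
                                2.5 + 1.0j, 0.5 - 0.1j)
```

It is exactly this kind of closed-form, s-parameter-based design quantity that the Thesis seeks to reproduce in the large-signal regime with X-parameters.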


From a vibrationally corrected 3D potential energy surface determined with highly correlated ab initio calculations (CCSD(T)), the lowest vibrational energies of two dimethyl-ether isotopologues, 12CH3–16O–12CD3 (DME-d3) and 12CD3–16O–12CD3 (DME-d6), are computed variationally. The levels that can be populated at very low temperatures correspond to the COC-bending and the two methyl torsional modes. Molecular symmetry groups are used for the classification of levels and torsional splittings. DME-d6 belongs to the G36 group, as the most abundant isotopologue 12CH3–16O–12CH3 (DME-h6), while DME-d3 is a G18 species. Previous assignments of experimental Raman and far-infrared spectra are discussed from an effective Hamiltonian obtained after refining the ab initio parameters. Because a good agreement between calculated and experimental transition frequencies is reached, new assignments are proposed for various combination bands corresponding to the two deuterated isotopologues and for the 020 → 030 transition of DME-d6. Vibrationally corrected potential energy barriers, structural parameters, and anharmonic spectroscopic parameters are provided. For the 3N – 9 neglected vibrational modes, harmonic and anharmonic fundamental frequencies are obtained using second-order perturbation theory by means of CCSD and MP2 force fields. Fermi resonances between the COC-bending and the torsional modes modify DME-d3 intensities and the band positions of the torsional overtones.


Gender detection is a very important objective for improving efficiency in tasks such as speech or speaker recognition, among others. Traditionally, gender detection has focused on the fundamental frequency (f0) and cepstral features derived from voiced segments of speech. The methodology presented here consists of obtaining uncorrelated glottal and vocal tract components which are parameterized as mel-frequency coefficients. K-fold and cross-validation experiments using QDA and GMM classifiers showed that better detection rates are reached when glottal source and vocal tract parameters are used, on a gender-balanced database of running speech from 340 speakers.
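The GMM-based classification step can be sketched as one Gaussian mixture model per gender, with the decision made by comparing average log-likelihoods. The features below are synthetic stand-ins for the glottal/vocal-tract mel-frequency coefficients described in the abstract:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy mel-frequency-style feature vectors for two speaker groups;
# real features would come from the glottal/vocal-tract decomposition.
female = rng.normal(loc=1.0, scale=0.5, size=(200, 4))
male = rng.normal(loc=-1.0, scale=0.5, size=(200, 4))

# One GMM per class, as in classical GMM-based gender detection
gmm_f = GaussianMixture(n_components=2, random_state=0).fit(female)
gmm_m = GaussianMixture(n_components=2, random_state=0).fit(male)

def detect_gender(frames):
    """Classify a set of feature frames by comparing the average
    log-likelihood under each gender model."""
    return "female" if gmm_f.score(frames) > gmm_m.score(frames) else "male"

pred = detect_gender(rng.normal(loc=1.0, scale=0.5, size=(50, 4)))
```

Pooling the log-likelihood over many frames of running speech is what makes the decision robust even when individual frames are ambiguous.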

Relevance: 100.00%

Publisher:

Abstract:

This project starts from SABOR, a program used in the laboratory sessions of the sixth-semester course Antennas and Electromagnetic Compatibility, which needed updating so that it runs on current Windows operating systems (Windows 7 and later). The main objectives are to design and implement new functionality, improve existing features, and correct errors from earlier versions. To support this, a MATLAB tool was created to analyze one of the most widely used aperture antenna types: horns. The tool is a graphical interface whose inputs are the elementary design variables of the aperture, such as the dimensions of the horn itself and the general parameters common to all horns. In turn, the software computes some of the fundamental output parameters of the antenna: directivity, beamwidth, phase center, and spillover. Numerous tests were run to debug and correct errors with respect to the previous version of SABOR, and emphasis was placed on making the program more intuitive and free of unnecessary complexity.
The antenna type under study is the horn, which consists of a waveguide whose cross-sectional area increases progressively toward an open end that behaves as an aperture. Horns are widely used in commercial satellites for global coverage from geostationary orbits, but their most common use is as the radiating element for antenna reflectors. The horn types examined in the tool are: H-plane sectoral, E-plane sectoral, pyramidal, conical, corrugated conical, and corrugated pyramidal. The project is written so that it can serve as theoretical and practical documentation for the whole SABOR software. Accordingly, in addition to reviewing the theory of the analyzed horns, the document presents the object-oriented programming approach used in the MATLAB environment, whose aim is a new way of thinking about decomposing problems and developing programming solutions. Finally, a self-help manual was created to support the software, and the results of several tests are included so that all details of its operation can be observed, together with the conclusions and future lines of work.
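One of the output parameters such a tool computes, directivity, can be sketched with the textbook aperture approximation D = e·4πA/λ², where A is the aperture area and e the aperture efficiency (about 0.51 for an optimum pyramidal horn). This is an illustrative approximation, not SABOR's actual implementation; the function name and example dimensions are hypothetical.

```python
import math

def pyramidal_horn_directivity_dB(a1_m, b1_m, freq_hz, eff=0.51):
    """Approximate pyramidal-horn directivity from aperture dimensions
    a1 x b1 (meters) via D = eff * 4*pi*A / lambda^2.
    eff ~= 0.51 is the textbook aperture efficiency of an optimum
    pyramidal horn; real designs vary."""
    c = 299_792_458.0          # speed of light, m/s
    lam = c / freq_hz          # wavelength, m
    d_linear = eff * 4.0 * math.pi * a1_m * b1_m / lam**2
    return 10.0 * math.log10(d_linear)

# Example: a 10 cm x 7.5 cm aperture at 10 GHz (lambda ~ 3 cm).
print(round(pyramidal_horn_directivity_dB(0.10, 0.075, 10e9), 1))
```

Beamwidth, phase center, and spillover require the full aperture-field integration that the actual tool performs.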

Relevance: 100.00%

Publisher:

Abstract:

We report on an outburst of the high-mass X-ray binary 4U 0115+634, which has a pulse period of 3.6 s, in 2008 March/April, as observed with RXTE and INTEGRAL. During the outburst the neutron star's luminosity varied by a factor of 10 in the 3–50 keV band. In agreement with earlier work, we find evidence for five cyclotron resonance scattering features at ~10.7, 21.8, 35.5, 46.7, and 59.7 keV. Previous work had found an anticorrelation between the fundamental cyclotron line energy and the X-ray flux. We show that this apparent anticorrelation is probably due to an unphysical interplay between the cyclotron-line parameters and the continuum models used previously, e.g., the negative and positive exponent power law (NPEX). For this model we show that the cyclotron-line component erroneously ends up describing part of the exponential cutoff and the continuum variability, rather than the cyclotron lines themselves. When the X-ray continuum is instead modeled with a simple exponentially cutoff power law modified by a Gaussian emission feature around 10 keV, the correlation between line energy and flux vanishes, and the line parameters remain virtually constant over the outburst. We therefore conclude that the previously reported anticorrelation is an artifact of the assumptions adopted in modeling the continuum.
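The preferred model described above can be sketched as a cutoff power law plus a ~10 keV Gaussian emission feature, multiplied by Gaussian absorption profiles at the quoted line energies. All parameter values below (photon index, folding energy, line depths and widths) are illustrative placeholders, not the fitted values from the paper, and the absorption profile is a simplified Gaussian optical depth rather than XSPEC's exact `gabs` parameterization.

```python
import numpy as np

def cutoffpl(E, norm=1.0, gamma=0.8, E_fold=15.0):
    """Exponentially cutoff power law: norm * E^-gamma * exp(-E/E_fold)."""
    return norm * E**(-gamma) * np.exp(-E / E_fold)

def gaussian(E, norm, E0, sigma):
    """Gaussian profile (used both for the 10 keV emission feature
    and as the optical-depth profile of the cyclotron lines)."""
    return norm * np.exp(-0.5 * ((E - E0) / sigma) ** 2)

def cyclotron_absorption(E, tau, E_cyc, sigma):
    """Multiplicative absorption factor exp(-tau(E))."""
    return np.exp(-gaussian(E, tau, E_cyc, sigma))

def model(E):
    # Continuum: cutoff power law + Gaussian emission near 10 keV.
    cont = cutoffpl(E) + gaussian(E, norm=0.2, E0=10.0, sigma=2.0)
    # Five cyclotron lines at the energies quoted in the text.
    lines = np.ones_like(E)
    for E_cyc in (10.7, 21.8, 35.5, 46.7, 59.7):
        lines *= cyclotron_absorption(E, tau=0.5, E_cyc=E_cyc, sigma=2.0)
    return cont * lines

E = np.linspace(3.0, 60.0, 500)   # keV, roughly the 3-50 keV band and above
flux = model(E)
```

With this factorization the line depths can vary without dragging the cutoff along, which is the decoupling the authors argue NPEX-based fits lack.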