Abstract:
We report the results of the spectral and timing analysis of a BeppoSAX observation of the microquasar system LS 5039/RX J1826.2-1450. The source was found in a low-flux state with Fx(1-10 keV) = 4.7 x 10^{-12} erg cm^{-2} s^{-1}, almost one order of magnitude lower than in a previous RXTE observation 2.5 years earlier. The 0.1-10 keV spectrum is described by an absorbed power-law continuum with photon index Gamma = 1.8+-0.2 and hydrogen column density NH = 1.0^{+0.4}_{-0.3} x 10^{22} cm^{-2}. According to the orbital parameters of the system, the BeppoSAX observation covers the time at which an X-ray eclipse would occur, should one occur at all. However, the 1.6-10 keV light curve shows no evidence of such an event, which allows us to place an upper limit on the inclination of the system. The low X-ray flux detected during this observation is interpreted as a decrease in the mass accretion rate onto the compact object caused by a decrease in the mass-loss rate of the primary.
Abstract:
Wide-range spectral coverage of blazar-type active galactic nuclei is of paramount importance for understanding the particle acceleration mechanisms assumed to take place in their jets. The Major Atmospheric Gamma Imaging Cerenkov (MAGIC) telescope participated in three multiwavelength (MWL) campaigns, observing the blazar Markarian (Mkn) 421 during the nights of April 28 and 29, 2006, and June 14, 2006. Aims. We analyzed the corresponding MAGIC very-high-energy observations during 9 nights from April 22 to 30, 2006 and on June 14, 2006. We inferred light curves with sub-day resolution and night-by-night energy spectra. Methods. MAGIC detects γ-rays by observing extensive air showers in the atmosphere. The obtained air-shower images were analyzed using the standard MAGIC analysis chain. Results. A strong γ-ray signal was detected from Mkn 421 on all observation nights. The flux (E > 250 GeV) varied on a night-by-night basis between (0.92±0.11) × 10^{-10} cm^{-2} s^{-1} (0.57 Crab units) and (3.21±0.15) × 10^{-10} cm^{-2} s^{-1} (2.0 Crab units) in April 2006. There is a clear indication of intra-night variability, with a doubling time of 36± min on the night of April 29, 2006, once more establishing rapid flux variability for this object. γ-ray spectra could be inferred for all individual nights, with power-law indices ranging from 1.66 to 2.47. We did not find statistically significant correlations between the spectral index and the flux state for individual nights. During the June 2006 campaign, a flux substantially lower than the one measured by the Whipple 10-m telescope four days later was found. Using a log-parabolic power-law fit, we deduced for some data sets the location of the spectral peak in the very-high-energy regime. Our results confirm the indications of rising peak energy with increasing flux, as expected in leptonic acceleration models.
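The log-parabolic peak determination mentioned above can be sketched numerically: in the νFν representation a log-parabola is an exact parabola in log-log space, so the vertex of a quadratic fit gives the peak energy. The pivot and spectral parameters below are invented for the illustration, not MAGIC's fitted values:

```python
import numpy as np

# Hypothetical log-parabolic spectrum dN/dE = f0 * (E/E0)^-(a + b*log10(E/E0)).
E0 = 0.3                       # TeV, assumed pivot energy
f0, a, b = 1e-10, 2.2, 0.35    # assumed normalization, index and curvature

E = np.logspace(-0.7, 0.7, 15)                           # ~0.2-5 TeV sampling
dNdE = f0 * (E / E0) ** (-(a + b * np.log10(E / E0)))

# In log-log space E^2 dN/dE is an exact parabola; fit it and take the vertex.
x = np.log10(E)
y = np.log10(E ** 2 * dNdE)
c2, c1, _ = np.polyfit(x, y, 2)
E_peak = 10 ** (-c1 / (2 * c2))   # TeV; analytically E0 * 10^((2 - a)/(2 b))
```

With real flux points the fit would be weighted by the measurement errors, but the vertex formula for the peak location is unchanged.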
Abstract:
The dual-stream model of auditory processing postulates separate processing streams for sound meaning and for sound location. The present review draws on evidence from human behavioral and activation studies as well as from lesion studies to argue for a position-linked representation of sound objects that is distinct both from the position-independent representation within the ventral/What stream and from the explicit sound localization processing within the dorsal/Where stream.
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from the prospection of new resources to the sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization. In the first part of the thesis, this issue is explored in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex, computationally heavy flow model is then run only for this subset, and inference is made on the basis of these exact responses. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine-learning approach: for the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known, and this information is used to construct an error model that corrects the ensemble of approximate responses and predicts the expected responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to more accurate and robust uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact flow responses. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functional responses. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid; the error model strongly reduces the computational cost while providing a good estimate of the uncertainty. Moreover, the individual correction of each proxy response by the error model leads to an excellent prediction of the corresponding exact response, opening the door to many applications.

The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, a proxy coupled with an error model provides this preliminary evaluation, and we demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline-intrusion problem in a coastal aquifer.
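The two-stage (delayed-acceptance) MCMC idea can be sketched in a few lines: a cheap proxy screens each proposal, the expensive exact model is evaluated only for proposals that survive the first stage, and the second-stage acceptance ratio corrects for the proxy. A minimal 1-D sketch with toy stand-in densities (the Gaussians below are illustrative, not the thesis's flow simulators or error model):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_exact(x):      # "expensive" posterior, evaluated only at stage two
    return -0.5 * x ** 2

def log_proxy(x):      # cheap, deliberately biased screening approximation
    return -0.5 * (x - 0.3) ** 2

x, n_exact, chain = 0.0, 0, []
for _ in range(5000):
    y = x + rng.normal(scale=1.0)
    # Stage 1: screen the proposal with the proxy only.
    if np.log(rng.uniform()) >= log_proxy(y) - log_proxy(x):
        chain.append(x)            # rejected cheaply, no exact-model run
        continue
    # Stage 2: promoted proposals pay for exact evaluations; the ratio
    # divides out the proxy used at stage one, preserving the target.
    n_exact += 1
    log_a2 = (log_exact(y) - log_exact(x)) - (log_proxy(y) - log_proxy(x))
    if np.log(rng.uniform()) < log_a2:
        x = y
    chain.append(x)
```

Every stage-1 rejection saves one exact-model run, which is where the reported speed-up comes from; the chain still samples the exact target.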
Abstract:
OBJECTIVE: Benzodiazepines (BZD) are recommended as first-line treatment for status epilepticus (SE), with lorazepam (LZP) and midazolam (MDZ) being the most widely used drugs and part of current treatment guidelines. Clonazepam (CLZ) is also utilized in many countries; however, there is no systematic comparison of these agents for treatment of SE to date. METHODS: We identified all patients treated with CLZ, LZP, or MDZ as a first-line agent from a prospectively collected observational cohort of adult patients treated for SE in four tertiary care centers. Relative efficacies of CLZ, LZP, and MDZ were compared by assessing the risk of developing refractory SE and the number of antiseizure drugs (ASDs) required to control SE. RESULTS: Among 177 patients, 72 patients (40.62%) received CLZ, 82 patients (46.33%) LZP, and 23 (12.99%) MDZ; groups were similar in demographics and SE characteristics. Loading dose was considered insufficient in the majority of cases for LZP, with a similar rate (84%, 95%, and 87.5%) in the centers involved, and CLZ was used as recommended in 52% of patients. After adjustment for relevant variables, LZP was associated with an increased risk of refractoriness as compared to CLZ (odds ratio [OR] 6.4, 95% confidence interval [CI] 2.66-15.5) and with an increased number of ASDs needed for SE control (OR 4.35, 95% CI 1.8-10.49). SIGNIFICANCE: CLZ seems to be an effective alternative to LZP and MDZ. LZP is frequently underdosed in this setting. These findings are highly relevant, since they may impact daily practice.
Abstract:
The impact of transnational private regulation on labour standards remains in dispute. While studies have provided some limited evidence of positive effects on 'outcome standards' such as wages or occupational health and safety, the literature gives little reason to believe that there has been any significant effect on 'process rights' relating primarily to collective workers' voice and social dialogue. This paper probes this assumption by bringing local contexts and worker agency more fully into the picture. It outlines an analytical framework that emphasizes workers' potential to act collectively for change in the regulatory space surrounding the employment relationship. It argues that while transnational private regulation of labour standards may marginally improve workers' access to regulatory spaces and their capacity to require the inclusion of enterprises in them, it does little to increase union leverage. The findings are based on empirical research conducted in Sub-Saharan Africa.
Influence of M. tuberculosis lineage variability within a clinical trial for pulmonary tuberculosis.
Abstract:
Recent studies suggest that M. tuberculosis lineage and host genetics interact to affect how active tuberculosis presents clinically. We determined the phylogenetic lineages of M. tuberculosis isolates from participants enrolled in Tuberculosis Trials Consortium Study 28, conducted in Brazil, Canada, South Africa, Spain, Uganda and the United States, and secondarily explored the relationship between lineage, clinical presentation and response to treatment. Large sequence polymorphisms and single-nucleotide polymorphisms were analyzed to determine the lineage and sublineage of isolates. Of 306 isolates genotyped, 246 (80.4%) belonged to the Euro-American lineage, with sublineage 724 predominating at African sites (99/192, 51.5%) and Euro-American strains other than 724 predominating at non-African sites (89/114, 78.1%). The uneven distribution of lineages across regions limited our ability to discern significant associations; nonetheless, in univariate analyses, Euro-American sublineage 724 was associated with more severe disease at baseline and, along with the East Asian lineage, with lower bacteriologic conversion after 8 weeks of treatment. Disease presentation and response to drug treatment varied by lineage, but these associations were no longer statistically significant after adjustment for other variables associated with week-8 culture status.
Abstract:
Organochlorine compounds (OC) are known to induce vitamin A (retinoid) deficiency in mammals, which may be associated with impaired immunocompetence, reproduction and growth. This makes retinoids a potentially useful biomarker of organochlorine impact on marine mammals. However, the use of retinoids as a biomarker requires knowledge of their intrapopulation patterns of variation under natural conditions, information which is not currently available. We investigated these patterns in a cetacean population living in an unpolluted environment. One hundred harbour porpoises Phocoena phocoena from West Greenland were sampled during the 1995 hunting season. Sex, age, morphometrics, nutritive condition, and retinol (following saponification) and OC levels in blubber were determined for each individual. The OC levels found were extremely low and therefore considered unlikely to affect the population adversely: mean blubber concentrations, expressed on an extractable basis, were 2.04 (SD = 1.1) ppm for PCBs and 2.76 (SD = 1.66) ppm for tDDT. The mean blubber retinol concentration for the overall population was 59.66 (SD = 45.26) µg g^{-1}. Taking into account the high contribution of blubber to body mass, blubber constitutes a significant body site for retinoid deposition in harbour porpoises. Retinol concentrations did not differ significantly between geographical regions or sexes, but they did correlate significantly (p < 0.001) with age. Body condition, measured by determining the lipid content of the blubber, did not have a significant effect on retinol levels, but the individuals examined were considered to be in overall good nutritive condition. It is concluded that the measurement of retinol concentrations in blubber samples is feasible and has potential for use as a biomarker of organochlorine exposure in cetaceans. However, biological information, particularly age, is critical for the correct assessment of physiological impact.
Abstract:
OBJECTIVE: To identify and quantify sources of variability in scores on the speech, spatial, and qualities of hearing scale (SSQ) and its short forms among normal-hearing and hearing-impaired subjects using a French-language version of the SSQ. DESIGN: Multi-regression analyses of SSQ scores were performed using age, gender, years of education, hearing loss, and hearing-loss asymmetry as predictors. Similar analyses were performed for each subscale (Speech, Spatial, and Qualities), for several SSQ short forms, and for differences in subscale scores. STUDY SAMPLE: One hundred normal-hearing subjects (NHS) and 230 hearing-impaired subjects (HIS). RESULTS: Hearing loss in the better ear and hearing-loss asymmetry were the two main predictors of scores on the overall SSQ, the three main subscales, and the SSQ short forms. The greatest difference between the NHS and HIS was observed for the Speech subscale, and the NHS showed scores well below the maximum of 10. An age effect was observed mostly on the Speech subscale items, and the number of years of education had a significant influence on several Spatial and Qualities subscale items. CONCLUSION: Strong similarities between SSQ scores obtained across different populations and languages, and between SSQ and short forms, underline their potential international use.
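The multi-regression analysis described above can be illustrated with ordinary least squares on synthetic data. The predictor set loosely mirrors the study's (better-ear hearing loss, asymmetry, age), but the effect sizes and noise level below are invented for the sketch, not the study's fitted coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 330   # roughly the 100 NHS + 230 HIS of the study (synthetic data here)

# Hypothetical predictors, loosely mirroring the paper's regressors.
age  = rng.uniform(20, 80, n)
loss = rng.uniform(0, 70, n)      # better-ear hearing loss (dB HL)
asym = rng.uniform(0, 40, n)      # interaural asymmetry (dB)

# Synthetic overall SSQ score (0-10 scale): assumed true effects plus noise.
ssq = 9.0 - 0.06 * loss - 0.03 * asym - 0.01 * age + rng.normal(0, 0.5, n)

# Ordinary least squares fit of score on the predictors.
X = np.column_stack([np.ones(n), loss, asym, age])
beta, *_ = np.linalg.lstsq(X, ssq, rcond=None)
```

The fitted coefficients (`beta[1:]`) recover the assumed effects; in the study each subscale and short form gets its own such regression.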
Abstract:
Modelling the shoulder's musculature is challenging given its mechanical and geometric complexity. The ideal fibre model used to represent a muscle's line of action cannot always faithfully represent the mechanical effect of each muscle, leading to considerable differences between model-estimated and in vivo measured muscle activity. While the musculo-tendon force coordination problem has been extensively analysed in terms of the cost function, only a few works have investigated the existence and sensitivity of solutions with respect to fibre topology. The goal of this paper is to present an analysis of the solution set using the concepts of torque-feasible space (TFS) and wrench-feasible space (WFS) from cable-driven robotics. A shoulder model is presented and a simple musculo-tendon force coordination problem is defined. The ideal fibre model for representing muscles is reviewed, and the TFS and WFS are defined, leading to necessary and sufficient conditions for the existence of a solution. The shoulder model's TFS is analysed to explain the lack of anterior deltoid (DLTa) activity. Based on this analysis, a modification of the model's muscle-fibre geometry is proposed. Performance with and without the modification is assessed by solving the musculo-tendon force coordination problem for quasi-static abduction in the scapular plane. After the proposed modification, the DLTa reaches 20% of activation.
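In a single degree of freedom the torque-feasible space reduces to an interval, which makes the existence condition easy to illustrate. A toy sketch with invented moment arms and tension limits (the paper's shoulder model, with many muscles and degrees of freedom, is far richer):

```python
# 1-DoF toy version of the torque-feasible space (TFS): the set of joint
# torques reachable by nonnegative, bounded muscle tensions. Moment arms
# and maximal tensions are illustrative values, not the shoulder model's.
r = [0.03, -0.025]       # moment arms (m); sign encodes agonist/antagonist
f_max = [800.0, 600.0]   # maximal tensions (N)

# With tensions f_i in [0, f_max_i], the achievable torque sum(r_i * f_i)
# spans an interval; a target torque is feasible iff it lies inside it.
tfs_lo = sum(min(0.0, ri * fi) for ri, fi in zip(r, f_max))
tfs_hi = sum(max(0.0, ri * fi) for ri, fi in zip(r, f_max))

def torque_feasible(tau):
    return tfs_lo <= tau <= tfs_hi
```

In higher dimensions the TFS becomes a convex polytope (a zonotope shifted by the passive wrench), and membership is checked with linear programming rather than an interval test.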
Abstract:
This study investigated changes in heart rate variability (HRV) in elite Nordic skiers to characterize different types of "fatigue" in 27 men and 30 women surveyed from 2004 to 2008. R-R intervals were recorded at rest during 8 min supine (SU) followed by 7 min standing (ST). The HRV parameters analysed were the powers of the low-frequency (LF) and high-frequency (HF) bands and their sum (LF+HF) (ms^2), and heart rate (HR, bpm). Of the 1063 HRV tests performed, 172 corresponded to a "fatigue" state, and the first were considered for analysis. 4 types of "fatigue" (F) were identified: 1. F(HF-, LF-)SU,ST for 42 tests: decrease in LF_SU (-46%), HF_SU (-70%), LF_ST (-43%), HF_ST (-53%) and increase in HR_SU (+15%), HR_ST (+14%). 2. F(LF+SU, LF-ST) for 8 tests: increase in LF_SU (+190%), decrease in LF_ST (-84%) and increase in HR_ST (+21%). 3. F(HF-SU, HF+ST) for 6 tests: decrease in HF_SU (-72%) and increase in HF_ST (+501%). 4. F(HF+SU) for only 1 test, with an increase in HF_SU (+2161%) and decrease in HR_SU (-15%). Supine and standing HRV patterns were independently modified by "fatigue". The 4 "fatigue"-shifted HRV patterns were statistically sorted according to differently paired changes in the two postures. This characterization might be useful for further understanding autonomic rearrangements in different "fatigue" conditions.
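The LF and HF powers reported above come from spectral analysis of the R-R tachogram. A minimal sketch with a synthetic beat series, uniform resampling, and a plain FFT periodogram (the study's actual estimator and windowing may differ; the 0.25 Hz modulation mimics respiratory sinus arrhythmia):

```python
import numpy as np

# Synthetic R-R series: 0.9 s mean interval with a 0.25 Hz (HF-band)
# modulation over an 8 min supine-style recording.
t, rr, T = 0.0, [], 8 * 60
while t < T:
    ibi = 0.9 + 0.05 * np.sin(2 * np.pi * 0.25 * t)   # interbeat interval (s)
    rr.append(ibi)
    t += ibi

# Resample the unevenly sampled tachogram onto a uniform 4 Hz grid.
beat_times = np.cumsum(rr)
grid = np.arange(beat_times[0], beat_times[-1], 0.25)
rr_u = np.interp(grid, beat_times, rr)

# One-sided FFT periodogram; band powers in ms^2, as in the study.
fs = 4.0
sig = (rr_u - rr_u.mean()) * 1000.0                   # s -> ms
psd = np.abs(np.fft.rfft(sig)) ** 2 / (fs * len(sig))
f = np.fft.rfftfreq(len(sig), 1 / fs)
df = f[1] - f[0]
lf = psd[(f >= 0.04) & (f < 0.15)].sum() * df         # LF: 0.04-0.15 Hz
hf = psd[(f >= 0.15) & (f < 0.40)].sum() * df         # HF: 0.15-0.40 Hz
```

Here the HF power dominates by construction; comparing such band powers between the supine and standing segments is what separates the four "fatigue" patterns.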