945 results for temporal compressive sensing ratio design
Abstract:
In recent years, kernel methods have proven to be very powerful tools in many application domains in general and in remote sensing image classification in particular. The special characteristics of remote sensing images (high dimensionality, few labeled samples, and multiple noise sources) are dealt with efficiently by kernel machines. In this paper, we propose the use of structured output learning to improve kernel-based remote sensing image classification. Structured output learning is concerned with the design of machine learning algorithms that not only implement the input-output mapping but also take into account the relations between output labels, thus generalizing unstructured kernel methods. We analyze the framework and introduce it to the remote sensing community. Output similarity is encoded into SVM classifiers by modifying the model loss function and the kernel function, either independently or jointly. Experiments on a very high resolution (VHR) image classification problem show promising results and open a wide field of research with structured output kernel methods.
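Encoding output similarity directly into the kernel, as the abstract describes, can be sketched as a joint input-output kernel in which the usual input kernel is scaled by a label-similarity term. The function names, the RBF choice, and the similarity matrix below are illustrative assumptions, not the paper's actual implementation:

```python
import math

def rbf(u, v, gamma=0.5):
    """Gaussian RBF kernel between two feature vectors (illustrative choice)."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * d2)

def joint_kernel(x1, y1, x2, y2, output_sim):
    """Hypothetical joint kernel: input similarity scaled by a
    user-supplied output-label similarity matrix."""
    return rbf(x1, x2) * output_sim[y1][y2]

# Toy output-similarity matrix for 3 land-cover classes: classes 0 and 1
# (e.g. two vegetation types) are treated as more alike than class 2.
S = [[1.0, 0.6, 0.1],
     [0.6, 1.0, 0.1],
     [0.1, 0.1, 1.0]]

# For identical inputs, pairs carrying similar labels get a larger
# joint kernel value than pairs carrying dissimilar labels.
k_same = joint_kernel([0.2, 0.4], 0, [0.25, 0.38], 0, S)
k_diff = joint_kernel([0.2, 0.4], 0, [0.25, 0.38], 2, S)
```

A structured SVM trained with such a kernel would then penalize confusions between dissimilar classes more heavily than confusions between similar ones.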
Compressed Sensing Single-Breath-Hold CMR for Fast Quantification of LV Function, Volumes, and Mass.
Abstract:
OBJECTIVES: The purpose of this study was to compare a novel compressed sensing (CS)-based single-breath-hold multislice magnetic resonance cine technique with the standard multi-breath-hold technique for the assessment of left ventricular (LV) volumes and function. BACKGROUND: Cardiac magnetic resonance is generally accepted as the gold standard for LV volume and function assessment. LV function is 1 of the most important cardiac parameters for diagnosis and the monitoring of treatment effects. Recently, CS techniques have emerged as a means to accelerate data acquisition. METHODS: The prototype CS cine sequence acquires 3 long-axis and 4 short-axis cine loops in 1 single breath-hold (temporal/spatial resolution: 30 ms/1.5 × 1.5 mm²; acceleration factor 11.0) to measure left ventricular ejection fraction (LVEFCS) as well as LV volumes and LV mass using LV model-based 4D software. For comparison, a conventional stack of multi-breath-hold cine images was acquired (temporal/spatial resolution 40 ms/1.2 × 1.6 mm²). As a reference for the left ventricular stroke volume (LVSV), aortic flow was measured by phase-contrast acquisition. RESULTS: In 94% of the 33 participants (12 volunteers: mean age 33 ± 7 years; 21 patients: mean age 63 ± 13 years with different LV pathologies), the image quality of the CS acquisitions was excellent. LVEFCS and LVEFstandard were similar (48.5 ± 15.9% vs. 49.8 ± 15.8%; p = 0.11; r = 0.96; slope 0.97; p < 0.00001). Agreement of LVSVCS with aortic flow was superior to that of LVSVstandard (overestimation vs. aortic flow: 5.6 ± 6.5 ml vs. 16.2 ± 11.7 ml, respectively; p = 0.012) with less variability (r = 0.91; p < 0.00001 for the CS technique vs. r = 0.71; p < 0.01 for the standard technique). The intraobserver and interobserver agreement for all CS parameters was good (slopes 0.93 to 1.06; r = 0.90 to 0.99).
CONCLUSIONS: The results demonstrated the feasibility of applying the CS strategy to evaluate LV function and volumes with high accuracy in patients. The single-breath-hold CS strategy has the potential to replace the multi-breath-hold standard cardiac magnetic resonance technique.
Abstract:
Background: Most mortality atlases show static maps from count data aggregated over time. This procedure has several methodological problems and serious limitations for decision making in public health. The evaluation of health outcomes, including mortality, should be approached from a dynamic time perspective that is specific for each gender and age group. At present, research in Spain does not provide a dynamic picture of the population's mortality status from a spatio-temporal point of view. The aim of this paper is to describe the spatial distribution of mortality from all causes in small areas of Andalusia (Southern Spain) and its evolution over time from 1981 to 2006. Methods: A small-area ecological study was devised using the municipality as the unit of analysis. Two spatio-temporal hierarchical Bayesian models were estimated for each age group and gender: one to estimate the specific mortality rate, together with its time trends, and the other to estimate the specific rate ratio for each municipality compared with Spain as a whole. Results: More than 97% of the municipalities showed a diminishing or flat mortality trend in all gender and age groups. In 2006, over 95% of municipalities showed male and female mortality specific rates similar to or significantly lower than Spanish rates for all age groups below 65. Systematically, municipalities in Western Andalusia showed significant male and female mortality excess from 1981 to 2006 only in age groups over 65. Conclusions: The study shows a dynamic geographical distribution of mortality, with a different pattern for each year, gender, and age group. This information will contribute towards a reflection on the past, present, and future of mortality in Andalusia.
Abstract:
IMPORTANCE: Depression and obesity are 2 prevalent disorders that have been repeatedly shown to be associated. However, the mechanisms and temporal sequence underlying this association are poorly understood. OBJECTIVE: To determine whether the subtypes of major depressive disorder (MDD; melancholic, atypical, combined, or unspecified) are predictive of adiposity in terms of the incidence of obesity and changes in body mass index (calculated as weight in kilograms divided by height in meters squared), waist circumference, and fat mass. DESIGN, SETTING, AND PARTICIPANTS: This prospective population-based cohort study, CoLaus (Cohorte Lausannoise)/PsyCoLaus (Psychiatric arm of the CoLaus Study), with 5.5 years of follow-up included 3054 randomly selected residents (mean age, 49.7 years; 53.1% were women) of the city of Lausanne, Switzerland (according to the civil register), aged 35 to 66 years in 2003, who accepted the physical and psychiatric baseline and physical follow-up evaluations. EXPOSURES: Depression subtypes according to the DSM-IV. Diagnostic criteria at baseline and follow-up, as well as sociodemographic characteristics, lifestyle (alcohol and tobacco use and physical activity), and medication, were elicited using the semistructured Diagnostic Interview for Genetic Studies. MAIN OUTCOMES AND MEASURES: Changes in body mass index, waist circumference, and fat mass during the follow-up period, in percentage of the baseline value, and the incidence of obesity during the follow-up period among nonobese participants at baseline. Weight, height, waist circumference, and body fat (bioimpedance) were measured at baseline and follow-up by trained field interviewers. RESULTS: Only participants with the atypical subtype of MDD at baseline revealed a higher increase in adiposity during follow-up than participants without MDD. 
The associations between this MDD subtype and body mass index (β = 3.19; 95% CI, 1.50-4.88), incidence of obesity (odds ratio, 3.75; 95% CI, 1.24-11.35), waist circumference in both sexes (β = 2.44; 95% CI, 0.21-4.66), and fat mass in men (β = 16.36; 95% CI, 4.81-27.92) remained significant after adjustment for a wide range of possible confounders. CONCLUSIONS AND RELEVANCE: The atypical subtype of MDD is a strong predictor of obesity. This emphasizes the need to identify individuals with this subtype of MDD in both clinical and research settings. Therapeutic measures to diminish the consequences of increased appetite during depressive episodes with atypical features are advocated.
Abstract:
BACKGROUND: To improve the efficacy of first-line therapy for advanced non-small cell lung cancer (NSCLC), additional maintenance chemotherapy may be given after initial induction chemotherapy in patients who did not progress during the initial treatment, rather than waiting for disease progression to administer second-line treatment. Maintenance therapy may consist of an agent that either was or was not present in the induction regimen. The antifolate pemetrexed is efficacious in combination with cisplatin for first-line treatment of advanced NSCLC and has shown efficacy as a maintenance agent in studies in which it was not included in the induction regimen. We designed a phase III study to determine if pemetrexed maintenance therapy improves progression-free survival (PFS) and overall survival (OS) after cisplatin/pemetrexed induction therapy in patients with advanced nonsquamous NSCLC. Furthermore, since evidence suggests expression levels of thymidylate synthase, the primary target of pemetrexed, may be associated with responsiveness to pemetrexed, translational research will address whether thymidylate synthase expression correlates with efficacy outcomes of pemetrexed. METHODS/DESIGN: Approximately 900 patients will receive four cycles of induction chemotherapy consisting of pemetrexed (500 mg/m2) and cisplatin (75 mg/m2) on day 1 of a 21-day cycle. Patients with an Eastern Cooperative Oncology Group performance status of 0 or 1 who have not progressed during induction therapy will randomly receive (in a 2:1 ratio) one of two double-blind maintenance regimens: pemetrexed (500 mg/m2 on day 1 of a 21-day cycle) plus best supportive care (BSC) or placebo plus BSC. The primary objective is to compare PFS between treatment arms. Secondary objectives include a fully powered analysis of OS, objective tumor response rate, patient-reported outcomes, resource utilization, and toxicity. 
Tumor specimens for translational research will be obtained from consenting patients before induction treatment, with a second biopsy performed in eligible patients following the induction phase. DISCUSSION: Although using a drug as maintenance therapy that was not used in the induction regimen exposes patients to an agent with a different mechanism of action, evidence suggests that continued use of an agent present in the induction regimen as maintenance therapy enables the identification of patients most likely to benefit from maintenance treatment.
Abstract:
BACKGROUND: Few studies describe recent changes in the incidence, treatment, and outcomes of cardiogenic shock. OBJECTIVE: To examine temporal trends in the incidence, therapeutic management, and mortality rates of patients with the acute coronary syndrome (ACS) and cardiogenic shock, and to assess associations of therapeutic management with death and cardiogenic shock developing during hospitalization. DESIGN: Analysis of registry data collected among patients admitted to hospitals between 1997 and 2006. SETTING: 70 of the 106 acute cardiac care hospitals in Switzerland. PATIENTS: 23 696 adults with ACS enrolled in the AMIS (Acute Myocardial Infarction in Switzerland) Plus Registry. MEASUREMENTS: Cardiogenic shock incidence; treatment, including rates of percutaneous coronary intervention; and in-hospital mortality rates. RESULTS: Rates of overall cardiogenic shock (8.3% of patients with ACS) and cardiogenic shock developing during hospitalization (6.0% of patients with ACS and 71.5% of patients with cardiogenic shock) decreased during the past decade (P < 0.001 for temporal trend), whereas rates of cardiogenic shock on admission remained constant (2.3% of patients with ACS and 28.5% of patients with cardiogenic shock). Rates of percutaneous coronary intervention increased among patients with cardiogenic shock (7.6% to 65.9%; P = 0.010), whereas in-hospital mortality decreased (62.8% to 47.7%; P = 0.010). Percutaneous coronary intervention was independently associated with lower risk for both in-hospital mortality in all patients with ACS (odds ratio, 0.47 [95% CI, 0.30 to 0.73]; P = 0.001) and cardiogenic shock development during hospitalization in patients with ACS but without cardiogenic shock on admission (odds ratio, 0.59 [CI, 0.39 to 0.89]; P = 0.012). LIMITATIONS: There was no central review of cardiogenic shock diagnoses, and follow-up duration was confined to the hospital stay. 
Unmeasured or inaccurately measured characteristics may have confounded observed associations of treatment with outcomes. CONCLUSION: Over the past decade, rates of cardiogenic shock developing during hospitalization and in-hospital mortality decreased among patients with ACS. Increased percutaneous coronary intervention rates were associated with decreased mortality among patients with cardiogenic shock and with decreased development of cardiogenic shock during hospitalization.
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. Applying regression calibration requires unbiased reference measurements. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted a two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of error adjustment is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
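The attenuation that regression calibration corrects can be illustrated with a toy simulation. All distributions and the true slope of 0.5 are invented for illustration; in practice the calibration regression uses a reference instrument (such as 24-hour recalls), not the unobservable true intake used here for convenience:

```python
import random
import statistics as st

random.seed(7)

# Simulate true long-term intake T, an error-prone questionnaire Q = T + e,
# and an outcome Y linearly related to T (all parameters hypothetical).
n = 5000
T = [random.gauss(10, 2) for _ in range(n)]
Q = [t + random.gauss(0, 2) for t in T]        # measurement error added
Y = [0.5 * t + random.gauss(0, 1) for t in T]  # true slope = 0.5

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = st.fmean(x), st.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

naive = slope(Q, Y)            # attenuated toward 0 (~0.25 here)
lam = slope(Q, T)              # calibration slope; a reference measurement
                               # would stand in for T in a real study
corrected = naive / lam        # regression-calibration estimate (~0.5)
```

The zero-inflation problem the abstract addresses arises when Q is a food eaten episodically, so that the calibration step itself must be split into a consumption-probability part and a consumed-amount part.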
Abstract:
While adaptive adjustment of sex ratio as a function of colony kin structure and food availability commonly occurs in social Hymenoptera, long-term studies have revealed substantial unexplained between-year variation in sex ratio at the population level. In order to identify factors that contribute to increased between-year variation in population sex ratio, we conducted a comparative analysis across 47 Hymenoptera species differing in their breeding system. We found that between-year variation in population sex ratio steadily increased as one moved from solitary species, to primitively eusocial species, to single-queen eusocial species, to multiple-queen eusocial species. Specifically, between-year variation in population sex ratio was low (6.6% of total possible variation) in solitary species, which is consistent with the view that in solitary species, sex ratio can vary only in response to fluctuations in ecological factors such as food availability. In contrast, we found significantly higher (19.5%) between-year variation in population sex ratio in multiple-queen eusocial species, which supports the view that in these species, sex ratio can also fluctuate in response to temporal changes in social factors such as queen number and queen-worker control over sex ratio, as well as factors influencing caste determination. The simultaneous adjustment of sex ratio in response to temporal fluctuations in ecological and social factors seems to preclude the existence of a single sex ratio optimum. The absence of such an optimum may reflect an additional cost associated with the evolution of complex breeding systems in Hymenoptera societies.
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data implies the adoption of flexible, robust, and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This thesis deals with the development and application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored, and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what they believe has changed or not. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows very accurate mapping without any user intervention, which is particularly useful when readiness and reaction times of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches that transform the pair of bi-temporal images to reduce the differences unrelated to changes in land cover are studied.
The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
Abstract:
Three-dimensional segmented echo planar imaging (3D-EPI) is a promising approach for high-resolution functional magnetic resonance imaging, as it provides an increased signal-to-noise ratio (SNR) at a temporal resolution similar to traditional multislice 2D-EPI readouts. Recently, the 3D-EPI technique has become more frequently used, and it is important to better understand its implications for fMRI. In this study, the temporal SNR characteristics of 3D-EPI with varying numbers of segments are studied. It is shown that, in humans, the temporal variance increases with the number of segments used to form the EPI acquisition and that, for segmented acquisitions, the maximum available temporal SNR is reduced compared with single-shot acquisitions. This reduction with increased segmentation is not found in phantom data and is thus likely due to physiological processes. When operating in the thermal noise dominated regime, fMRI experiments with a motor task revealed that the 3D variant outperforms 2D-EPI in terms of temporal SNR and sensitivity to detect activated brain regions. Thus, the theoretical SNR advantage of a segmented 3D-EPI sequence for fMRI only exists in a low SNR situation. However, other advantages of 3D-EPI, such as the application of parallel imaging techniques in two dimensions and the low specific absorption rate requirements, may encourage the use of the 3D-EPI sequence for fMRI in situations with higher SNR.
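Temporal SNR, as used in this kind of study, is simply the voxel-wise mean signal divided by its standard deviation over time. A minimal sketch with synthetic time series (the signal level and noise amplitudes are invented, with the "segmented" series given extra variance to mimic the physiological effect described above):

```python
import random
import statistics as st

random.seed(3)

def temporal_snr(series):
    """Temporal SNR: mean signal divided by its standard deviation over time."""
    return st.fmean(series) / st.stdev(series)

# Toy voxel time series with the same mean signal but different temporal
# noise levels (illustrative; real values would come from fMRI runs).
single_shot = [100 + random.gauss(0, 1.0) for _ in range(200)]
segmented = [100 + random.gauss(0, 2.0) for _ in range(200)]  # extra variance

# The series with larger temporal variance yields the lower temporal SNR,
# mirroring the segmented-acquisition result reported above.
```
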
Abstract:
Report for the scientific sojourn carried out at the Institute for Computational Molecular Science of Temple University, United States, from 2010 to 2012. Two-component systems (TCS) are used by pathogenic bacteria to sense the environment within a host and activate mechanisms related to virulence and antimicrobial resistance. A prototypical example is the PhoQ/PhoP system, which is the major regulator of virulence in Salmonella. Hence, PhoQ is an attractive target for the design of new antibiotics against foodborne diseases. Inhibition of PhoQ-mediated bacterial virulence does not result in growth inhibition, presenting less selective pressure for the generation of antibiotic resistance. Moreover, PhoQ is a histidine kinase (HK), and HKs are absent in animals. Nevertheless, the design of satisfactory HK inhibitors has proven to be a challenge. To compete with intracellular ATP concentrations, the affinity of an HK inhibitor must be in the micromolar-nanomolar range, whereas the current lead compounds have at best millimolar affinities. Moreover, drug selectivity depends on the conformation of a highly variable loop, referred to as the “ATP-lid”, which is difficult to study by X-ray crystallography due to its flexibility. I have investigated the binding of different HK inhibitors to PhoQ. In particular, all-atom molecular dynamics simulations have been combined with enhanced sampling techniques in order to provide structural and dynamic information on the conformation of the ATP-lid. Transient interactions between these drugs and the ATP-lid have been identified, and the free energy of the different binding modes has been estimated. The results obtained pinpoint the importance of protein flexibility in HK-inhibitor binding and constitute a first step towards developing more potent and selective drugs.
The computational resources of the hosting institution as well as the experience of the members of the group in drug binding and free energy methods have been crucial to carry out this work.
Abstract:
In this paper, we examine the design of permit trading programs when the objective is to minimize the cost of achieving an ex ante pollution target, that is, one that is defined in expectation rather than an ex post deterministic value. We consider two potential sources of uncertainty, the presence of either of which can make our model appropriate: incomplete information on abatement costs and uncertain delivery coefficients. In such a setting, we find three distinct features that depart from the well-established results on permit trading: (1) the regulator’s information on firms’ abatement costs can matter; (2) the optimal permit cap is not necessarily equal to the ex ante pollution target; and (3) the optimal trading ratio is not necessarily equal to the delivery coefficient even when it is known with certainty. Intuitively, since the regulator is only required to meet a pollution target on average, she can set the trading ratio and total permit cap such that there will be more pollution when abatement costs are high and less pollution when abatement costs are low. Information on firms’ abatement costs is important in order for the regulator to induce the optimal alignment between pollution level and abatement costs.
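The intuition in the last two sentences can be made concrete with a two-state numerical example (all numbers hypothetical): the regulator only needs pollution to equal the target in expectation, so realized pollution may exceed the target when abatement costs are high and fall below it when they are low.

```python
# Two equally likely abatement-cost states (probabilities illustrative).
p_high, p_low = 0.5, 0.5

# Suppose the regulator's chosen cap and trading ratio induce these
# pollution outcomes (hypothetical values):
pollution_high_cost = 120.0  # firms abate less when abatement is costly
pollution_low_cost = 80.0    # firms abate more when abatement is cheap

target = 100.0
expected = p_high * pollution_high_cost + p_low * pollution_low_cost

# The ex ante target is met in expectation even though realized pollution
# differs from the target in every state -- the flexibility that lets the
# optimal cap and trading ratio depart from their deterministic values.
```

Aligning more pollution with high-cost states and less with low-cost states is exactly the cost-saving alignment the abstract describes, and it is why information on firms' abatement costs matters to the regulator.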
Abstract:
TEIXEIRA, José João Lopes. Departamento de Engenharia Agrícola, Centro de Ciências Agrárias, Universidade Federal do Ceará, August 2011. Hydrosedimentology and water availability of the catchment of the Poilão Dam (Barragem de Poilão), Cape Verde. Advisor: José Carlos de Araújo. Examiners: George Leite Mamede, Pedro Henrique Augusto Medeiros. The Cape Verde archipelago, located off the West African coast, is influenced by the Sahara Desert, which gives it a climate characterized by very low rainfall distributed irregularly in space and time. Rainfall is highly concentrated, generating large runoff flows to the sea. Increasing water availability requires, beyond the construction and maintenance of infrastructure for capturing and conserving rainwater, efficient management of these resources. The capture, storage, and mobilization of surface water through dam construction is currently one of the strategic axes of Cape Verdean state policy. Studies of the hydrological and sedimentological behavior of the reservoir and its contributing basin are basic premises for optimal sizing, management, and monitoring of such infrastructure. In this sense, the present study aimed to systematize hydrological and sedimentological information on the catchment of the Poilão Dam (BP) and to present a long-term operational proposal. The study area occupies 28 km² upstream in the Ribeira Seca catchment (BHRS) on Santiago Island. The basin elevation ranges from 99 m, at the dam site, to 1,394 m. The study used and systematized a rainfall series from 1973 to 2010, instantaneous discharge records from 1984 to 2000, and agroclimatic records of the study area (1981 to 2004). Gaps in both runoff and suspended sediment discharge were filled using the rating-curve method.
Sediment yield in the basin was estimated with the Universal Soil Loss Equation (USLE) and the sediment delivery ratio (SDR). The sediment retention index of the reservoir was estimated by Brune's method, and the sediment distribution by the empirical area-reduction method described by Borland and Miller and revised by Lara. To generate and simulate yield-versus-reliability curves, the VYELAS computational code, developed by Araújo and based on the approach of Campos, was used. The reduction in withdrawal discharge over the period 2006 to 2026 caused by reservoir silting was also evaluated. It was concluded that mean annual precipitation is 323 mm, with 73% concentrated in August and September; the contributing basin has a curve number (CN) of 76, with an initial abstraction (Ia) of 26 mm, a runoff coefficient of 19%, and an annual inflow of 1.7 hm³ (cv = 0.73); water availability at 85% reliability is estimated at 0.548 hm³/year, not 0.671 hm³/year as stated in the original design. With sediment discharge estimated at 22,185 m³/year, it is concluded that by 2026 the reservoir capacity will shrink at a rate of 1.8% per year due to silting, causing a 41% reduction in the initial water availability. By then, evaporation and spill losses will amount to about 81% of the reservoir inflow. Based on these results, an operating proposal for the BP was presented.
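The compounding effect of a 1.8%-per-year capacity loss, the silting rate reported in the abstract, can be sketched numerically. The initial capacity below is an assumed round figure for illustration, not a value from the study, and note that the abstract's 41% figure refers to the reduction in water availability at 85% reliability, not to raw storage:

```python
# Compounding storage loss from silting, using the abstract's 1.8 %/yr rate.
initial_capacity_hm3 = 1.2  # hypothetical initial reservoir volume (hm3)
annual_loss_rate = 0.018    # 1.8 % per year, from the abstract

def capacity_after(years, c0=initial_capacity_hm3, r=annual_loss_rate):
    """Remaining storage assuming a constant relative loss rate per year."""
    return c0 * (1.0 - r) ** years

c20 = capacity_after(20)  # horizon comparable to 2006-2026
loss_fraction = 1.0 - c20 / initial_capacity_hm3  # roughly 30 % of storage
```

Yield losses exceed raw storage losses because the dead volume filled by sediment also degrades the reservoir's ability to carry water across dry years, which is what the VYELAS yield-reliability simulation captures.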
Abstract:
Essential hypertension is a multifactorial disorder and is the main risk factor for renal and cardiovascular complications. The research on the genetics of hypertension has been frustrated by the small predictive value of the discovered genetic variants. The HYPERGENES Project investigated associations between genetic variants and essential hypertension pursuing a 2-stage study by recruiting cases and controls from extensively characterized cohorts recruited over many years in different European regions. The discovery phase consisted of 1865 cases and 1750 controls genotyped with the 1M Illumina array. Best hits were followed up in a validation panel of 1385 cases and 1246 controls that were genotyped with a custom array of 14 055 markers. We identified a new hypertension susceptibility locus (rs3918226) in the promoter region of the endothelial NO synthase gene (odds ratio: 1.54 [95% CI: 1.37-1.73]; combined P = 2.58 × 10⁻¹³). A meta-analysis, using other in silico/de novo genotyping data for a total of 21 714 subjects, resulted in an overall odds ratio of 1.34 (95% CI: 1.25-1.44; P = 1.032 × 10⁻¹⁴). The quantitative analysis on a population-based sample revealed an effect size of 1.91 (95% CI: 0.16-3.66) for systolic and 1.40 (95% CI: 0.25-2.55) for diastolic blood pressure. We identified in silico a potential binding site for ETS transcription factors directly next to rs3918226, suggesting a potential modulation of endothelial NO synthase expression. Biological evidence links endothelial NO synthase with hypertension, because it is a critical mediator of cardiovascular homeostasis and blood pressure control via vascular tone regulation. This finding supports the hypothesis that there may be a causal genetic variation at this locus.
Abstract:
The spatial, spectral, and temporal resolutions of remote sensing images, acquired over a reasonably sized image extent, result in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is very attractive for monitoring, management, and scientific activities. With Moore's Law alive and well, more and more parallelism is being introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. Since geometric calibration is one of the most time-consuming processes when working with remote sensing images, the aim of this work is to accelerate it by taking advantage of new computing architectures and technologies, focusing especially on exploiting shared-memory multi-threading hardware. A parallel version of the most time-consuming step of the remote sensing geometric correction has been implemented using OpenMP directives. This work compares the performance of the original serial binary with the parallelized implementation on several modern multi-threaded CPU architectures, and discusses how to find the optimum hardware for a cost-effective execution.