963 results for continuous-time models


Relevance: 40.00%

Abstract:

Current procedures for flood risk estimation assume flood distributions are stationary over time, meaning annual maximum flood (AMF) series are not affected by climatic variation, land use/land cover (LULC) change, or management practices. Thus, changes in LULC and climate are generally not accounted for in policy and design related to flood risk and control, and historical flood events are deemed representative of future flood risk. These assumptions need to be re-evaluated, however, as climate change and anthropogenic activities have been observed to have large impacts on flood risk in many areas. In particular, understanding the effects of LULC change is essential to the study of global environmental change and the consequent hydrologic responses. The research presented herein provides possible causation for observed nonstationarity in AMF series with respect to changes in LULC, as well as a means to assess the degree to which future LULC change will impact flood risk. Four watersheds in the Midwestern, Northeastern, and Central United States were studied to determine the flood risk associated with historical and projected future LULC change. Historical single-frame aerial images dating back to the mid-1950s were used along with Geographic Information Systems (GIS) and remote sensing software (SPRING and ERDAS) to create historical land use maps. The Forecasting Scenarios of Future Land Use Change (FORE-SCE) model was applied to generate future LULC maps annually from 2006 to 2100 for the conterminous U.S. based on the four IPCC-SRES future emission scenarios. These land use maps were input into previously calibrated Soil and Water Assessment Tool (SWAT) models for two case-study watersheds. To isolate the effects of LULC change, the only variable parameter was the Runoff Curve Number associated with the land use layer.
All simulations were run with daily climate data from 1978-1999, consistent with the 'base' model, which employed the 1992 NLCD to represent 'current' conditions. Output daily maximum flows were converted to instantaneous AMF series and subsequently modeled using a Log-Pearson Type 3 (LP3) distribution to evaluate flood risk. Analysis of the progression of LULC change over the historical period and the associated SWAT outputs revealed that AMF magnitudes tend to increase over time in response to increasing degrees of urbanization. This is consistent with positive trends in the AMF series identified in previous studies, although correlating LULC change with identified change points is difficult because of large time gaps in the generated historical LULC maps, mainly caused by the limited availability of sufficiently high-quality historical aerial imagery. Similarly, increases in the mean and median AMF magnitude were observed in response to future LULC change projections, with the tails of the distributions remaining reasonably constant. FORE-SCE scenario A2 was found to have the most dramatic impact on AMF series, consistent with its more extreme projections of population growth, energy demand, agricultural land use, and urban expansion, while AMF outputs based on scenario B2 showed little future change, as that scenario focuses on environmental conservation and regional solutions to environmental issues.
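
The LP3 step described above (a Pearson Type III distribution fitted to the log-transformed annual maxima, from which design quantiles such as the 100-year flood are read off) can be sketched as follows; the flow values and the method-of-moments fit are illustrative assumptions, not the study's actual data or procedure:

```python
import numpy as np
from scipy import stats

# Hypothetical annual maximum flood (AMF) series in m^3/s; the real
# series in the study come from SWAT daily-maximum outputs.
amf = np.array([120.0, 95.0, 210.0, 180.0, 75.0, 160.0, 240.0,
                130.0, 110.0, 300.0, 90.0, 170.0, 205.0, 145.0])

# Log-Pearson Type 3: fit a Pearson Type III distribution to the
# log-transformed flows (method of moments, as in USGS Bulletin 17B).
log_amf = np.log10(amf)
mean, std = log_amf.mean(), log_amf.std(ddof=1)
g = stats.skew(log_amf, bias=False)

# 100-year flood: the discharge exceeded with 1% annual probability.
q100 = 10.0 ** stats.pearson3.ppf(0.99, g, loc=mean, scale=std)
print(f"Estimated 100-year flood: {q100:.1f} m^3/s")
```

Refitting this quantile under each LULC scenario's AMF output is what lets the flood risk estimates be compared across scenarios.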

Relevance: 40.00%

Abstract:

The complexity of adapting software during runtime has spawned interest in how models can be used to validate, monitor and adapt runtime behaviour. The use of models during runtime extends the use of modeling techniques beyond the design and implementation phases. The goal of this workshop is to look at issues related to developing appropriate model-driven approaches to managing and monitoring the execution of systems, and also to allowing a system to reason about itself. We aim to continue the discussion of research ideas and proposals from researchers who work in relevant areas such as MDE, software architectures, reflection, and autonomic and self-adaptive systems, and to provide a 'state-of-the-art' research assessment expressed in terms of challenges and achievements.

Relevance: 40.00%

Abstract:

Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge about the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged into unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on many existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme employing a deep learning based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep learning based methods.
We boost the performance of supervised strategies, such as trained convolutional and recurrent networks, and of unsupervised ones, such as Deep Image Prior, by penalizing their losses with handcrafted regularization terms.
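
A handcrafted penalty of the kind mentioned above typically augments a data-fidelity term; a minimal sketch using an anisotropic total-variation penalty (the penalty choice, the identity forward model, and the weight `lam` are illustrative assumptions, not the thesis's exact formulation):

```python
import numpy as np

def tv_regularized_loss(x, y, lam=0.1):
    """Data-fidelity term plus an anisotropic total-variation penalty.

    x   : current image estimate (2-D array)
    y   : observed image (identity forward model, for simplicity)
    lam : regularization strength (an illustrative value)
    """
    fidelity = 0.5 * np.sum((x - y) ** 2)
    # Anisotropic TV: l1 norm of vertical and horizontal differences.
    tv = np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()
    return fidelity + lam * tv
```

In a Deep Image Prior setting, `x` would be the network output and this loss would be minimized over the network parameters rather than over the image directly.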

Relevance: 40.00%

Abstract:

Slot and van Emde Boas' Invariance Thesis states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C whose overhead is polynomial in time (respectively, linear in space). The rationale is that, under the Invariance Thesis, complexity classes such as LOGSPACE, P, and PSPACE become robust, i.e. machine independent. In this dissertation, we want to find out whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model for functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, that was conjectured to be the key ingredient for obtaining a space reasonable cost model. Through a fine-grained complexity analysis of this scheme, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then, we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine-grained analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are able to measure sub-linear space complexities as well. Moreover, we transfer this result to the call-by-value case. Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet nontrivial, modification of the original de Carvalho type system.
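
Krivine's machine, mentioned above, evaluates call-by-name lambda-terms using a term, an environment, and an argument stack; a minimal Python sketch with de Bruijn indices (this is the textbook machine the dissertation starts from, not its pointer-free, space-optimized variant):

```python
# A minimal Krivine abstract machine for the call-by-name lambda-calculus,
# using de Bruijn indices for bound variables.
from dataclasses import dataclass

@dataclass
class Var:          # de Bruijn index of a bound variable
    n: int

@dataclass
class Lam:          # lambda abstraction
    body: object

@dataclass
class App:          # application
    fun: object
    arg: object

def krivine(term, env=(), stack=()):
    """Reduce `term` to weak head normal form."""
    while True:
        if isinstance(term, App):
            # Call-by-name: push the argument, unevaluated, as a closure.
            stack = ((term.arg, env),) + stack
            term = term.fun
        elif isinstance(term, Lam):
            if not stack:
                return term, env          # weak head normal form reached
            closure, stack = stack[0], stack[1:]
            env = (closure,) + env        # bind the argument
            term = term.body
        else:
            term, env = env[term.n]       # Var: jump to its closure

# (\x. x) (\y. y) reduces to \y. y
identity = Lam(Var(0))
whnf, _ = krivine(App(identity, identity))
```

The space the dissertation measures is, roughly, the size of the machine state (term pointer, environment, stack) along the run, which is why a pointer-free implementation matters for sub-linear space.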

Relevance: 40.00%

Abstract:

The growing interest in constellations of small, less expensive satellites is bringing space junk and traffic management to the attention of the space community. At the same time, the continuous quest for more efficient propulsion systems puts the spotlight on electric (low thrust) propulsion as an appealing solution for collision avoidance. Starting with an overview of current techniques for conjunction assessment and avoidance, we then highlight the problems that can arise when low thrust propulsion is used. The simulations conducted reveal the need for an accurate propagation model. Thus, aiming at propagation models with a low computational burden, we study the models available in the literature and propose an analytical alternative to improve propagation accuracy. The model is then tested in the particular case of a tangential maneuver. Results show that the proposed solution significantly improves on state-of-the-art methods and is a good candidate for use in collision avoidance operations, for instance to propagate satellite uncertainty or to optimize avoidance maneuvers when a conjunction occurs within a few (3-4) orbits of the measurement time.

Relevance: 30.00%

Abstract:

Often in biomedical research, we deal with continuous (clustered) proportion responses ranging between zero and one that quantify the disease status of the cluster units. Interestingly, the study population might also consist of relatively disease-free as well as highly diseased subjects, contributing proportion values in the closed interval [0, 1]. Regression on a variety of parametric densities with support lying in (0, 1), such as beta regression, can assess important covariate effects; however, these densities are inappropriate in the presence of zeros and/or ones. To circumvent this, we introduce a class of general proportion densities and further augment the probabilities of zero and one to this general proportion density, controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to available freeware. Bayesian case-deletion influence diagnostics based on q-divergence measures are automatic from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and an application to a real dataset from a clinical periodontology study.
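
The augmentation described above places discrete probability masses at 0 and 1 alongside a continuous density on (0, 1); a minimal sketch of the resulting log-likelihood, using a plain beta density for the continuous part (the paper's general proportion density is broader, and all parameter values below are illustrative):

```python
import numpy as np
from scipy import stats

def zoab_loglik(y, p0, p1, a, b):
    """Log-likelihood of a zero-and-one augmented beta model.

    y    : proportion responses in [0, 1]
    p0   : probability mass at exactly 0
    p1   : probability mass at exactly 1
    a, b : beta shape parameters for the continuous part on (0, 1)
    """
    y = np.asarray(y, dtype=float)
    # Discrete parts: point masses at the boundaries.
    ll = np.log(p0) * np.sum(y == 0.0) + np.log(p1) * np.sum(y == 1.0)
    # Continuous part, rescaled by the probability of landing in (0, 1).
    mid = y[(y > 0.0) & (y < 1.0)]
    ll += np.sum(np.log1p(-(p0 + p1)) + stats.beta.logpdf(mid, a, b))
    return ll

# Hypothetical proportions including boundary observations.
ll = zoab_loglik([0.0, 0.3, 0.7, 1.0], 0.1, 0.1, 2.0, 2.0)
```

In the paper's Bayesian treatment, this likelihood would additionally carry cluster-level random effects and covariates through the parameters.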

Relevance: 30.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance: 30.00%

Abstract:

The South Atlantic Magnetic Anomaly (SAMA) is one of the most outstanding anomalies of the geomagnetic field. The secular variation of SAMA was obtained and compared to the evolution of other anomalies using spherical harmonic field models for the 1590-2005 period. An analysis of data from four South American observatories shows how this large-scale anomaly affected their measurements. Since SAMA is a low total field anomaly, the field was separated into its non-dipolar, quadrupolar, and octupolar parts. The time evolution of the non-dipole/total, quadrupolar/total, and octupolar/total field ratios yielded increasingly high values for the South Atlantic since 1750. The evolution of SAMA is compared to that of other large-scale surface geomagnetic features, such as the North and South Poles and the Siberia High, and this comparison shows the intensity equilibrium between these anomalies in the two hemispheres. The analysis of non-dipole fields over the historical period suggests that SAMA is governed by (i) the quadrupolar field in its drift and (ii) the quadrupolar and octupolar fields in its intensity and area of influence. Furthermore, our study reinforces the possibility that SAMA may be related to reverse fluxes in the outer core under the South Atlantic region.

Relevance: 30.00%

Abstract:

Attention deficit, impulsivity, and hyperactivity are the cardinal features of attention deficit hyperactivity disorder (ADHD), but executive function (EF) disorders, such as problems with inhibitory control, working memory, and reaction time, among other EFs, may underlie many of the disturbances associated with the disorder. OBJECTIVE: To examine reaction time in a computerized test in children with ADHD and normal controls. METHOD: Twenty-three boys (aged 9 to 12) with an ADHD diagnosis according to the clinical criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, 2000 (DSM-IV), without comorbidities, with an Intelligence Quotient (IQ) > 89, and never treated with stimulants, together with fifteen age-matched normal controls, were investigated during performance of a voluntary attention psychophysical test. RESULTS: Children with ADHD showed higher reaction times than normal controls. CONCLUSION: A slower reaction time occurred in our patients with ADHD. These findings may be related to problems with the attentional system, which could not maintain an adequate capacity of perceptual input and/or motor output processes to respond consistently during continuous or repetitive activity.

Relevance: 30.00%

Abstract:

Several models of time estimation have been developed in psychology; a few have been applied to music. In the present study, we assess the influence of the distances travelled through pitch space on retrospective time estimation. Participants listened to an isochronous chord sequence of 20-s duration. They were unexpectedly asked to reproduce the time interval of the sequence. The harmonic structure of the stimulus was manipulated so that the sequence either remained in the same key (CC) or travelled through a closely related key (CFC) or a distant key (CGbC). Estimated times were shortened when the sequence modulated to a very distant key. This finding is discussed in light of Lerdahl's Tonal Pitch Space Theory (2001), Firmino and Bueno's Expected Development Fraction Model (in press), and models of time estimation.

Relevance: 30.00%

Abstract:

We consider a nontrivial one-species population dynamics model with finite and infinite carrying capacities. Time-dependent intrinsic and extrinsic growth rates are considered in these models. Through the model's per capita growth rate we obtain a heuristic general procedure to generate scaling functions that collapse data onto a simple linear behavior, even if an extrinsic growth rate is included. With this data collapse, all the models studied become independent of the parameters and the initial condition. Analytical solutions are found when time-dependent coefficients are considered. These solutions allow us to identify nontrivial transitions between species extinction and survival and to calculate the transitions' critical exponents. Considering an extrinsic growth rate as a cancer treatment, we show that the relevant quantity depends not only on the intensity of the treatment but also on when the cancerous cell growth is maximum.
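
As a concrete special case of the model class above, logistic growth with a time-dependent intrinsic rate r(t) admits a closed-form solution through the integrated rate; a sketch comparing it with a numerical integration (the specific r(t), carrying capacity, and initial condition are illustrative assumptions, not the paper's):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: carrying capacity K, initial population p0,
# and a hypothetical time-dependent intrinsic growth rate r(t).
K, p0 = 100.0, 5.0
r = lambda t: 0.5 + 0.2 * np.sin(t)

def rhs(t, p):
    # Per capita growth rate: (1/p) dp/dt = r(t) * (1 - p/K)
    return r(t) * p[0] * (1.0 - p[0] / K)

sol = solve_ivp(rhs, (0.0, 20.0), [p0], dense_output=True, rtol=1e-8)

# Closed form via the integrated rate R(t) = integral of r(s) from 0 to t:
# p(t) = K / (1 + (K/p0 - 1) * exp(-R(t)))
R = lambda t: 0.5 * t + 0.2 * (1.0 - np.cos(t))
analytic = lambda t: K / (1.0 + (K / p0 - 1.0) * np.exp(-R(t)))
```

Plotting p against R(t) rather than t collapses curves obtained for different r(t) onto a single logistic curve, which is the spirit of the data collapse described above.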

Relevance: 30.00%

Abstract:

Background: MicroRNAs (miRNAs) are short non-coding RNAs that inhibit translation of target genes by binding to their mRNAs. The expression of numerous brain-specific miRNAs with a high degree of temporal and spatial specificity suggests that miRNAs play an important role in gene regulation in health and disease. Here we investigate the time course of the gene expression profile of miR-1, -16, and -206 in mouse dorsal root ganglion (DRG) and spinal cord dorsal horn under inflammatory and neuropathic pain conditions, as well as following acute noxious stimulation. Results: Quantitative real-time polymerase chain reaction analyses showed that the mature forms of miR-1, -16, and -206 are expressed in the DRG and the dorsal horn of the spinal cord. Moreover, CFA-induced inflammation significantly reduced miR-1 and miR-16 expression in the DRG, whereas miR-206 was downregulated in a time-dependent manner. Conversely, in the spinal dorsal horn all three monitored miRNAs were upregulated. After sciatic nerve partial ligation, miR-1 and -206 were downregulated in the DRG, with no change in the spinal dorsal horn. On the other hand, axotomy increased the relative expression of miR-1, -16, and -206 in a time-dependent fashion, while in the dorsal horn there was a significant downregulation of miR-1. Acute noxious stimulation with capsaicin also increased the expression of miR-1 and -16 in DRG cells, whereas in the spinal dorsal horn only a high dose of capsaicin was able to downregulate miR-206 expression. Conclusions: Our results indicate that miRNAs may participate in the regulatory mechanisms of genes associated with the pathophysiology of chronic pain, as well as in nociceptive processing following acute noxious stimulation. We found substantial evidence that miRNAs are differentially regulated in the DRG and the dorsal horn of the spinal cord under different pain states.
Therefore, miRNA expression in the nociceptive system shows not only temporal and spatial specificity but also stimulus dependence.
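
Relative expression in quantitative real-time PCR studies like the one above is commonly summarized with the 2^(-delta-delta-Ct), or Livak, method; the abstract does not state which normalization was used here, and the reference gene and Ct values below are purely hypothetical:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a miRNA by the 2^(-delta-delta-Ct) method.

    ct_target, ct_ref           : Ct of the miRNA / reference gene, treated
    ct_target_ctrl, ct_ref_ctrl : the same Ct values in the control sample
    """
    d_ct_treated = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: the miRNA needs two more amplification cycles
# in the treated sample with a stable reference gene, i.e. downregulation.
fold = relative_expression(28.0, 20.0, 26.0, 20.0)  # -> 0.25 (4-fold down)
```

A fold change below 1 corresponds to the downregulation reported for, e.g., miR-1 after CFA-induced inflammation, and above 1 to the upregulation seen in the dorsal horn.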

Relevance: 30.00%

Abstract:

Survival, or longevity, is an economically important trait in beef cattle. The main obstacles to its inclusion among selection criteria are the delayed recording of phenotypic data and the high computational demand of including survival in proportional hazards models. Thus, identification of a longevity-correlated trait that could be recorded early in life would be very useful for selection purposes. We estimated the genetic relationship of survival with productive and reproductive traits in Nellore cattle, including weaning weight (WW), post-weaning growth (PWG), muscularity (MUSC), scrotal circumference at 18 months (SC18), and heifer pregnancy (HP). Survival was measured in discrete time intervals and modeled through a sequential threshold model. Five independent bivariate Bayesian analyses were performed, pairing cow survival with each of the five productive and reproductive traits. Posterior mean estimates for heritability (standard deviation in parentheses) were 0.55 (0.01) for WW, 0.25 (0.01) for PWG, 0.23 (0.01) for MUSC, and 0.48 (0.01) for SC18. The posterior mean estimates (95% credible interval in parentheses) for the genetic correlation with survival were 0.16 (0.13-0.19), 0.30 (0.25-0.34), 0.31 (0.25-0.36), 0.07 (0.02-0.12), and 0.82 (0.78-0.86) for WW, PWG, MUSC, SC18, and HP, respectively. Based on the high posterior mean estimates of the genetic correlation and heritability (0.54) for HP, the expected progeny difference for HP can be used to select bulls for longevity, as well as for post-weaning gain and muscle score.
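
Posterior means and 95% intervals like those reported above are read directly off the MCMC output; a minimal sketch (the draws below are simulated from a normal distribution purely for illustration, not the study's actual chains):

```python
import numpy as np

# Hypothetical MCMC draws of the survival-HP genetic correlation; the
# study's actual chains come from bivariate threshold-model analyses.
rng = np.random.default_rng(0)
draws = np.clip(rng.normal(loc=0.82, scale=0.02, size=10_000), -1.0, 1.0)

post_mean = draws.mean()                    # posterior mean
lo, hi = np.percentile(draws, [2.5, 97.5])  # 95% equal-tailed credible interval
print(f"{post_mean:.2f} ({lo:.2f}-{hi:.2f})")
```

The equal-tailed interval shown here is one common choice; highest-posterior-density intervals are an alternative summary of the same chains.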

Relevance: 30.00%

Abstract:

The mass function of cluster-size halos and their redshift distribution are computed for 12 distinct accelerating cosmological scenarios and confronted with the predictions of the conventional flat Lambda CDM model. The comparison with Lambda CDM is performed in a two-step process. First, we determine the free parameters of all models through a joint analysis involving the latest cosmological data, using type Ia supernovae, the cosmic microwave background shift parameter, and baryon acoustic oscillations. Apart from a braneworld-inspired cosmology, it is found that the derived Hubble relation of the remaining models reproduces the Lambda CDM results with approximately the same degree of statistical confidence. Second, in order to distinguish the different dark energy models from the expectations of Lambda CDM, we analyze the predicted cluster-size halo redshift distribution on the basis of two future cluster surveys: (i) an X-ray survey based on the eROSITA satellite, and (ii) a Sunyaev-Zel'dovich survey based on the South Pole Telescope. We find that the predictions of 8 out of the 12 dark energy models can be clearly distinguished from the Lambda CDM cosmology, while the predictions of the remaining 4 are statistically equivalent to those of Lambda CDM, as far as the expected cluster mass function and redshift distribution are concerned. The present analysis suggests that this technique is very competitive with independent tests probing the late-time evolution of the Universe and the associated dark energy effects.