927 results for Arrow Of Time
Abstract:
Existing work on energy management for real-time systems often ignores the substantial time and energy cost of making DVFS and sleep-state decisions, and/or assumes very simple models. In this paper we explore the parameter space for such decisions and the constraints they face.
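As an illustration of the kind of overhead-aware reasoning at stake, below is a minimal sketch of a break-even test for entering a sleep state; all function names and parameter values are hypothetical and not taken from the paper:

    # Hypothetical break-even test: sleeping pays off only if the energy
    # saved during the idle interval exceeds the transition cost.
    def worth_sleeping(idle_s, p_idle_w, p_sleep_w, e_transition_j, t_transition_s):
        """Return True if entering the sleep state saves energy over idling."""
        if idle_s <= t_transition_s:
            return False  # interval too short to transition in and out
        e_stay = p_idle_w * idle_s
        e_sleep = e_transition_j + p_sleep_w * (idle_s - t_transition_s)
        return e_sleep < e_stay

    # Example: 50 ms idle, 1.2 W idle power, 0.1 W sleep power,
    # 30 mJ round-trip transition energy, 5 ms transition time.
    print(worth_sleeping(0.050, 1.2, 0.1, 0.030, 0.005))  # True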
Abstract:
In this paper we analyze the behavior of tornado time series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long-range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.
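A minimal sketch of the first step (impulse-train construction and a power-law fit to the amplitude spectrum); the synthetic data and variable names below are ours, not the authors':

    # Sketch: events as Dirac impulses on a daily grid; the amplitude
    # spectrum is approximated as |F(w)| ~ c * w**(-b) via a log-log fit.
    import numpy as np

    rng = np.random.default_rng(0)
    n_days = 64 * 365                       # 64 years on a daily grid
    x = np.zeros(n_days)
    days = rng.choice(n_days, size=5000, replace=False)
    x[days] = rng.pareto(2.0, size=5000)    # impulse amplitude ~ event size

    spectrum = np.abs(np.fft.rfft(x))[1:]   # drop the DC component
    freqs = np.fft.rfftfreq(n_days)[1:]

    slope, intercept = np.polyfit(np.log(freqs), np.log(spectrum), 1)
    print(f"fitted power-law exponent: {slope:.2f}")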
Abstract:
Faced with the stagnation of uniprocessor technology registered over the past decade, the major microprocessor manufacturers found in multi-core technology the answer to the market's growing processing needs. For years, software developers watched their applications ride the performance gains delivered by each new generation of sequential processors, but as processing capacity now scales with the number of processors, sequential computations must be decomposed into several concurrent parts that can execute in parallel, so that they can use the additional processing units and complete sooner. Parallel programming entails a paradigm completely distinct from sequential programming. Unlike the sequential computers typified by the Von Neumann model, the heterogeneity of parallel architectures calls for parallel programming models that abstract programmers from the details of the architecture and simplify the development of concurrent applications. The most popular parallel programming models encourage programmers to identify concurrent instructions in their program logic and to express them as tasks that can be assigned to distinct processors to execute simultaneously. These tasks are typically spawned at run time and assigned to processors by the underlying execution engine. Since processing requirements tend to vary and are not known a priori, the mapping of tasks to processors must be determined dynamically, in response to unpredictable changes in execution requirements. As the volume of computation grows, it becomes increasingly infeasible to guarantee its timing constraints on uniprocessor platforms. While real-time systems begin to adapt to the parallel computing paradigm, there is a growing push to integrate real-time executions with interactive applications on the same hardware, in a world where technology becomes ever smaller, lighter, more ubiquitous, and more portable. This integration requires scheduling solutions that simultaneously guarantee the timing requirements of real-time tasks and maintain an acceptable level of QoS for the remaining executions. To this end, it is imperative that real-time applications be parallelized, so as to minimize their response times and maximize the utilization of processing resources. This introduces a new dimension to the scheduling problem, which must respond correctly to new, unpredictable execution requirements and quickly devise the task mapping that best serves the system's performance criteria. Server-based scheduling makes it possible to reserve a fraction of the processing capacity for the execution of real-time tasks and to ensure that latency effects on their execution do not affect the reservations stipulated for other executions. For tasks scheduled by their worst-case execution time, or tasks with variable execution times, it is likely that the allotted bandwidth will not be fully consumed. To improve system utilization, capacity-sharing algorithms donate the unused capacity to the execution of other tasks while preserving the isolation guarantees between servers.
With proven efficiency in terms of space, time, and communication, the work-stealing mechanism has been gaining popularity as a methodology for scheduling tasks with dynamic and irregular parallelism. The p-CSWS algorithm combines server-based scheduling with capacity-sharing and work-stealing to meet the scheduling needs of open real-time systems. While server-based scheduling allows processing resources to be shared without latency interference, a new work-stealing policy built on the capacity-sharing mechanism exploits parallelism in a way that improves application response times and system utilization. This thesis proposes an implementation of the p-CSWS algorithm for Linux. In keeping with the modular structure of the Linux scheduler, a new scheduling class is defined to evaluate the applicability of the p-CSWS heuristic under real conditions. With the obstacles intrinsic to Linux kernel programming overcome, extensive experimental tests prove that p-CSWS is more than an attractive theoretical concept, and that the heuristic exploitation of parallelism proposed by the algorithm benefits the response times of real-time applications as well as the performance and efficiency of the multiprocessor platform.
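For readers unfamiliar with the mechanism, here is a minimal, lock-based sketch of work-stealing (the owner pops from one end of its deque, thieves steal from the other); p-CSWS adds servers and capacity-sharing on top of this idea, and none of the names below come from the thesis:

    # Minimal work-stealing sketch: each worker serves its own deque
    # LIFO and steals FIFO from a random victim when it runs dry.
    import collections, random, threading

    class Worker:
        def __init__(self, wid, pool):
            self.wid = wid
            self.pool = pool                  # shared list of all workers
            self.deque = collections.deque()
            self.lock = threading.Lock()

        def push(self, task):
            with self.lock:
                self.deque.append(task)       # owner pushes to the bottom

        def next_task(self):
            with self.lock:
                if self.deque:
                    return self.deque.pop()   # owner pops its own bottom
            victims = [w for w in self.pool if w is not self]
            for victim in random.sample(victims, len(victims)):
                with victim.lock:
                    if victim.deque:
                        return victim.deque.popleft()  # steal from the top
            return None

    pool = []
    pool.extend(Worker(i, pool) for i in range(4))
    pool[0].push("task-A")
    print(pool[1].next_task())  # worker 1 steals "task-A"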
Abstract:
In this article, we develop a specification technique for building the multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component such that the unconditional variance component is allowed to evolve smoothly over time. This nonstationary component is defined as a linear combination of logistic transition functions with time as the transition variable. The appropriate number of transition functions is determined by a sequence of specification tests. For that purpose, a coherent modelling strategy based on statistical inference is presented. It relies heavily on Lagrange multiplier type misspecification tests, which are easily implemented as they are entirely based on auxiliary regressions. Finite-sample properties of the strategy and tests are examined by simulation. The modelling strategy is illustrated in practice with two real examples: an empirical application to daily exchange rate returns and another to daily coffee futures returns.
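Schematically, the decomposition described has the following form (our sketch of the abstract's description; the exact notation should be checked against the paper):

    \sigma_t^2 = h_t \, g_t, \qquad
    g_t = \delta_0 + \sum_{l=1}^{r} \delta_l \, G_l\!\left(t/T;\, \gamma_l, c_l\right), \qquad
    G_l\!\left(t/T;\, \gamma_l, c_l\right) = \Bigl(1 + \exp\bigl\{-\gamma_l \,(t/T - c_l)\bigr\}\Bigr)^{-1},

where h_t follows a conventional GARCH recursion and the sequence of specification tests determines the number r of logistic transitions.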
Abstract:
The value given by commuters to the variability of travel times is empirically analysed using stated preference data from Barcelona (Spain). Respondents are asked to choose between alternatives that differ in terms of cost, average travel time, variability of travel times, and departure time. Different specifications of a scheduling choice model are used to measure the influence of various socioeconomic characteristics. Our results show that travel time variability…
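Scheduling choice models of the kind referred to typically take the following form, with expected schedule delays capturing the cost of variability (a generic sketch in the spirit of Small's scheduling model, not necessarily the paper's exact specification):

    U = \alpha\, c + \beta\, \mathbb{E}[T] + \gamma\, \mathbb{E}[\mathrm{SDE}] + \delta\, \mathbb{E}[\mathrm{SDL}], \qquad
    \mathrm{SDE} = \max(0,\, \mathrm{PAT} - t_{\mathrm{arr}}), \quad
    \mathrm{SDL} = \max(0,\, t_{\mathrm{arr}} - \mathrm{PAT}),

where c is travel cost, T travel time, t_arr the arrival time, and PAT the preferred arrival time; travel time variability enters through the expectations of the early and late schedule delays.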
Abstract:
VAR methods have been used to model the inter-relationships between inflows into and outflows from unemployment and vacancies, using tools such as impulse response analysis. In order to investigate whether such impulse responses change over the course of the business cycle or over time, this paper uses TVP-VARs for US and Canadian data. For the US, we find interesting differences between the most recent recession and earlier recessions and expansions. In particular, we find the immediate effect of a negative shock on both inflow and outflow hazards to be larger in 2008 than in earlier times. Furthermore, the effect of this shock takes longer to decay. For Canada, we find less evidence of time variation in impulse responses.
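Schematically, a TVP-VAR lets the coefficients (and usually the error covariance) drift over time, so impulse responses computed at different dates can differ (a generic sketch, not the paper's exact specification):

    y_t = c_t + \sum_{j=1}^{p} B_{j,t}\, y_{t-j} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Sigma_t),
    \theta_t = \theta_{t-1} + \nu_t, \qquad \nu_t \sim N(0, Q),

where θ_t stacks the intercepts c_t and the coefficient matrices B_{j,t}. Impulse responses are then computed from the coefficients prevailing at each date, which is what allows the 2008 recession to look different from earlier episodes.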
Abstract:
This study presents a first attempt to extend the "Multi-scale integrated analysis of societal and ecosystem metabolism (MuSIASEM)" approach to a spatial dimension using GIS techniques in the Metropolitan area of Barcelona. We use a combination of census and commercial databases, along with a detailed land cover map, to create a layer of Common Geographic Units that we populate with the local values of human time spent in different activities according to the MuSIASEM hierarchical typology. In this way, we mapped the hours of available human time with regard to the working hours spent in different locations, highlighting the gradients in spatial density between the residential locations of workers (generating the work supply) and the places where the working hours actually take place. We found a strong trimodal pattern of clumps of areas with different combinations of values of time spent on household activities and on paid work. We also measured and mapped the spatial segregation between these two activities and put forward the conjecture that this segregation increases with higher energy throughput, as the size of the functional units must be able to cope with the flow of exosomatic energy. Finally, we discuss the effectiveness of the approach by comparing our geographic representation of exosomatic throughput to the one obtained from conventional methods.
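One standard way to quantify segregation between two activities across spatial units is a dissimilarity index; the sketch below is illustrative only, as the abstract does not state which index the study uses:

    # Duncan dissimilarity index between two time-use distributions over
    # the same geographic units: 0 = identical spatial distribution,
    # 1 = fully segregated (illustrative, not necessarily the study's index).
    def dissimilarity(household_hours, paid_work_hours):
        h_tot = sum(household_hours)
        w_tot = sum(paid_work_hours)
        return 0.5 * sum(abs(h / h_tot - w / w_tot)
                         for h, w in zip(household_hours, paid_work_hours))

    print(dissimilarity([10, 40, 50], [45, 40, 15]))  # 0.35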
Abstract:
Imaging mass spectrometry (IMS) is useful for visualizing the localization of phospholipids on biological tissue surfaces, creating great opportunities for IMS in lipidomic investigations. With advancements in IMS of lipids, there is a demand for large-scale tissue studies necessitating stable, efficient and well-defined sample handling procedures. The work in this article shows the effects of different storage conditions on the phospholipid composition of sectioned tissues from mouse organs. We took serial sections from mouse brain, kidney and liver, thaw-mounted them onto ITO-coated glass slides, stored them under various conditions, and analyzed them at fixed time points. A global decrease in phospholipid signal intensity is shown to occur and to be a function of time and temperature. Contrary to the global decrease, oxidized phospholipid and lysophospholipid species are found to increase within 2 h and 24 h, respectively, when mounted sections are kept at ambient room conditions. Imaging experiments reveal that degradation products increase globally across the tissue. Degradation is shown to be inhibited by cold temperatures, with sample integrity maintained up to a week after storage in a −80 °C freezer under N2 atmosphere. Overall, the results demonstrate a timeline of the effects of lipid degradation specific to sectioned tissues and provide several lipid species which can serve as markers of degradation. Importantly, the timeline demonstrates that oxidative sample degradation begins appearing within the normal timescale of IMS sample preparation of lipids (i.e., 1-2 h) and that long-term degradation is global. Taken together, these results strengthen the notion that standardized procedures are required for phospholipid IMS of large sample sets, or in studies where many serial sections are prepared together but analyzed over time, such as in 3-D IMS reconstruction experiments.
Abstract:
Time series regression models are especially suitable in epidemiology for evaluating short-term effects of time-varying exposures on health. The problem is that the potential for confounding in time series regression is very high, so it is important that trend and seasonality are properly accounted for. Our paper reviews the statistical models commonly used in time-series regression methods, especially those allowing for serial correlation, which makes them potentially useful for selected epidemiological purposes. In particular, we discuss time-series regression for counts using a wide range of Generalised Linear Models as well as Generalised Additive Models. In addition, critical points recently raised about the use of statistical software for GAMs are stressed, and reanalyses of time series data on air pollution and health are performed in order to update results already published. Applications are offered through an example on the relationship between asthma emergency admissions and photochemical air pollutants.
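As a schematic of the Poisson time-series regression discussed here, with harmonic terms standing in for the trend/seasonality smoothers; the data are synthetic and the variable names are ours:

    # Poisson regression for daily counts: pollutant effect adjusted for
    # trend and seasonality (sketch only; a GAM would use penalized smooths).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 3 * 365
    t = np.arange(n)
    ozone = rng.gamma(4.0, 10.0, n)                        # synthetic exposure
    season = np.column_stack(
        [np.sin(2 * np.pi * k * t / 365.25) for k in (1, 2)] +
        [np.cos(2 * np.pi * k * t / 365.25) for k in (1, 2)])
    mu = np.exp(1.5 + 0.002 * ozone + 0.1 * season[:, 0])
    y = rng.poisson(mu)                                    # daily admissions

    X = sm.add_constant(np.column_stack([ozone, t / n, season]))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(fit.params[1])  # log relative rate per unit of exposure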
Abstract:
This paper tests the internal consistency of time trade-off utilities. We find significant violations of consistency in the direction predicted by loss aversion. The violations disappear for higher gauge durations. We show that loss aversion can also explain that, for short gauge durations, time trade-off utilities exceed standard gamble utilities. Our results suggest that time trade-off measurements that use relatively short gauge durations, like the widely used EuroQol algorithm (Dolan 1997), are affected by loss aversion and lead to utilities that are too high.
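For reference, the textbook time trade-off construction underlying these tests: a respondent indifferent between x years in full health and t years (the gauge duration) in health state q assigns

    (x,\ \text{full health}) \sim (t,\ q) \;\Longrightarrow\; \mathrm{TTO}(q) = x/t.

Under loss aversion, the years traded off are weighted as losses relative to the gauge duration, which pushes the indifference value x, and hence x/t, upward when t is short, consistent with the violations reported above.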
Abstract:
In dealing with systems as complex as the cytoskeleton, we need organizing principles or, short of that, an empirical framework into which these systems fit. We report here unexpected invariants of cytoskeletal behavior that comprise such an empirical framework. We measured elastic and frictional moduli of a variety of cell types over a wide range of time scales and using a variety of biological interventions. In all instances elastic stresses dominated at frequencies below 300 Hz, increased only weakly with frequency, and followed a power law; no characteristic time scale was evident. Frictional stresses paralleled the elastic behavior at frequencies below 10 Hz but approached a Newtonian viscous behavior at higher frequencies. Surprisingly, all data could be collapsed onto master curves, the existence of which implies that elastic and frictional stresses share a common underlying mechanism. Taken together, these findings define an unanticipated integrative framework for studying protein interactions within the complex microenvironment of the cell body, and appear to set limits on what can be predicted about integrated mechanical behavior of the matrix based solely on cytoskeletal constituents considered in isolation. Moreover, these observations are consistent with the hypothesis that the cytoskeleton of the living cell behaves as a soft glassy material, wherein cytoskeletal proteins modulate cell mechanical properties mainly by changing an effective temperature of the cytoskeletal matrix. If so, then the effective temperature becomes an easily quantified determinant of the ability of the cytoskeleton to deform, flow, and reorganize.
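The power-law behavior described translates into the following generic form (a schematic of the soft glassy rheology picture; the symbols are ours, not the authors'):

    G'(f) = G_0 \left(\frac{f}{f_0}\right)^{x-1}, \qquad 1 < x < 2,

where the exponent x plays the role of an effective temperature: as x approaches 1 the matrix behaves as an elastic solid, while larger x gives more fluid-like behavior. The frictional modulus tracks G' at low frequencies (structural damping) and crosses over toward Newtonian viscous behavior, G''(f) ≈ 2π f μ, at high frequencies, consistent with the master-curve collapse reported above.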
Abstract:
Sex differences in circadian rhythms have been reported with some conflicting results. The timing of sleep and length of time in bed have not been considered, however, in previous such studies. The current study has 3 major aims: (1) replicate previous studies in a large sample of young adults for sex differences in sleep patterns and dim light melatonin onset (DLMO) phase; (2) in a subsample constrained by matching across sex for bedtime and time in bed, confirm sex differences in DLMO and phase angle of DLMO to bedtime; (3) explore sex differences in the influence of sleep timing and length of time in bed on phase angle. A total of 356 first-year Brown University students (207 women) aged 17.7 to 21.4 years (mean = 18.8 years, SD = 0.4 years) were included in these analyses. Wake time was the only sleep variable that showed a sex difference. DLMO phase was earlier in women than men and phase angle wider in women than men. Shorter time in bed was associated with wider phase angle in women and men. In men, however, a 3-way interaction indicated that phase angles were influenced by both bedtime and time in bed; a complex interaction was not found for women. These analyses in a large sample of young adults on self-selected schedules confirm a sex difference in wake time, circadian phase, and the association between circadian phase and reported bedtime. A complex interaction with length of time in bed occurred for men but not women. We propose that these sex differences likely indicate fundamental differences in the biology of the sleep and circadian timing systems as well as in behavioral choices.
Abstract:
BACKGROUND: In heart transplantation, antibody-mediated rejection (AMR) is diagnosed and graded on the basis of immunopathologic (C4d-CD68) and histopathologic criteria found on endomyocardial biopsies (EMB). Because some pathologic AMR (pAMR) grades may be associated with clinical AMR, and because humoral responses may be affected by the intensity of immunosuppression during the first posttransplantation year, we investigated the incidence and positive predictive values (PPV) of C4d-CD68 and pAMR grades for clinical AMR as a function of time. METHODS: All 564 EMB from 40 adult heart recipients were graded for pAMR during the first posttransplantation year. Clinical AMR was diagnosed by the simultaneous occurrence of pAMR on EMB, donor-specific antibodies, and allograft dysfunction. RESULTS: One patient demonstrated clinical AMR at postoperative day 7 and one at 6 months (1-year incidence 5%). C4d-CD68 was found on 4.7% of EMB, with a "decrescendo" pattern over time (7% during the first 4 months vs. 1.2% during the last 8 months; P < 0.05). Histopathologic criteria of AMR occurred on 10.3% of EMB with no particular time pattern. Only the infrequent (1.4%) pAMR2 grade (simultaneous histopathologic and immunopathologic markers) was predictive of clinical AMR, particularly after the initial postoperative period (first 4 months and last 8 months PPV = 33%-100%; P < 0.05). CONCLUSION: In the first posttransplantation year, AMR immunopathologic and histopathologic markers were relatively frequent, but only their simultaneous occurrence (pAMR2) was predictive of clinical AMR. Furthermore, posttransplantation time may modulate the occurrence of C4d-CD68 on EMB and thus the incidence of pAMR2 and its relevance to the diagnosis of clinical AMR.
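For reference, the positive predictive value reported here is the proportion of marker-positive biopsies in which clinical AMR is actually present:

    \mathrm{PPV} = \Pr(\text{clinical AMR} \mid \text{marker positive}) = \frac{TP}{TP + FP}.

With clinical AMR rare (1-year incidence 5%), even a few false positives depress the PPV sharply, which is why its evolution over the posttransplantation year matters.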
Abstract:
The computer simulation of reaction dynamics has nowadays reached a remarkable degree of accuracy. Triatomic elementary reactions are rigorously studied in great detail using a considerable variety of Quantum Dynamics computational tools available to the scientific community. In our contribution we compare the performance of two quantum scattering codes in the computation of reaction cross sections of a triatomic benchmark reaction, the gas-phase reaction Ne + H2+ → NeH+ + H. The computational codes are selected as representative of time-dependent (Real Wave Packet [ ]) and time-independent (ABC [ ]) methodologies. The main conclusion to be drawn from our study is that both strategies are, to a great extent, not competing but rather complementary. While time-dependent calculations offer advantages with respect to the energy range that can be covered in a single simulation, time-independent approaches offer much more detailed information from each single-energy calculation. Further details, such as the calculation of reactivity at very low collision energies or the computational effort required to account for the Coriolis couplings, are analyzed in this paper.
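Both codes ultimately target the same observable, the reaction cross section assembled from partial-wave reaction probabilities (a standard expression, not specific to either code):

    \sigma_r(E) = \frac{\pi}{k^2} \sum_{J=0}^{J_{\max}} (2J+1)\, P_r^{J}(E), \qquad k^2 = \frac{2\mu E_{\mathrm{col}}}{\hbar^2},

where P_r^J is the total reaction probability at total angular momentum J. A single wavepacket propagation yields P_r^J(E) over a band of energies, whereas a time-independent calculation resolves full state-to-state detail at each fixed E, which is the complementarity noted above.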