966 results for Time complexity
Abstract:
Wireless Mesh Networks (WMNs) are increasingly deployed to enable thousands of users to share, create, and access live video streaming with different characteristics and content, such as video surveillance and football matches. In this context, there is a need for new mechanisms for assessing the quality level of videos, because operators are seeking to control their delivery process and optimize their network resources, while increasing the user’s satisfaction. However, the development of in-service and non-intrusive Quality of Experience assessment schemes for real-time Internet videos with different complexity and motion levels, Group of Pictures (GoP) lengths, and characteristics remains a significant challenge. To address this issue, this article proposes a non-intrusive parametric real-time video quality estimator, called MultiQoE, which correlates wireless networks’ impairments, videos’ characteristics, and users’ perception into a predicted Mean Opinion Score. An instance of MultiQoE was implemented in WMNs, and performance evaluation results demonstrate the efficiency and accuracy of MultiQoE in predicting the user’s perception of live video streaming services when compared to subjective, objective, and well-known parametric solutions.
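The abstract does not give MultiQoE's mapping function. As a purely illustrative sketch (not the authors' model), a parametric, non-intrusive estimator of this kind can be expressed as a regression from measurable network impairments and video descriptors to a predicted MOS; every feature name and value below is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training sessions described by network impairments and video
# characteristics (column names are illustrative assumptions, not MultiQoE's inputs):
# packet_loss_rate, mean_delay_ms, jitter_ms, gop_length, motion_level
X = np.array([
    [0.00,  20,  2, 12, 0.3],
    [0.01,  35,  5, 12, 0.3],
    [0.03,  60,  9, 30, 0.7],
    [0.08,  90, 15, 30, 0.9],
    [0.15, 120, 25, 30, 0.9],
    [0.05,  75, 12, 18, 0.5],
])
# Subjective Mean Opinion Scores (1 = bad ... 5 = excellent) for those sessions.
y = np.array([4.6, 4.2, 3.5, 2.4, 1.6, 3.1])

# Fit a simple parametric mapping from measurable parameters to predicted MOS.
model = LinearRegression().fit(X, y)

# Predict the MOS of a new session without inspecting the decoded video (non-intrusive).
new_session = np.array([[0.04, 70, 10, 30, 0.8]])
print("Predicted MOS:", round(float(model.predict(new_session)[0]), 2))
```

In a real deployment the coefficients would be calibrated against subjective test campaigns rather than the handful of made-up points used here.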
Abstract:
Although several detailed models of molecular processes essential for circadian oscillations have been developed, their complexity makes intuitive understanding of the oscillation mechanism difficult. The goal of the present study was to reduce a previously developed, detailed model to a minimal representation of the transcriptional regulation essential for circadian rhythmicity in Drosophila. The reduced model contains only two differential equations, each with time delays. A negative feedback loop is included, in which PER protein represses per transcription by binding the dCLOCK transcription factor. A positive feedback loop is also included, in which dCLOCK indirectly enhances its own formation. The model simulated circadian oscillations, light entrainment, and a phase-response curve with qualitative similarities to experiment. Time delays were found to be essential for simulation of circadian oscillations with this model. To examine the robustness of the simplified model to fluctuations in molecule numbers, a stochastic variant was constructed. Robust circadian oscillations and entrainment to light pulses were simulated with fewer than 80 molecules of each gene product present on average. Circadian oscillations persisted when the positive feedback loop was removed. Moreover, elimination of positive feedback did not decrease the robustness of oscillations to stochastic fluctuations or to variations in parameter values. Such reduced models can aid understanding of the oscillation mechanisms in Drosophila and in other organisms in which feedback regulation of transcription may play an important role.
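The reduced model's equations are not reproduced in the abstract, so the sketch below only illustrates the general structure described there: two delayed variables, with PER repressing its own production and a positive-feedback-like term on dCLOCK. Equations, parameters, and delays are hypothetical, and the delays are handled with a plain fixed-step Euler scheme over a stored history.

```python
import numpy as np

# Hypothetical two-variable delay model (NOT the published equations):
# P ~ PER protein, C ~ free dCLOCK.
dt, t_end = 0.01, 480.0                  # step and total simulated time (hours)
tau_p, tau_c = 8.0, 4.0                  # hypothetical delays (h)
d_p, d_c = int(tau_p / dt), int(tau_c / dt)

steps = int(t_end / dt)
P = np.full(steps, 0.5)
C = np.full(steps, 1.0)

for k in range(steps - 1):
    P_del = P[k - d_p] if k >= d_p else P[0]   # delayed values (constant pre-history)
    C_del = C[k - d_c] if k >= d_c else C[0]
    # Delayed negative feedback: PER (bound to dCLOCK) represses per transcription.
    dP = 1.0 / (1.0 + P_del**4) - 0.15 * P[k]
    # dCLOCK formation reduced by delayed PER, plus a saturating self-enhancement term.
    dC = 0.3 / (1.0 + P_del) + 0.05 * C_del / (1.0 + C_del) - 0.1 * C[k]
    P[k + 1] = P[k] + dt * dP
    C[k + 1] = C[k] + dt * dC

half = steps // 2
print("late-phase PER min/max:",
      round(float(P[half:].min()), 3), round(float(P[half:].max()), 3))
```

With sufficiently steep repression and long enough delays, delayed negative feedback of this form can sustain oscillations, which is the qualitative point the reduced model makes.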
Abstract:
Although heterogeneity and time are central aspects of economic activity, it was predominantly the Austrian School of economics that emphasized these two aspects. In this paper we argue that the explicit consideration of heterogeneity and time is of increasing importance due to the increasing environmental and resource problems faced by humankind today. It is shown that neo-Austrian capital theory, which revived Austrian ideas employing a formal approach in the 1970s, is not only well suited to address issues of structural change and of accompanying unemployment induced by technical progress but can also be employed for an encompassing ecological-economic analysis demanded by ecological economics. However, complexity, uncertainty, and real ignorance limit the applicability of formal economic analysis. Therefore, we conclude that economic analysis has to be supplemented by considerations of political philosophy.
Abstract:
We present applicative theories of words corresponding to weak, and especially logarithmic, complexity classes. The theories for the logarithmic hierarchy and alternating logarithmic time formalise function algebras with concatenation recursion as the main principle. We present two theories for logarithmic space, where the first formalises a new two-sorted algebra which is very similar to Cook and Bellantoni's famous two-sorted algebra B for polynomial time [4]. The second theory describes logarithmic space by formalising concatenation recursion and sharply bounded recursion. All theories contain the predicate W, representing words, and the predicate V, representing temporarily inaccessible words. They are inspired by Cantini's theories [6] formalising B.
Abstract:
A time-lapse pressure tomography inversion approach is applied to characterize the CO2 plume development in a virtual deep saline aquifer. Deep CO2 injection leads to mixed-phase flow properties that vary depending on the CO2 saturation. Analogous to the crossed ray paths of a seismic tomographic experiment, pressure tomography creates streamline patterns by injecting brine prior to CO2 injection or by injecting small amounts of CO2 into the two-phase (brine and CO2) system at different depths. In a first step, the introduced pressure responses at observation locations are utilized for a computationally rapid and efficient eikonal-equation-based inversion to reconstruct the heterogeneity of the subsurface with diffusivity (D) tomograms. Information about the plume shape can be derived by comparing D-tomograms of the aquifer at different times. In a second step, the aquifer is subdivided into two zones of constant values of hydraulic conductivity (K) and specific storage (Ss) through a clustering approach. For the CO2 plume, mixed-phase K and Ss values are estimated by minimizing the difference between calculated and “true” pressure responses using a single-phase flow simulator to reduce the computing complexity. Finally, the estimated flow property is converted to gas saturation by a single-phase proxy, which represents an integrated value of the plume. This novel approach is tested first with a doublet well configuration, and it reveals the great potential of pressure-tomography-based concepts for characterizing and monitoring deep aquifers, as well as the evolution of a CO2 plume. Still, field testing will be required to better assess the applicability of this approach.
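The second inversion step, estimating constant K and Ss for a zone by matching pressure responses, is in essence a least-squares parameter fit around a forward model. The sketch below uses a textbook Theis drawdown solution purely as a stand-in for the single-phase flow simulator mentioned in the abstract; the well rate, geometry, and "true" values are all hypothetical.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import minimize

# Stand-in forward model: Theis drawdown for transient radial single-phase flow,
# s(r, t) = Q / (4 pi T) * E1(u), with u = r^2 S / (4 T t), T = K*b, S = Ss*b.
def theis_drawdown(K, Ss, r, t, Q=1e-3, b=20.0):
    T = K * b                     # transmissivity [m^2/s]
    S = Ss * b                    # storativity [-]
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

r_obs = 50.0                                      # observation distance [m]
t_obs = np.array([60.0, 300.0, 900.0, 3600.0])    # observation times [s]

# Synthetic "true" responses; in practice these would be the measured pressure data.
true_K, true_Ss = 1e-5, 1e-6
s_obs = theis_drawdown(true_K, true_Ss, r_obs, t_obs)

# Misfit between simulated and observed responses, parameterised in log space
# so that K and Ss stay positive during the search.
def misfit(x):
    K, Ss = 10.0**x[0], 10.0**x[1]
    return np.sum((theis_drawdown(K, Ss, r_obs, t_obs) - s_obs)**2)

result = minimize(misfit, x0=[-4.0, -5.0], method="Nelder-Mead")
print("estimated K, Ss:", 10.0**result.x[0], 10.0**result.x[1])
```

In the approach described above, the same misfit idea is applied zone by zone with a proper flow simulator instead of the analytical stand-in.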
Abstract:
CONTEXT Complex steroid disorders such as P450 oxidoreductase deficiency or apparent cortisone reductase deficiency may be recognized by steroid profiling using chromatographic mass spectrometric methods. These methods are highly specific and sensitive, and provide a complete spectrum of steroid metabolites in a single measurement of one sample, which makes them superior to immunoassays. The steroid metabolome during the fetal-neonatal transition is characterized by a) the metabolites of the fetal-placental unit at birth, b) the fetal adrenal androgens until its involution 3-6 months postnatally, and c) the steroid metabolites produced by the developing endocrine organs. All these developmental events change the steroid metabolome in an age- and sex-dependent manner during the first year of life. OBJECTIVE The aim of this study was to provide normative values for the urinary steroid metabolome of healthy newborns at short time intervals in the first year of life. METHODS We conducted a prospective, longitudinal study to measure 67 urinary steroid metabolites in 21 male and 22 female term healthy newborn infants at 13 time-points from week 1 to week 49 of life. Urine samples were collected from newborn infants before discharge from hospital and from healthy infants at home. Steroid metabolites were measured by gas chromatography-mass spectrometry (GC-MS) and steroid concentrations corrected for urinary creatinine excretion were calculated. RESULTS 61 steroids showed age specificity and 15 steroids showed sex specificity. The highest urinary steroid concentrations were found in both sexes for progesterone derivatives, in particular 20α-DH-5α-DH-progesterone, and for highly polar 6α-hydroxylated glucocorticoids. The steroids peaked at week 3 and decreased by ∼80% at week 25 in both sexes. The decline of progestins, androgens and estrogens was more pronounced than that of glucocorticoids, whereas the excretion of corticosterone and its metabolites and of mineralocorticoids remained constant during the first year of life. CONCLUSION The urinary steroid profile changes dramatically during the first year of life and correlates with the physiologic developmental changes during the fetal-neonatal transition. Thus, detailed normative data during this time period permit the use of steroid profiling as a powerful diagnostic tool.
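The creatinine correction mentioned in the methods is, in common laboratory practice, a normalisation of each metabolite concentration by the urinary creatinine concentration of the same spot sample; the numbers and units below are hypothetical.

```python
# Hypothetical spot-urine measurements
steroid_conc_ug_per_L = 85.0     # metabolite concentration (µg/L)
creatinine_mmol_per_L = 2.4      # urinary creatinine (mmol/L)

# Creatinine-corrected excretion, expressed per mmol of creatinine
corrected = steroid_conc_ug_per_L / creatinine_mmol_per_L
print(f"{corrected:.1f} µg/mmol creatinine")
```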
Abstract:
The mental speed approach explains individual differences in intelligence by faster information processing in individuals with higher compared to lower intelligence, especially in elementary cognitive tasks (ECTs). One of the most examined ECTs is the Hick paradigm. The present study aimed to contrast reaction time (RT) and P3 latency in a Hick task as predictors of intelligence. Although both RT and P3 latency are commonly used as indicators of mental speed, it is also known that they measure different aspects of information processing. Participants were 113 female students. RT and P3 latency were measured while participants completed the Hick task with four levels of complexity. Intelligence was assessed with Cattell's Culture Fair Test. An RT factor and a P3 factor were extracted by employing a PCA across complexity levels. There was no significant correlation between the factors. Commonality analysis was used to determine the proportions of unique and shared variance in intelligence explained by the RT and P3 latency factors. RT and P3 latency explained 5.5% and 5% of unique variance in intelligence, respectively. However, the two speed factors did not explain a significant portion of shared variance. This result suggests that RT and P3 latency in the Hick paradigm are measuring different aspects of information processing that explain different parts of variance in intelligence.
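For two predictors, commonality analysis reduces to a handful of R² comparisons: each predictor's unique contribution is the drop in R² when it is removed from the full model, and the shared part is what remains. A minimal sketch with simulated data (not the study's data; all effect sizes are made up) follows.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 113                                    # mirrors only the sample size reported above
rt = rng.normal(size=n)                    # simulated RT factor scores
p3 = 0.3 * rt + rng.normal(size=n)         # simulated P3 latency factor scores
iq = -0.25 * rt - 0.22 * p3 + rng.normal(size=n)   # simulated intelligence scores

def r2(X, y):
    return LinearRegression().fit(X, y).score(X, y)

r2_full = r2(np.column_stack([rt, p3]), iq)
r2_rt = r2(rt.reshape(-1, 1), iq)
r2_p3 = r2(p3.reshape(-1, 1), iq)

unique_rt = r2_full - r2_p3               # variance explained only by RT
unique_p3 = r2_full - r2_rt               # variance explained only by P3 latency
common = r2_full - unique_rt - unique_p3  # variance shared by both predictors

print(f"unique RT: {unique_rt:.3f}, unique P3: {unique_p3:.3f}, common: {common:.3f}")
```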
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one value per variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU" provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
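As a generic illustration of the central idea, turning a raw vital-sign series into a trend feature that a classifier can use alongside a point-in-time snapshot, the sketch below fits a slope over a fixed window and compares models with and without that feature. The window length, the variable, the simulated data, and the resulting numbers are hypothetical and not those of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def simulate_series(arrest):
    """Toy 30-minute heart-rate series sampled once per minute before the reference time."""
    t = np.arange(30)
    drift = -0.8 * t if arrest else 0.0 * t   # arrest cases drift downward in this toy example
    return 120 + drift + rng.normal(0, 5, size=30)

labels = rng.integers(0, 2, size=400)
series = [simulate_series(lbl) for lbl in labels]

def features(x):
    snapshot = x[-1]                                 # traditional single-value variable
    trend = np.polyfit(np.arange(len(x)), x, 1)[0]   # slope over the window (trend analysis)
    return snapshot, trend

X = np.array([features(x) for x in series])
y = np.array(labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for cols, name in [([0], "snapshot only"), ([0, 1], "snapshot + trend")]:
    clf = LogisticRegression().fit(X_tr[:, cols], y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, cols])[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```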
Abstract:
A clear statement, cited verbatim from Byers et al. (1938), defines the framework of this special issue: “True soil is the product of the action of climate and living organism upon the parent material, as conditioned by the local relief. The length of time during which these forces are operative is of great importance in determining the character of the ultimate product. Drainage conditions are also important and are controlled by local relief, by the nature of the parent material or underlying rock strata, or by the amount of precipitation in relation to rate of percolation and runoff water. There are, therefore, five principal factors of soil formation: Parent material, climate, biological activity, relief and time. These soil forming factors are interdependent, each modifying the effectiveness of the others.” Due to these various processes associated with its formation and genesis, soil dynamics reveals a high complexity that creates several levels of structure, using this term in a broad sense.
Abstract:
Magnetoencephalography (MEG) allows the real-time recording of neural activity and oscillatory activity in distributed neural networks. We applied a non-linear complexity analysis to resting-state neural activity as measured using whole-head MEG. Recordings were obtained from 20 unmedicated patients with major depressive disorder and 19 matched healthy controls. Subsequently, after 6 months of pharmacological treatment with the antidepressant mirtazapine 30 mg/day, patients received a second MEG scan. A measure of the complexity of neural signals, the Lempel–Ziv Complexity (LZC), was derived from the MEG time series. We found that depressed patients showed higher pre-treatment complexity values compared with controls, and that complexity values decreased after 6 months of effective pharmacological treatment, although this effect was statistically significant only in younger patients. The main treatment effect was to recover the tendency observed in controls of a positive correlation between age and complexity values. Importantly, the reduction of complexity with treatment correlated with the degree of clinical symptom remission. We suggest that LZC, a formal measure of neural activity complexity, is sensitive to the dynamic physiological changes observed in depression and may potentially offer an objective marker of depression and its remission after treatment.
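Lempel–Ziv Complexity is usually computed by binarising the time series (commonly around its median) and counting the phrases of the LZ76 exhaustive parsing of the resulting symbol string, often normalised so that values are comparable across signal lengths. A minimal sketch, not taken from the paper, is shown below.

```python
import numpy as np

def lz76_phrase_count(symbols):
    """Number of phrases in the LZ76 exhaustive parsing of a symbol string."""
    n, phrases, start = len(symbols), 0, 0
    while start < n:
        length = 1
        # Extend the current phrase while it can still be copied from the preceding text.
        while start + length <= n and symbols[start:start + length] in symbols[:start + length - 1]:
            length += 1
        phrases += 1
        start += length
    return phrases

def lempel_ziv_complexity(signal):
    """Binarise a 1-D signal around its median and return its normalised LZC."""
    med = np.median(signal)
    s = "".join('1' if x > med else '0' for x in signal)
    n = len(s)
    return lz76_phrase_count(s) * np.log2(n) / n   # common normalisation: c(n) * log2(n) / n

# Toy check: random noise should be markedly more complex than a pure sine wave.
t = np.linspace(0, 10, 2000)
print("sine  LZC:", round(lempel_ziv_complexity(np.sin(2 * np.pi * t)), 3))
print("noise LZC:", round(lempel_ziv_complexity(np.random.default_rng(0).normal(size=2000)), 3))
```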
Abstract:
Statically predicting the running time of programs has many applications, ranging from task scheduling in parallel execution to proving the ability of a program to meet strict time constraints. A starting point for attacking this problem is to infer the computational complexity of such programs (or fragments thereof). This is one of the reasons why the development of static analysis techniques for inferring cost-related properties of programs (usually upper and/or lower bounds of actual costs) has received considerable attention.
Abstract:
This research proposes a generic methodology for dimensionality reduction upon time-frequency representations applied to the classification of different types of biosignals. The methodology directly deals with the highly redundant and irrelevant data contained in these representations, combining a first stage of irrelevant data removal by variable selection, with a second stage of redundancy reduction using methods based on linear transformations. The study addresses two techniques that provided a similar performance: the first one is based on the selection of a set of the most relevant time-frequency points, whereas the second one selects the most relevant frequency bands. The first methodology needs a lower quantity of components, leading to a lower feature space; but the second improves the capture of the time-varying dynamics of the signal, and therefore provides a more stable performance. In order to evaluate the generalization capabilities of the proposed methodology, it has been applied to two types of biosignals with different kinds of non-stationary behaviors: electroencephalographic and phonocardiographic biosignals. Even when these two databases contain samples with different degrees of complexity and a wide variety of characterizing patterns, the results demonstrate a good accuracy for the detection of pathologies, over 98%. The results open the possibility to extrapolate the methodology to the study of other biosignals.
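The two-stage pipeline described here, irrelevant-point removal by variable selection followed by linear redundancy reduction, can be sketched generically as below. The spectrogram settings, the univariate relevance score, and the simulated signals are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
fs = 256                                   # sampling rate (Hz), hypothetical

def make_signal(pathological):
    """Toy biosignal: pathological examples carry extra 20 Hz power."""
    t = np.arange(2 * fs) / fs
    x = rng.normal(0, 1, size=t.size)
    if pathological:
        x += 0.8 * np.sin(2 * np.pi * 20 * t)
    return x

labels = rng.integers(0, 2, size=60)
signals = [make_signal(lbl) for lbl in labels]

# Time-frequency representation: flatten each spectrogram into one feature vector.
X = np.vstack([spectrogram(x, fs=fs, nperseg=64)[2].ravel() for x in signals])

# Stage 1: discard irrelevant time-frequency points using a univariate relevance score.
selector = SelectKBest(score_func=f_classif, k=100).fit(X, labels)
X_relevant = selector.transform(X)

# Stage 2: reduce redundancy among the retained points with a linear transformation (PCA).
X_reduced = PCA(n_components=10).fit_transform(X_relevant)
print("feature space:", X.shape, "->", X_reduced.shape)
```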
Abstract:
Over the last decade, Grid computing paved the way for a new level of large scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large scale distributed system, inheriting and expanding the expertise and knowledge that have been obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, both Grids and clouds together can be considered as one of the most important advances in large scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system behavior are needed. Large scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large scale distributed systems could be only a matter of perspective. It could be possible to understand the Grid or cloud behavior as a single entity, instead of a set of resources. This abstraction could provide a different understanding of the system, describing large scale behavior and global events that probably would not be detected analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
In recent years, cities around the world have invested large amounts of money in measures for reducing congestion and car trips. These investments are potential solutions to the well-known urban sprawl phenomenon, also called the “development trap”, which leads to further congestion and a higher proportion of our time spent in slow-moving cars. In this search for solutions, the complex relationship between the urban environment and travel behaviour has been studied in a number of cases. The main question under discussion is how to encourage multi-stop tours. Thus, the objective of this paper is to verify whether unobserved factors influence tour complexity. For this purpose, we use a database from a survey conducted in 2006-2007 in Madrid, a suitable case study for analyzing urban sprawl due to new urban developments and substantial changes in mobility patterns in recent years. A total of 943 individuals were interviewed from 3 selected neighbourhoods (CBD, urban and suburban). We study the effect of unobserved factors on trip frequency. This paper presents the estimation of a hybrid model where the latent variable is called propensity to travel and the discrete choice model comprises 5 tour-type alternatives. The results show that characteristics of the neighbourhoods in Madrid are important to explain trip frequency. The influence of land use variables on trip generation is clear, in particular the presence of retail commerce. Through the estimation of elasticities and forecasting we determine to what extent land-use policy measures modify travel demand. Comparing aggregate elasticities with percentage variations, it can be seen that percentage variations could lead to inconsistent results. The results show that hybrid models explain travel behavior better than traditional discrete choice models.
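The comparison between aggregate elasticities and simple percentage variations can be made concrete with the standard multinomial-logit direct point elasticity, E_i = beta_k * x_ik * (1 - P_i), aggregated over the sample by probability weighting. The coefficient, attribute values, and probabilities below are hypothetical, not estimates from the Madrid survey.

```python
import numpy as np

# Hypothetical individual-level quantities for one tour-type alternative:
beta_retail = 0.35                                         # hypothetical estimated coefficient
retail_density = np.array([0.8, 1.5, 2.3, 0.5, 1.1])       # land-use attribute per individual
p_complex_tour = np.array([0.22, 0.35, 0.48, 0.15, 0.30])  # predicted choice probabilities

# Multinomial-logit direct point elasticity for each individual.
elasticity_i = beta_retail * retail_density * (1.0 - p_complex_tour)

# Aggregate elasticity by sample enumeration (probability-weighted average),
# which in general differs from a naive percentage variation of the mean probability.
aggregate = np.sum(p_complex_tour * elasticity_i) / np.sum(p_complex_tour)
print("individual elasticities:", np.round(elasticity_i, 3))
print("aggregate elasticity   :", round(float(aggregate), 3))
```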
Abstract:
Several authors have analysed the changes of the probability density function of the solar radiation with different time resolutions. Others have studied the significance of these changes when calculations of produced energy are attempted. We have applied different transformations to four Spanish databases in order to clarify the interrelationship between radiation models and produced-energy estimations. Our contribution is straightforward: the complexity of a solar radiation model needed for yearly energy calculations is very low. Twelve monthly mean values of solar radiation are enough to estimate energy with errors below 3%. Time resolutions finer than hourly samples do not significantly improve the results of energy estimations.
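The claim that twelve monthly means suffice can be illustrated with a first-order yearly energy estimate: multiply each monthly mean daily irradiation by the number of days in the month, sum over the year, and scale by system parameters. The irradiation values, area, efficiency, and performance ratio below are made-up placeholders, not values from the Spanish databases.

```python
import numpy as np

# Hypothetical monthly mean daily global irradiation on the array plane (kWh/m^2/day).
monthly_mean_irradiation = np.array(
    [2.1, 2.9, 4.0, 5.2, 6.3, 7.0, 7.2, 6.5, 5.1, 3.6, 2.4, 1.9])
days_per_month = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])

# First-order yearly yield estimate using only the twelve monthly values.
area_m2 = 10.0              # hypothetical array area
efficiency = 0.18           # hypothetical module efficiency
performance_ratio = 0.80    # hypothetical lumped system losses

yearly_irradiation = np.sum(monthly_mean_irradiation * days_per_month)   # kWh/m^2/year
yearly_energy = yearly_irradiation * area_m2 * efficiency * performance_ratio
print(f"yearly irradiation: {yearly_irradiation:.0f} kWh/m^2, estimated energy: {yearly_energy:.0f} kWh")
```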