828 results for Continuous time systems
Abstract:
We introduce a multistable subordinator, which generalizes the stable subordinator to the case of time-varying stability index. This enables us to define a multifractional Poisson process. We study properties of these processes and establish the convergence of a continuous-time random walk to the multifractional Poisson process.
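The abstract gives no simulation recipe, but the idea can be illustrated: the fractional Poisson process has Mittag-Leffler inter-arrival times, so one rough way to mimic a time-varying stability index is to draw each waiting time with the index evaluated at the current time. The sketch below is an illustrative toy, not the authors' construction; the sampler uses a Kozubowski-style representation of the Mittag-Leffler law, and the drifting index function is an arbitrary assumption.

```python
import math
import random

def mittag_leffler_wait(alpha, rng):
    """Sample a Mittag-Leffler waiting time with index alpha in (0, 1].

    Uses the identity sin(a*pi)/tan(a*pi*v) - cos(a*pi)
    = sin(a*pi*(1-v)) / sin(a*pi*v), which is positive for v in (0, 1).
    For alpha = 1 this reduces to an exponential waiting time.
    """
    u = 1.0 - rng.random()   # in (0, 1], keeps log() finite
    v = 1.0 - rng.random()
    if alpha >= 1.0:
        return -math.log(u)
    base = math.sin(alpha * math.pi * (1.0 - v)) / math.sin(alpha * math.pi * v)
    return -math.log(u) * base ** (1.0 / alpha)

def multifractional_poisson_arrivals(alpha_of_t, horizon, seed=0):
    """Arrival times of a toy 'multifractional' Poisson process:
    each waiting time uses the stability index evaluated at the
    time of the previous arrival."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += mittag_leffler_wait(alpha_of_t(t), rng)
        if t > horizon:
            return arrivals
        arrivals.append(t)

# Hypothetical index drifting from heavy-tailed (0.6) toward
# exponential-like (0.95) waiting times.
alpha = lambda t: 0.6 + 0.35 * min(t / 50.0, 1.0)
arr = multifractional_poisson_arrivals(alpha, horizon=50.0)
```

Early arrivals are separated by occasional very long (heavy-tailed) gaps, while later ones look nearly exponential, which is the qualitative signature of a time-varying index.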
Abstract:
This study investigates the role of credit risk in a continuous-time stochastic asset allocation model, since the traditional dynamic framework does not provide flexibility with respect to credit risk. The general model extends the traditional dynamic efficiency framework by explicitly deriving the optimal value function for the infinite-horizon stochastic control problem via a weighted volatility measure of market and credit risk. The model's optimal strategy is then compared to that obtained from a benchmark Markowitz-type dynamic optimization framework to determine which specification adequately reflects the optimal terminal investment returns and strategy under credit and market risk. The paper shows that an investor's optimal terminal return is lower than typically indicated under the traditional mean-variance framework during periods of elevated credit risk. We therefore conclude that, while the traditional dynamic mean-variance approach may indicate the ideal, in the presence of credit risk it does not accurately reflect the observed optimal returns, terminal wealth and portfolio selection strategies.
Abstract:
This work represents the proceedings of the fifteenth symposium, which convened at Colorado State University on May 24, 1985. The two-day meeting was scheduled one month later than usual, i.e., after the spring semester, so that travelers from the Midwest (Iowa State University, Kansas State University and University of Missouri) could enjoy the unique mountain setting provided at Pingree Park. The background of the photograph on the cover depicts the beauty of the area. Contents:
Greg Sinton and S.M. Leo, KSU. Models for the Biodegradation of 2,4-D and Related Xenobiotic Compounds.
V. Bringi, CSU. Intrinsic Kinetics from a Novel Immobilized Cell CSTR.
Steve Birdsell, CU. Novel Microbial Separation Techniques.
Mark Smith, MU. Kinetic Characterization of Growth of E. coli on Glucose.
Michael M. Meagher, ISU. Kinetic Parameters of Di- and Trisaccharide Hydrolysis by Glucoamylase II.
G.T. Jones and A.K. Ghosh Hajra, KSU. Modeling and Simulation of Legume Nodules with Reactive Cores and Inert Shells.
S.A. Patel and C.H. Lee, KSU. Energetic Analysis and Liquid Circulation in an Airlift Fermenter.
Rod R. Fisher, ISU. The Effects of Mixing during Acid Addition of Fractionally Precipitated Protein.
Mark M. Paige, CSU. Fed-batch Fermentations of Clostridium acetobutylicum.
Michael K. Dowd, ISU. A Nonequilibrium Thermodynamic Description of the Variation of Contractile Velocity and Energy Use in Muscle.
David D. Drury, CSU. Analysis of Hollow Fiber Bioreactor Performance for Mammalian Cells by On-Line NMR.
H.Y. Lee, KSU. Process Analysis of Photosynthetic Continuous Culture Systems.
C.J. Wang, MU. Kinetic Considerations in Fermentation of Cheese Whey to Ethanol.
Abstract:
Five sections drilled in multiple holes over a depth transect of more than 2200 m at the Walvis Ridge (SE Atlantic) during Ocean Drilling Program (ODP) Leg 208 resulted in the first complete early Paleogene deep-sea record. Here we present high-resolution stratigraphic records spanning a ~4.3-million-year-long interval of the late Paleocene to early Eocene. This interval includes the Paleocene-Eocene thermal maximum (PETM) as well as the Eocene thermal maximum 2 (ETM2) event. A detailed chronology was developed with nondestructive X-ray fluorescence (XRF) core scanning records and shipboard color data. These records were used to refine the shipboard-derived spliced composite depth for each site and, together with a record from ODP Site 1051, were then used to establish a continuous time series over this interval. Extensive spectral analysis reveals that the early Paleogene sedimentary cyclicity is dominated by precession, modulated by the short (100 kyr) and long (405 kyr) eccentricity cycles. Counting of precession-related cycles at multiple sites results in revised estimates for the duration of magnetochrons C24r and C25n. Direct comparison between the amplitude modulation of the precession component derived from the XRF data and recent models of Earth's orbital eccentricity suggests that the onsets of the PETM and ETM2 are related to a 100-kyr eccentricity maximum. Both events are offset by approximately a quarter of a period from a maximum in the 405-kyr eccentricity cycle, with the major difference that the PETM lags and ETM2 leads a 405-kyr eccentricity maximum. Absolute age estimates for the PETM, ETM2, and the magnetochron boundaries that are consistent with recalibrated radiometric ages and recent models of Earth's orbital eccentricity cannot be precisely determined at present, because the uncertainties in these methods are still too large.
Nevertheless, we provide two possible tuning options, which demonstrate the potential for the development of a cyclostratigraphic framework based on the stable 405-kyr eccentricity cycle for the entire Paleogene.
Abstract:
Ocean acidification and warming are expected to threaten the persistence of tropical coral reef ecosystems. As coral reefs face multiple stressors, the distribution and abundance of corals will depend on the successful dispersal and settlement of coral larvae under changing environmental conditions. To explore this scenario, we used metabolic rate, at the holobiont and molecular levels, as an index for assessing the physiological plasticity of Pocillopora damicornis larvae from Moorea, French Polynesia to conditions of ocean acidity and warming. Larvae were incubated for 6 hours in seawater containing combinations of CO2 concentration (450 and 950 µatm) and temperature (28 and 30°C). Rates of larval oxygen consumption were higher at the elevated temperature. In contrast, high CO2 levels elicited depressed metabolic rates, especially for larvae released later in the spawning period. Rates of citrate synthase, a rate-limiting enzyme in aerobic metabolism, suggested a biochemical limit to increasing oxidative capacity in coral larvae in a warming, acidifying ocean. Biological responses were also compared between larvae released from adult colonies on the same day (cohorts). The metabolic physiology of Pocillopora damicornis larvae varied significantly by day of release. Additionally, we used environmental data collected on a reef in Moorea to provide information about what adult corals and larvae may currently experience in the field. An autonomous pH sensor provided a continuous time series of pH on the natal fringing reef. In February/March 2011, pH values averaged 8.075±0.023. Our results suggest that without adaptation or acclimatization, only a portion of naïve Pocillopora damicornis larvae may have metabolic phenotypes suitable for maintaining function and fitness in an end-of-the-century ocean.
Abstract:
Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.
Abstract:
Criminals are common to all societies. To fight them, communities take different security measures, such as establishing a police force. Thus, crime depletes the common wealth not only through criminal acts but also through the cost of maintaining a police force. In this paper, we present a mathematical model of a criminal-prone self-protected society that is divided into socio-economic classes. We study the effect of a non-null crime rate on a free-of-criminals society, which is taken as a reference system. As a consequence, we define a criminal-prone society as one whose free-of-criminals steady state is unstable under small perturbations of a certain socio-economic context. Finally, we compare two alternative strategies to control crime: (i) enhancing police efficiency, either by enlarging its size or by updating its technology, versus (ii) reducing the appeal of crime or supporting the social classes at risk.
Abstract:
The ASSERT project defined new software engineering methods and tools for the development of critical embedded real-time systems in the space domain. The ASSERT model-driven engineering process was one of the achievements of the project and is based on the concept of property-preserving model transformations. The key element of this process is that non-functional properties of the software system must be preserved during model transformations. Property preservation is carried out through model transformations compliant with the Ravenscar Profile and provides a formal basis for the process. In this way, the so-called Ravenscar Computational Model is central to the whole ASSERT process. This paper describes the work done in the HWSWCO study, whose main objective has been to address the integration of the hardware/software co-design phase into the ASSERT process. To that end, non-functional properties of the software system must also be preserved during hardware synthesis. Keywords: Ada 2005, Ravenscar Profile, hardware/software co-design, real-time systems, high-integrity systems, ORK
Abstract:
This final degree project addresses different aspects of the design, implementation and verification of a real-time system with special characteristics: the UPMSat-2 satellite. The UPMSat-2 project, carried out by a working group of faculty, students and staff of the Universidad Politécnica de Madrid (UPM), is aimed at developing an experimental microsatellite that can be used as an in-orbit technology demonstrator for several research groups at UPM. The Real-Time Systems and Telematic Services Architecture Group (STRAST) at UPM, of which the author is a member, is responsible for designing and building all the on-board and ground-segment software for the satellite. Within these assignments, three main tasks have been carried out and are described in this document: the design and implementation of three different device drivers, the design of an algorithm to manage the non-volatile memory, and the configuration and testing of a software validation facility for the UPMSat-2 Attitude Determination and Control System (ADCS) subsystem. Detailed information on these tasks and their technological basis is presented in the rest of the document.
Abstract:
Real-time systems play an increasingly important role in our society. They are a fundamental component of control systems, which in turn form part of engineering systems used in industrial, military, communications, space and medical applications. Resource scheduling is a central problem in the development of real-time systems. Its purpose is to assign the available resources to the tasks in such a way that their timing constraints are met. For a long time, the state of the art in scheduling methods was rudimentary. Priority-based scheduling methods have now reached a sufficient level of maturity for use in industrial environments, but some open questions may still hinder their adoption. The main goal of this thesis is to study priority-based scheduling methods, to identify the remaining open questions, and to develop protocols, guidelines and implementation templates that make their use in industrial systems more feasible. One open question is the lack of implementation schemes for some of the protocols on standardized, commercial real-time kernels. POSIX and Ada 9X have served to identify the services usually available. A set of implementation templates for periodic and sporadic real-time tasks has been developed, with provision for timing-failure detection, intertask communication, execution-mode changes and failure handling based on recovery groups. The templates have been coded in Ada 9X, and guidelines are provided for analyzing the schedulability of a system developed on this basis. An additional result is the identification of the minimal functionality required to develop real-time systems with the above characteristics. A desirable feature of real-time systems is the capacity to adapt to changes in the environment that were not foreseen at design time, or to misbehaving software modules; in such cases some tasks must be modified or added. Systems are traditionally updated statically: the system is stopped, the new application is installed, and execution is resumed. However, some systems cannot be stopped without material or economic damage. An alternative is to design the system as a set of units that can be replaced without interfering with the execution of the other units. To this end, a dynamic replacement protocol for hard real-time systems has been developed, and its compatibility with priority-based scheduling methods has been verified. Finally, an implementation template of the protocol has been developed.
Abstract:
For taxonomic levels higher than species, the abundance distributions of the number of subtaxa per taxon tend to approximate power laws but often show strong deviations from such laws. Previously, these deviations were attributed to finite-time effects in a continuous-time branching process at the generic level. Instead, we describe herein a simple discrete branching process that generates the observed distributions and find that the distribution's deviation from power law form is not caused by disequilibration, but rather that it is time independent and determined by the evolutionary properties of the taxa of interest. Our model predicts—with no free parameters—the rank-frequency distribution of the number of families in fossil marine animal orders obtained from the fossil record. We find that near power law distributions are statistically almost inevitable for taxa higher than species. The branching model also sheds light on species-abundance patterns, as well as on links between evolutionary processes, self-organized criticality, and fractals.
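A minimal, Yule-style discrete branching toy (not the authors' exact model; the branching probability and step count below are arbitrary assumptions) shows how such a process produces heavy-tailed subtaxa-per-taxon distributions:

```python
import random

def simulate_taxa(steps, p_new_taxon=0.05, seed=1):
    """Each step, one subtaxon lineage branches: usually it adds a new
    subtaxon to its own taxon (picked proportionally to taxon size,
    i.e. preferential attachment), rarely it founds a brand-new taxon.
    Returns the number of subtaxa in each taxon."""
    rng = random.Random(seed)
    counts = [1]          # counts[i] = number of subtaxa in taxon i
    total = 1
    for _ in range(steps):
        if rng.random() < p_new_taxon:
            counts.append(1)          # a new taxon with one subtaxon
        else:
            # choose a subtaxon uniformly: taxon i is picked with
            # probability counts[i] / total
            r = rng.randrange(total)
            i = 0
            while r >= counts[i]:
                r -= counts[i]
                i += 1
            counts[i] += 1            # within-taxon branching
        total += 1
    return counts

counts = sorted(simulate_taxa(20000), reverse=True)
# Rank-frequency plot of `counts` is close to a straight line on
# log-log axes: a few huge taxa, many taxa with a single subtaxon.
```

The time independence noted in the abstract is visible here too: the shape of the sorted `counts` stabilizes as `steps` grows, controlled only by the branching probabilities.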
Abstract:
Mathematical morphology has been an area of intensive research over the last few years. Although many remarkable advances have been achieved, there is still great interest in accelerating morphological operations so that they can be implemented in real-time systems. In this work, we present a new model for computing mathematical morphology operations, the so-called morphological trajectory model (MTM), in which a morphological filter is divided into a sequence of basic operations. A trajectory-based morphological operation (such as dilation or erosion) is then defined as the set of points resulting from the ordered application of the instant basic operations. The MTM approach allows working with different structuring elements, such as disks, and the experiments show that our method is independent of the structuring element size and can be easily applied to industrial systems and high-resolution images.
Abstract:
We explore the role of business services in knowledge accumulation and growth and the determinants of knowledge diffusion including the role of distance. A continuous time model is estimated on several European countries, Japan, and the US. Policy simulations illustrate the benefits for EU growth of the deepening of the single market, the reduction of regulatory barriers, and the accumulation of technology and human capital. Our results support the basic insights of the Lisbon Agenda. Economic growth in Europe is enhanced to the extent that: trade in services increases, technology accumulation and diffusion increase, regulation becomes both less intensive and more uniform across countries, and human capital accumulation increases in all countries.
Abstract:
We study the impact of the different stages of human capital accumulation on the evolution of labor productivity in a model calibrated to the U.S. from 1961 to 2008. We add early childhood education to a standard continuous-time life cycle economy and assume complementarity between educational stages. There are three sectors in the model: the goods sector, the early childhood sector and the formal education sector. Agents are homogeneous and choose the intensity of preschool education, how long to stay in formal school, labor effort and consumption, and there are exogenous distortions to these four decisions. The model matches the data very well and closely reproduces the paths of schooling, hours worked, relative prices and GDP. We find that the reduction in distortions to early education over the period was large and contributed strongly to human capital accumulation. However, due to general equilibrium effects of labor market taxation, marginal changes in the incentives for early education in 2008 had a smaller impact than those for formal education. This is because the former do not decisively affect the decision to join the labor market, while the latter do. Without labor taxation, incentives for preschool are significantly stronger.
Abstract:
This dataset contains continuous time series of land surface temperature (LST) at a spatial resolution of 300 m around the 12 experimental sites of the PAGE21 project (grant agreement number 282700, funded by the EC Seventh Framework Programme theme FP7-ENV-2011). The dataset was produced from hourly LST time series at 25 km scale, retrieved from SSM/I data (André et al., 2015, doi:10.1016/j.rse.2015.01.028) and downscaled to 300 m using a dynamic model and a particle smoothing approach. The methodology rests on two main assumptions: first, LST spatial variability is mostly explained by land cover and soil hydric state; second, LST is unique for a given land cover class within the low-resolution pixel. Under these hypotheses, LST can be estimated using a land cover map and a physically based land surface model constrained with observations through data assimilation. This methodology, described in Mechri et al. (2014, doi:10.1002/2013JD020354), was applied to the ORCHIDEE land surface model (Krinner et al., 2005, doi:10.1029/2003GB002199) to estimate prior values for each land cover class provided by the ESA CCI Land Cover product (Bontemps et al., 2013) at 300 m resolution. The assimilation process (particle smoother) consists of simulating ensembles of LST time series for each land cover class and for a large number of parameter sets. For each parameter set, the resulting temperatures are aggregated according to the grid fraction of each land cover class and compared to the coarse observations. Minimizing the distance between the aggregated model solutions and the observations allows us to select the simulated LSTs and the corresponding parameter sets that fit the observations most closely. The retained parameter sets are then duplicated and randomly perturbed before the next time window is simulated. Finally, the most likely LST of each land cover class is estimated and used to reconstruct LST maps at 300 m resolution using the ESA CCI Land Cover map.
The resulting temperature maps, in which ice pixels were masked, are provided at a daily time step over the nine-year analysis period (2000-2009).
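The selection step of the particle smoother (simulate per-class ensembles, aggregate by land-cover fractions, keep the parameter sets whose aggregated LST best matches the coarse observation) can be caricatured in a few lines. Everything below, the linear stand-in "model", the fractions, the parameter ranges, is an invented substitute for ORCHIDEE and the real SSM/I data:

```python
import random

FRACTIONS = [0.6, 0.3, 0.1]   # land-cover fractions inside one coarse pixel

def simulate_classes(params):
    """Toy land-surface 'model': one LST value (kelvin) per land-cover
    class, driven by a single parameter per class."""
    return [270.0 + p for p in params]

def aggregate(class_lst):
    """Coarse-pixel LST = fraction-weighted mean of the class LSTs."""
    return sum(f * t for f, t in zip(FRACTIONS, class_lst))

def select_particles(observation, n_particles=200, keep=20, seed=3):
    """Draw random parameter sets, rank them by misfit between their
    aggregated LST and the coarse observation, and keep the best ones:
    one assimilation window of the particle smoother."""
    rng = random.Random(seed)
    particles = [[rng.uniform(-10, 10) for _ in FRACTIONS]
                 for _ in range(n_particles)]
    scored = sorted(
        particles,
        key=lambda p: abs(aggregate(simulate_classes(p)) - observation))
    return scored[:keep]   # to be duplicated + perturbed for the next window

obs = 272.5                 # one hypothetical coarse (25 km) observation
best = select_particles(obs)
```

The retained parameter sets match the single coarse value while still yielding one LST per land-cover class, which is exactly what allows the 300 m maps to be reconstructed from the land-cover product.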