Abstract:
Even though the Standard Model with a Higgs mass mH = 125 GeV possesses no bulk phase transition, its thermodynamics still experiences a "soft point" at temperatures around T = 160 GeV, with a deviation from ideal gas thermodynamics. Such a deviation may have an effect on precision computations of weakly interacting dark matter relic abundances if their mass is in the few TeV range, or on leptogenesis scenarios operating in this temperature range. By making use of results from lattice simulations based on a dimensionally reduced effective field theory, we estimate the relevant thermodynamic functions across the crossover. The results are tabulated in a numerical form permitting their insertion as a background equation of state into cosmological particle production/decoupling codes. We find that Higgs dynamics induces a non-trivial "structure" visible e.g. in the heat capacity, but that in general the largest radiative corrections originate from QCD effects, reducing the energy density by a couple of percent from the free value even at T > 160 GeV.
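As a rough illustration of how such a tabulated equation of state could be consumed by a decoupling code, the sketch below interpolates a hypothetical table of effective degrees of freedom g_eff(T) and evaluates the radiation energy density and Hubble rate; the table values are placeholders, not the paper's tabulation.

```python
# Sketch: interpolating a tabulated equation of state for use in a
# relic-abundance / decoupling code. The g_eff values below are
# hypothetical placeholders, not the paper's table.
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical table: temperature [GeV] vs. effective energy-density
# degrees of freedom g_eff(T) around the electroweak crossover.
T_tab    = np.array([100.0, 130.0, 160.0, 200.0, 300.0])
geff_tab = np.array([102.0, 103.5, 105.0, 106.0, 106.5])

geff = CubicSpline(T_tab, geff_tab)

def energy_density(T):
    """Radiation energy density rho(T) = (pi^2/30) * g_eff(T) * T^4, in GeV^4."""
    return np.pi**2 / 30.0 * geff(T) * T**4

def hubble_rate(T, m_planck=1.22e19):
    """H(T) = sqrt(8*pi*rho/3) / M_Pl for a radiation-dominated universe."""
    return np.sqrt(8.0 * np.pi * energy_density(T) / 3.0) / m_planck

print(hubble_rate(160.0))  # H at the electroweak crossover temperature
```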
Abstract:
The production of electron–positron pairs in time-dependent electric fields (Schwinger mechanism) depends non-linearly on the applied field profile. Accordingly, the resulting momentum spectrum is extremely sensitive to small variations of the field parameters. Owing to this non-linear dependence, it has so far not been possible to predict how to choose a field configuration such that a predetermined momentum distribution is generated. We show that quantum kinetic theory along with optimal control theory can be used to approximately solve this inverse problem for Schwinger pair production. We exemplify this by studying the superposition of a small number of harmonic components that produce predetermined signatures in the asymptotic momentum spectrum. In the long run, our results could facilitate the observation of this as yet unobserved pair production mechanism in quantum electrodynamics by providing suggestions for tailored field configurations.
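A minimal sketch of the outer optimal-control loop implied here, under the assumption of a stand-in forward model: the amplitudes of a few harmonic components are tuned so that the computed spectrum approaches a predetermined target. The forward_spectrum function below is a toy placeholder, not a quantum kinetic solver.

```python
# Sketch of the outer optimal-control loop: tune the amplitudes of a small
# number of harmonic field components so that the computed momentum spectrum
# matches a predetermined target. `forward_spectrum` is a stand-in for a
# quantum kinetic solver, NOT the actual solver used in the paper.
import numpy as np
from scipy.optimize import minimize

p_grid = np.linspace(-1.0, 1.0, 200)             # momentum grid (arbitrary units)
target = np.exp(-((p_grid - 0.3) / 0.1) ** 2)    # predetermined target spectrum

def forward_spectrum(amplitudes):
    """Placeholder for the quantum kinetic computation of f(p) for a field
    built from a superposition of a few harmonic components."""
    f = np.zeros_like(p_grid)
    for k, a in enumerate(amplitudes):
        f += a**2 * np.exp(-((p_grid - 0.1 * (k + 1)) / 0.15) ** 2)
    return f

def objective(amplitudes):
    # L2 distance between computed and target spectra
    return np.sum((forward_spectrum(amplitudes) - target) ** 2)

result = minimize(objective, x0=np.ones(3), method="Nelder-Mead")
print(result.x)  # optimized harmonic amplitudes
```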
Abstract:
Detecting lame cows is important in improving animal welfare. Automated tools are potentially useful to enable identification and monitoring of lame cows. The goal of this study was to evaluate the suitability of various physiological and behavioral parameters for automatically detecting lameness in dairy cows housed in a cubicle barn. Lame cows suffering from a claw horn lesion (sole ulcer or white line disease) of one claw of the same hind limb (n=32; group L) and 10 nonlame healthy cows (group C) were included in this study. Lying and standing behavior at night (three-dimensional accelerometers), weight distribution between hind limbs (4-scale weighing platform), feeding behavior at night (noseband sensor), and heart activity (Polar device; Polar Electro Oy, Kempele, Finland) were assessed. Either the entire data set or parts of the data collected over a 48-h period were used for statistical analysis, depending upon the parameter in question. The standing time at night over 12 h and the limb weight ratio (LWR) were significantly higher in group C as compared with group L, whereas the lying time at night over 12 h, the mean limb difference (Δweight), and the standard deviation (SD) of the weight applied on the limb taking less weight were significantly lower in group C as compared with group L. No significant difference was noted between the groups for the parameters of heart activity and feeding behavior at night. The locomotion score of cows in group L was positively correlated with the lying time and Δweight, whereas it was negatively correlated with LWR and SD. The highest sensitivity (0.97) for lameness detection was found for the parameter SD [specificity of 0.80 and an area under the curve (AUC) of 0.84]. The highest specificity (0.90) for lameness detection was found for Δweight (sensitivity=0.78; AUC=0.88) and LWR (sensitivity=0.81; AUC=0.87). The model considering the data of SD together with lying time at night was the best predictor of cows being lame, accounting for 40% of the variation in the likelihood of a cow being lame (sensitivity=0.94; specificity=0.80; AUC=0.86). In conclusion, the data derived from the 4-scale weighing platform, either alone or combined with the lying time at night over 12 h, represent the most valuable parameters for automated identification of lame cows suffering from a claw horn lesion of one individual hind limb.
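For readers who want to reproduce this kind of analysis, a minimal sketch of a two-predictor logistic model (weight-distribution SD plus nightly lying time) with an AUC readout is given below; the data are synthetic placeholders, not the study's measurements.

```python
# Sketch: a logistic-regression lameness classifier of the kind described,
# using the weight-distribution SD and nightly lying time as predictors.
# The data below are synthetic placeholders, not the study's measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_lame, n_ctrl = 32, 10
X = np.vstack([
    np.column_stack([rng.normal(12, 3, n_lame), rng.normal(8.5, 1.0, n_lame)]),  # lame cows
    np.column_stack([rng.normal(6, 2, n_ctrl),  rng.normal(7.0, 1.0, n_ctrl)]),  # control cows
])  # columns: SD of weight on the less-loaded limb, lying time at night (h)
y = np.concatenate([np.ones(n_lame), np.zeros(n_ctrl)])

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, prob))
```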
Abstract:
The discrete-time Markov chain is commonly used to describe changes of health states for chronic diseases in a longitudinal study. Statistical inferences on comparing treatment effects or on finding determinants of disease progression usually require estimation of transition probabilities. In many situations, when the outcome data have some missing observations or the variable of interest (called a latent variable) cannot be measured directly, the estimation of transition probabilities becomes more complicated. In the latter case, a surrogate variable that is easier to access and can gauge the characteristics of the latent one is usually used for data analysis. This dissertation research proposes methods to analyze longitudinal data (1) that have a categorical outcome with missing observations or (2) that use complete or incomplete surrogate observations to analyze a categorical latent outcome. For (1), different missing mechanisms were considered for empirical studies using methods that include the EM algorithm, Monte Carlo EM, and a procedure that is not a data augmentation method. For (2), the hidden Markov model with the forward-backward procedure was applied for parameter estimation. This method was also extended to cover the computation of standard errors. The proposed methods are demonstrated with a schizophrenia example. The relevance to public health, the strengths and limitations, and possible future research are also discussed.
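A minimal sketch of the forward-backward recursion mentioned above, for a two-state latent health process observed through a surrogate variable; all transition, emission, and initial probabilities are illustrative values, not estimates from the dissertation.

```python
# Minimal sketch of the forward-backward recursion for a hidden Markov model
# with a surrogate (observed) variable gauging a latent health state.
# Transition/emission values are illustrative, not estimates from the study.
import numpy as np

A  = np.array([[0.9, 0.1],     # latent-state transition probabilities
               [0.2, 0.8]])
B  = np.array([[0.8, 0.2],     # P(surrogate observation | latent state)
               [0.3, 0.7]])
pi = np.array([0.6, 0.4])      # initial state distribution
obs = [0, 1, 1, 0, 1]          # surrogate observation sequence

T, S = len(obs), len(pi)
alpha = np.zeros((T, S))
beta = np.ones((T, S))
alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):                       # forward pass
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
for t in range(T - 2, -1, -1):              # backward pass
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)   # posterior P(state_t | all observations)
print(gamma)
```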
Abstract:
The 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors, or statins, can achieve significant reductions in plasma low-density lipoprotein (LDL)-cholesterol levels. Experimental and clinical evidence now shows that some statins interfere with formation of atherosclerotic lesions independent of their hypolipidemic properties. Vulnerable plaque rupture can result in thrombus formation and artery occlusion; this plaque deterioration is responsible for most acute coronary syndromes, including myocardial infarction (MI), unstable angina, and coronary death, as well as coronary heart disease-equivalent non-hemorrhagic stroke. Inhibition of HMG-CoA reductase has potential pleiotropic effects beyond lipid-lowering, as statins block mevalonic acid production, a precursor to cholesterol and numerous other metabolites. Statins' beneficial effects on clinical events may thus also involve nonlipid-related mechanisms that modify endothelial function, inflammatory responses, plaque stability, and thrombus formation. Aspirin, routinely prescribed to post-MI patients as adjunct therapy, may potentiate statins' beneficial effects, as aspirin does not compete metabolically with statins but acts similarly on atherosclerotic lesions. Common functions of both medications include inhibition of platelet activity and aggregation, reduction in atherosclerotic plaque macrophage cell count, and prevention of atherosclerotic vessel endothelial dysfunction. The Cholesterol and Recurrent Events (CARE) trial provides an ideal population in which to examine the combined effects of pravastatin and aspirin. Lipid levels, as intermediate outcomes, are examined by pravastatin and aspirin status, and differences between the two pravastatin groups are found. A modified Cox proportional-hazards model with aspirin as a time-dependent covariate was used to determine the effect of aspirin and pravastatin on the clinical cardiovascular composite endpoint of coronary heart disease death, recurrent MI, or stroke. Among those assigned to pravastatin, use of aspirin reduced the composite primary endpoint by 35%; this result was similar by gender, race, and diabetic status. Older patients demonstrated a nonsignificant 21% reduction in the primary outcome, whereas younger patients had a significant reduction of 43% in the composite primary outcome. Secondary outcomes examined include coronary artery bypass graft (38% reduction), nonsurgical bypass, peripheral vascular disease, and unstable angina. Pravastatin and aspirin in a post-MI population were found to be a beneficial combination that seems to work through lipid and nonlipid, anti-inflammatory mechanisms.
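A sketch of how a Cox model with a time-dependent aspirin covariate can be set up in the long (start/stop) data format, here using the lifelines library; the records are synthetic and the CARE analysis itself is not reproduced.

```python
# Sketch: Cox regression with aspirin as a time-dependent covariate, using the
# long (start/stop) format. Data are synthetic placeholders, not CARE data.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Each row is an interval of follow-up during which covariates are constant;
# aspirin switches from 0 to 1 when a subject starts taking it.
df = pd.DataFrame({
    "id":          [1, 1, 2, 3, 3, 4, 5],
    "start":       [0, 12, 0, 0, 6, 0, 0],
    "stop":        [12, 40, 36, 6, 30, 24, 18],
    "aspirin":     [0, 1, 1, 0, 1, 0, 1],
    "pravastatin": [1, 1, 0, 1, 1, 0, 1],
    "event":       [0, 1, 0, 0, 0, 1, 1],   # composite endpoint at interval end
})

ctv = CoxTimeVaryingFitter(penalizer=0.1)    # small penalty stabilises the toy fit
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()                          # hazard ratios for aspirin, pravastatin
```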
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one value per variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
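As a concrete illustration of turning a time series into a latent candidate feature, the sketch below computes a least-squares trend slope over a window and pairs it with a traditional point-in-time value; the window length and variable are illustrative assumptions, not the paper's specification.

```python
# Sketch: deriving a time-series analysis result (here, a least-squares trend
# slope over a fixed window) to use as a latent candidate feature alongside
# the usual one-value-per-variable data.
import numpy as np

def trend_slope(values, minutes_per_sample=1.0):
    """Least-squares slope of a time-series window (units per minute)."""
    t = np.arange(len(values)) * minutes_per_sample
    slope, _intercept = np.polyfit(t, values, deg=1)
    return slope

# e.g. a ~30-minute window of systolic blood pressure readings, one every 3 min
sbp_window = np.array([92, 90, 91, 88, 87, 85, 84, 82, 80, 79], dtype=float)
features = {
    "sbp_latest": sbp_window[-1],                                    # point-in-time value
    "sbp_trend":  trend_slope(sbp_window, minutes_per_sample=3.0),   # latent trend feature
}
print(features)
```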
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU" provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
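The preprocessing issues listed above (defining a reference time, imputing and reducing data to a predefined structure, normalizing) might look roughly like the following sketch; the grid size, variable, and normalization choice are assumptions for illustration only.

```python
# Sketch of the time-series preprocessing steps described above: anchoring each
# record to a reference time, reducing/imputing onto a fixed grid defined at
# design time, and normalizing the resulting values.
import numpy as np
import pandas as pd

def preprocess(times, values, reference_time, duration_min=60, resolution_min=5):
    """Align to `reference_time`, resample onto a fixed grid, and z-normalize."""
    minutes = (pd.to_datetime(times) - pd.to_datetime(reference_time)) / pd.Timedelta("1min")
    grid = np.arange(-duration_min, 0, resolution_min)    # fixed design-time grid
    resampled = np.interp(grid, minutes, values)          # impute/reduce onto the grid
    return (resampled - resampled.mean()) / (resampled.std() or 1.0)

hr_times  = ["2024-01-01 09:02", "2024-01-01 09:17", "2024-01-01 09:40", "2024-01-01 09:58"]
hr_values = [128, 131, 139, 152]
print(preprocess(hr_times, hr_values, reference_time="2024-01-01 10:00"))
```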
Abstract:
This paper assesses the impact of climate change on China's agricultural production at a cross-provincial level using the Ricardian approach, incorporating a multilevel model with farm-level group data. The farm-level group data include 13,379 farm households across 316 villages, distributed over 31 provinces. The empirical results show that, firstly, based on the marginal effects and elasticities of net crop revenue per hectare with respect to climate factors, the annual impact of temperature on net crop revenue per hectare was positive and the effect of increased precipitation was negative at the national level; secondly, the total impact of the simulated climate change scenarios on net crop revenue per hectare at the national level was an increase of between 79 USD and 207 USD per hectare for the 2050s, and an increase of between 140 USD and 355 USD per hectare for the 2080s. As a result, climate change may create a potential advantage for the development of Chinese agriculture, rather than a risk, especially for agriculture in the provinces of the Northeast, Northwest and North regions. However, increased precipitation can lead to a loss of net crop revenue per hectare, especially for the provinces of the Southwest, Northwest, North and Northeast regions.
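A simplified sketch of a Ricardian-style regression with quadratic climate terms and the marginal effect of temperature is shown below; the data are synthetic and the paper's multilevel (province/village/household) structure is omitted.

```python
# Sketch of a Ricardian-style regression: net crop revenue per hectare as a
# quadratic function of temperature and precipitation, with the marginal
# effect of temperature evaluated at the sample mean. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
temp    = rng.normal(14, 4, 500)      # annual mean temperature (deg C)
precip  = rng.normal(900, 250, 500)   # annual precipitation (mm)
revenue = 300 + 25 * temp - 0.6 * temp**2 - 0.05 * precip + rng.normal(0, 40, 500)

X = sm.add_constant(np.column_stack([temp, temp**2, precip, precip**2]))
fit = sm.OLS(revenue, X).fit()

b = fit.params
marginal_effect_T = b[1] + 2 * b[2] * temp.mean()   # d(revenue)/d(temperature)
print("Marginal effect of temperature (USD/ha per deg C):", marginal_effect_T)
```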
Abstract:
Secchi depth is a measure of water transparency. In the Baltic Sea region, Secchi depth maps are used to assess eutrophication and as input for habitat models. Due to their spatial and temporal coverage, satellite data would be the most suitable data source for such maps. But the Baltic Sea's optical properties are so different from those of the open ocean that globally calibrated standard models suffer from large errors. Regional predictive models that take the Baltic Sea's special optical properties into account are thus needed. This paper tests how accurately generalized linear models (GLMs) and generalized additive models (GAMs) with MODIS/Aqua and auxiliary data as inputs can predict Secchi depth at a regional scale. It uses cross-validation to test the prediction accuracy of hundreds of GAMs and GLMs with up to 5 input variables. A GAM with 3 input variables (chlorophyll a, remote sensing reflectance at 678 nm, and long-term mean salinity) made the most accurate predictions. Tested against field observations not used for model selection and calibration, the best model's mean absolute error (MAE) for daily predictions was 1.07 m (22%), more than 50% lower than for other publicly available Baltic Sea Secchi depth maps. The MAE for predicting monthly averages was 0.86 m (15%). Thus, the proposed model selection process was able to find a regional model with good prediction accuracy. It could also be useful for finding predictive models for environmental variables other than Secchi depth, using data from other satellite sensors, and for other regions where non-standard remote sensing models are needed for prediction and mapping. Annual and monthly mean Secchi depth maps for 2003-2012 are provided with this paper as Supplementary materials.
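The exhaustive, cross-validated model selection described here can be sketched as follows, with a plain linear model standing in for the GAMs/GLMs and illustrative column names; only the selection logic, not the paper's actual predictors or data, is reproduced.

```python
# Sketch of the model-selection idea: cross-validate every candidate predictor
# subset (up to 5 variables) and keep the one with the lowest mean absolute
# error. A linear model stands in for the GAMs/GLMs of the paper.
import itertools
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

candidates = ["chlor_a", "rrs_678", "salinity", "sst", "depth"]   # illustrative names
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(300, len(candidates))), columns=candidates)
secchi = 5 - 1.5 * df["chlor_a"] + 0.8 * df["salinity"] + rng.normal(0, 0.5, 300)

best = None
for k in range(1, 6):
    for subset in itertools.combinations(candidates, k):
        mae = -cross_val_score(LinearRegression(), df[list(subset)], secchi,
                               scoring="neg_mean_absolute_error", cv=5).mean()
        if best is None or mae < best[0]:
            best = (mae, subset)
print("Best subset:", best[1], "CV MAE:", round(best[0], 3))
```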
Abstract:
Stable isotope analysis of two species (or groups of species) of planktonic foraminifers, Globigerinoides ruber (or G. obliquus and G. obliquus extremus) and Globigerina bulloides (or G. falconensis and G. obesa), from ODP Hole 653A and Site 654 in the Tyrrhenian basin records the Pliocene-Pleistocene glacial history of the Northern Hemisphere. The overall increase in mean δ18O values through the interval 4.6-0.08 Ma is 1.7 per mil for G. bulloides and 1.5 per mil for G. ruber. The time interval 3.1-2.5 Ma corresponds to an important phase of 18O enrichment for planktonic foraminifers. In this interval, glacial δ18O values of both species G. bulloides and G. ruber increase by about 1 per mil, this increase being more progressive for G. ruber than for G. bulloides. The increase of interglacial δ18O values is higher for G. bulloides (1.5 per mil) than for the G. ruber group (1 per mil). These data suggest a more pronounced seasonal stratification of the water masses during interglacial phases. Large positive δ18O fluctuations of increasing magnitude are also recorded at 2.25 and 2.15 Ma by G. bulloides and appear to be diachronous with those of Site 606 in the Atlantic Ocean. Other events of increasing δ18O values are recorded between 1.55 and 1.3 Ma, at 0.9 Ma, 0.8 Ma, and near 0.34 Ma. In the early Pliocene the δ18O variability recorded by the planktonic species G. bulloides was higher in the Mediterranean than in the Atlantic at the same latitude. This suggests that important cyclic variations in the water budget of the Mediterranean have occurred since that time. Step increases in the δ18O variability are synchronous with those of the open ocean at 0.9 and 0.34 Ma. The higher variability as well as the higher amplitude of the peaks of 18O enrichment may be partly accounted for by an increase of dryness over the Mediterranean area. In particular, the high amplitude δ18O fluctuations recorded between 3.1 and 2.1 Ma are correlated with the onset of a marked seasonal contrast and a summer dryness, revealed by pollen analyses. Strong fluctuations towards δ13C values higher than modern ones are recorded by the G. ruber group species before 1.7 Ma and suggest a high production of phytoplankton. When such episodes of high primary production are correlated with episodes of decreasing 13C content of G. bulloides, they are interpreted as the consequence of a higher stratification of the upper water masses, itself resulting from a marked seasonality. Such episodes occur between 4.6 and 4.05 Ma, 3.9 and 3.6 Ma, and 3.25 and 2.66 Ma. The interval 2.66-1.65 Ma corresponds to a weakening of the stratification of the upper water layers. This may be related to episodes of cooling and increasing dryness induced by the Northern Hemisphere Glaciations. The Pleistocene may have been a less productive period. The transition from highly productive to less productive surface waters also coincides with a new step increase in dryness and cooling, between 1.5 and 1.3 Ma. The comparison of the 13C records of G. ruber and G. bulloides in fact suggests that a high vertical convection became a dominant feature after 2.6 Ma. Increases in the nutrient input and the stratification of the upper water masses may be suspected, however, during short episodes near 0.86 Ma (isotopic stage 25), 0.57-0.59 Ma (isotopic stage 16), 0.49 Ma (isotopic stage 13), 0.4-0.43 Ma (isotopic stage 11), and 0.22 and 0.26 Ma (part of isotopic stage 7 and transition 7/8).
In fact, changes in the CO2 balance within the different water masses of the Tyrrhenian basin, as well as in the local primary production, did not follow the general patterns of the open ocean.
Abstract:
Alkaline phosphatase activity and the hydrochemical structure of waters in the Barents and Norwegian seas were investigated. In a sea with a seasonal bioproduction cycle, alkaline phosphatase activity is also seasonal, rising with the trophic level of the waters. At the end of the hydrological and biological winter, activity is practically zero. Alkaline phosphatase activity is especially important in summer, when plankton has consumed the winter supply of phosphate in the euphotic layer and nutrient limitation of primary production begins. During the summer production and destruction cycle, the apparent time for recycling of phosphorus by phosphatase in suspended matter in the euphotic layer of the Barents Sea and Norwegian Sea averages from 7 to 30 hours.
Abstract:
Five frequently used models were chosen and evaluated to calculate the viscosity of mixed oil. In total, twenty mixed oil samples were prepared with different ratios of light oil to crude oil from different wells of the same oil field. The viscosities of the mixtures were measured at the same shear rate of 10 s^-1 using a rotational viscometer at temperatures ranging from 30°C to 120°C. After comparing all of the experimental data with the corresponding model values, the best of the five models for this oil field was determined. Using the experimental data, a model with better accuracy than the existing models was developed to calculate the viscosity of mixed oils. Another model was derived to predict the viscosity of mixed oils at different temperatures and different values of the mixing ratio of light to heavy oil.
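As an illustration of how candidate mixing-rule models can be scored against measured blend viscosities, the sketch below evaluates the standard Arrhenius (log-linear) rule; this is a literature model, not the new correlation developed in the paper, and the sample values are placeholders.

```python
# Sketch: evaluating a candidate mixing rule for the viscosity of a light/heavy
# oil blend against measured values. The Arrhenius (log-linear) rule shown here
# is a standard literature model; the sample viscosities are placeholders.
import numpy as np

def arrhenius_mix(mu_light, mu_heavy, x_light):
    """ln(mu_mix) = x_l * ln(mu_light) + (1 - x_l) * ln(mu_heavy)."""
    return np.exp(x_light * np.log(mu_light) + (1.0 - x_light) * np.log(mu_heavy))

mu_light, mu_heavy = 12.0, 3800.0             # mPa*s at the test temperature
x_light  = np.array([0.1, 0.2, 0.3, 0.4])     # mass fraction of light oil
measured = np.array([2100.0, 1150.0, 640.0, 360.0])   # placeholder measurements

predicted = arrhenius_mix(mu_light, mu_heavy, x_light)
ard = np.abs(predicted - measured) / measured * 100.0  # absolute relative deviation
print("Predicted:", predicted.round(1))
print("ARD (%):", ard.round(1))
```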
Abstract:
One of the key factors behind the growth in global trade in recent decades is an increase in intermediate inputs as a result of the development of vertical production networks (Feenstra, 1998). It is widely recognized that the formation of production networks is due to the expansion of multinational enterprises' (MNEs) activities. MNEs have been differentiated into two types according to their production structure: horizontal and vertical foreign direct investment (FDI). In this paper, we extend the model presented by Zhang and Markusen (1999) to include horizontal and vertical FDI in a model with traded intermediates, using numerical general equilibrium analysis. The simulation results show that horizontal MNEs are more likely to exist when countries are similar in size and in relative factor endowments. Vertical MNEs are more likely to exist when countries differ in relative factor endowments and trade costs are positive. From the results of the simulation, lower trade costs for final goods and differences in factor intensity are conditions for attracting vertical MNEs.
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication links can transfer data at high speed. The concept of distributed systems emerged to describe systems whose different parts are executed on several nodes that interact with each other via a communication network. Java's popularity, facilities, and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification under the Java Community Process (JSR-302) is being developed. Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem. Currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability of the timing behavior and the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message size optimizations.
Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out, with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block), and the network usage (actual consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
Study of rapid ionisation for simulation of soft X-ray lasers with the 2D hydro-radiative code ARWEN
Abstract:
We present our fast ionisation routine used to study transient soft X-ray lasers with ARWEN, a two-dimensional hydrodynamic code incorporating adaptive mesh refinement (AMR) and radiative transport. We compute global rates between ion stages assuming an effective temperature between singly-excited levels of each ion. A two-step method is used to obtain in a straightforward manner the variation of ion populations over long hydrodynamic time steps. We compare our model with existing theoretical results, both stationary and transient, finding that the discrepancies are moderate except at large densities. We simulate an existing molybdenum Ni-like transient soft X-ray laser with ARWEN. Use of the fast ionisation routine leads to a larger increase in temperature and a larger gain zone than when LTE data tables are used.
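A minimal sketch of advancing ion-stage populations over a long hydrodynamic time step with a matrix of global rates is given below; the rates are illustrative, and a single implicit (backward Euler) step stands in for the paper's two-step method.

```python
# Sketch of advancing ion-stage populations over a long hydrodynamic time step
# with a rate matrix of global ionisation/recombination rates between adjacent
# stages. The rates are illustrative numbers, not ARWEN's.
import numpy as np

ionisation    = np.array([5.0, 2.0, 0.5])   # stage k -> k+1 rates (1/ns)
recombination = np.array([0.1, 0.4, 1.0])   # stage k+1 -> k rates (1/ns)
n_stages = 4

# Build the rate matrix A such that dn/dt = A @ n conserves total population.
A = np.zeros((n_stages, n_stages))
for k in range(n_stages - 1):
    A[k, k]         -= ionisation[k]
    A[k + 1, k]     += ionisation[k]
    A[k + 1, k + 1] -= recombination[k]
    A[k, k + 1]     += recombination[k]

def advance(n, dt):
    """One implicit (backward Euler) step: solve (I - dt*A) n_new = n_old."""
    return np.linalg.solve(np.eye(n_stages) - dt * A, n)

n = np.array([1.0, 0.0, 0.0, 0.0])   # start fully in the lowest stage
n = advance(n, dt=2.0)               # one long hydro step (2 ns)
print(n, n.sum())                    # total population stays normalised
```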
Abstract:
To model real-time tritium concentrations in air originating from an ITER-like reactor as the source, the European Centre for Medium-Range Weather Forecasts (ECMWF) numerical model was coupled with the Lagrangian atmospheric dispersion model FLEXPART. This ECMWF/FLEXPART tool was analyzed under normal operating conditions in the Western Mediterranean Basin over 45 days in summer 2010. Comparison with NORMTRI plumes over the Western Mediterranean Basin showed that the real-time results overestimate the corresponding climatological-sequence tritium concentrations in air at several distances from the reactor. For this purpose, two cloud development patterns were established. The first followed a cyclonic circulation over the Mediterranean Sea, and the second was based on the cloud delivered over the interior of the Iberian Peninsula by another stabilized circulation corresponding to a high-pressure system. One of the important remaining activities identified was the qualification of the tool. The aim of this paper is to present the ECMWF/FLEXPART products confronted with tritium concentration in air data. For this purpose, a database for developing and validating the ECMWF/FLEXPART tritium assessments has been selected from a NORMTRI run. Similarities and differences, as well as underestimation and overestimation relative to NORMTRI, will allow refinement of some features of ECMWF/FLEXPART.
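A small sketch of the kind of model-to-data confrontation described, comparing an ECMWF/FLEXPART series against a reference (NORMTRI or measured) series with simple bias statistics; all concentration values are placeholders.

```python
# Sketch: comparing modeled tritium concentrations in air against a reference
# series with simple bias statistics. Values are placeholders, not results.
import numpy as np

flexpart  = np.array([3.2, 5.1, 7.8, 4.4, 2.9])   # modeled concentration (Bq/m^3)
reference = np.array([2.5, 4.0, 5.9, 3.8, 2.6])   # NORMTRI / observed (Bq/m^3)

mean_bias       = np.mean(flexpart - reference)
fractional_bias = 2.0 * np.mean(flexpart - reference) / np.mean(flexpart + reference)
mean_ratio      = np.mean(flexpart / reference)

print(f"mean bias = {mean_bias:.2f} Bq/m^3, fractional bias = {fractional_bias:.2f}, "
      f"mean ratio = {mean_ratio:.2f}")   # ratio > 1 indicates overestimation
```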