978 results for "input parameter value recommendation"
Abstract:
Objective Arterial lactate, base excess (BE), lactate clearance, and Sequential Organ Failure Assessment (SOFA) score have been shown to correlate with outcome in severely injured patients. The goal of the present study was to separately assess their predictive value in patients suffering from traumatic brain injury (TBI) as opposed to patients suffering from injuries not related to the brain. Materials and methods A total of 724 adult trauma patients with an Injury Severity Score (ISS) ≥ 16 were grouped into patients without TBI (non-TBI), patients with isolated TBI (isolated TBI), and patients with a combination of TBI and non-TBI injuries (combined injuries). The predictive value of the above parameters was then analyzed using both uni- and multivariate analyses. Results The mean age of the patients was 39 years (77 % males), with a mean ISS of 32 (range 16–75). Mortality ranged from 14 % (non-TBI) to 24 % (combined injuries). Admission and serial lactate/BE values were higher in non-survivors of all groups (all p < 0.01), except in patients with isolated TBI. Admission SOFA scores were highest in non-survivors of all groups (p = 0.023); patients who subsequently developed sepsis also showed elevated SOFA scores (p < 0.01), except those with isolated TBI. In this group, the SOFA score was the only parameter that showed significant differences between survivors and non-survivors. Receiver operating characteristic (ROC) analysis revealed lactate to be the best overall predictor of increased mortality and subsequent septic complications, irrespective of the leading injury. Conclusion Lactate showed the best performance in predicting sepsis or death in all trauma patients except those with isolated TBI, with the differences being greatest in patients with substantial bleeding. Following isolated TBI, the SOFA score was the only parameter that could differentiate survivors from non-survivors on admission, although it was not an independent predictor of death in multivariate analysis.
Abstract:
Many ecosystem models have been developed to study the ocean's biogeochemical properties, but most of these models use simple formulations to describe light penetration and spectral quality. Here, an optical model is coupled with a previously published ecosystem model that explicitly represents two phytoplankton (picoplankton and diatoms) and two zooplankton functional groups, as well as multiple nutrients and detritus. Surface ocean color fields and subsurface light fields are calculated by coupling the ecosystem model with an optical model that relates biogeochemical standing stocks to inherent optical properties (absorption, scattering); this provides input to a commercially available radiative transfer model (Ecolight). We apply this bio-optical model to the equatorial Pacific upwelling region, and find it capable of reproducing many measured optical properties and key biogeochemical processes in this region. Our model results suggest that non-algal particles contribute most of the total scattering and attenuation (> 50% at 660 nm) but have a much smaller contribution to particulate absorption (< 20% at 440 nm), while picoplankton dominate the total phytoplankton absorption (> 95% at 440 nm). These results are consistent with the field observations. In order to achieve such good agreement between data and model results, however, key model parameters, for which no field data are available, have to be constrained. Sensitivity analysis of the model results to optical parameters reveals a significant role played by colored dissolved organic matter through its influence on the quantity and quality of the ambient light. Coupling explicit optics to an ecosystem model provides advantages in generating: (1) a more accurate subsurface light field, which is important for light-sensitive biogeochemical processes such as photosynthesis and photo-oxidation, (2) additional constraints on model parameters that help to reduce uncertainties in ecosystem model simulations, and (3) model output that is comparable to basic remotely sensed properties. In addition, the coupling of biogeochemical models and optics paves the way for future assimilation of ocean color and in-situ measured optical properties into the models.
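The coupling step described in this abstract amounts to translating biogeochemical standing stocks into inherent optical properties before they are handed to the radiative transfer code. A minimal sketch of such a translation is given below; the function name iops_at_wavelength, the coefficient values, and the spectral slope are illustrative assumptions rather than quantities taken from the study, and the Ecolight call itself is omitted.

# Illustrative sketch only: maps model standing stocks to inherent optical
# properties (IOPs) in the spirit of the coupling described above. The
# specific absorption/scattering coefficients are hypothetical placeholders,
# not values from the paper, and no radiative transfer code is invoked here.
import numpy as np

def iops_at_wavelength(chl_pico, chl_diatom, nap, cdom_440, wavelength_nm):
    """Return total absorption a (1/m) and scattering b (1/m) at one wavelength."""
    # Hypothetical chlorophyll-specific absorption (m^2 per mg Chl).
    a_star_pico = 0.04 if wavelength_nm == 440 else 0.01
    a_star_diatom = 0.02 if wavelength_nm == 440 else 0.008
    # Non-algal particles (NAP): weak absorber, strong scatterer (assumed).
    a_star_nap, b_star_nap = 0.005, 0.5          # m^2 per g NAP
    # CDOM absorption with an assumed exponential spectral slope.
    a_cdom = cdom_440 * np.exp(-0.014 * (wavelength_nm - 440.0))
    a_water, b_water = 0.0065, 0.0045            # rough pure-water values

    a_total = (a_water + a_cdom
               + a_star_pico * chl_pico + a_star_diatom * chl_diatom
               + a_star_nap * nap)
    b_total = (b_water + b_star_nap * nap
               + 0.1 * chl_pico + 0.15 * chl_diatom)  # assumed phytoplankton scattering
    return a_total, b_total

# Example call with invented surface concentrations.
a440, b440 = iops_at_wavelength(chl_pico=0.15, chl_diatom=0.05,
                                nap=0.02, cdom_440=0.02, wavelength_nm=440)
print(f"a(440) = {a440:.3f} 1/m, b(440) = {b440:.3f} 1/m")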
Abstract:
Incident rainfall is a major source of nutrient input to a forest ecosystem and the consequent throughfall and stemflow contribute to nutrient cycling. These rain-based fluxes were measured over 12 mo in two forest types in Korup National Park, Cameroon, one with low (LEM) and one with high (HEM) ectomycorrhizal abundances of trees. Throughfall was 96.6 and 92.4% of the incident annual rainfall (5370 mm) in LEM and HEM forests respectively; stemflow was correspondingly 1.5 and 2.2%. Architectural analysis showed that ln(funneling ratio) declined linearly with increasing ln(basal area) of trees. Mean annual inputs of N, P, K, Mg and Ca in incident rainfall were 1.50, 1.07, 7.77, 5.25 and 9.27 kg ha⁻¹, and total rain-based inputs to the forest floor were 5.0, 3.2, 123.4, 14.4 and 37.7 kg ha⁻¹ respectively. The value for K is high for tropical forests and that for N is low. Nitrogen showed a significantly lower loading of throughfall and stemflow in HEM than in LEM forest, this being associated in the HEM forest with a greater abundance of epiphytic bryophytes, which may absorb more N. Incident rainfall provided c. 35% of the gross input of P to the forest floor (i.e., rain-based plus small litter inputs), a surprisingly high contribution given the sandy P-poor soils. At the start of the wet season leaching of K from the canopy was particularly high. Calcium in the rain was also highest at this time, most likely due to washing off of dry-deposited Harmattan dusts. It is proposed that throughfall has an important 'priming' function in the rapid decomposition of litter and mineralization of P at the start of the wet season. The contribution of P input from the atmosphere appears to be significant when compared to the rates of P mineralization from leaf litter.
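The funneling ratio mentioned above is commonly defined in the throughfall/stemflow literature as stemflow volume normalized by the rain falling on the trunk basal area; under that assumed definition, the reported architectural result corresponds to a log-linear decline:

\[
  F = \frac{V_{\mathrm{stemflow}}}{P\,B},
  \qquad
  \ln F = \alpha + \beta \ln B, \quad \beta < 0,
\]

where V_stemflow is the stemflow volume per tree (L), P is the incident rainfall depth (mm, i.e. L m⁻²), and B is the trunk basal area (m²).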
Abstract:
Given a short-arc optical observation with estimated angle-rates, the admissible region is a compact region in the range / range-rate space defined such that all likely and relevant orbits are contained within it. An alternative boundary value problem formulation has recently been proposed where range / range hypotheses are generated with two angle measurements from two tracks as input. In this paper, angle-rate information is reintroduced as a means to eliminate hypotheses by bounding their constants of motion before a more computationally costly Lambert solver or differential correction algorithm is run.
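The screening idea summarized here can be sketched as follows: for a given range hypothesis, the observed angles and angle-rates together with a bounded range-rate interval imply bounds on the constants of motion, and hypotheses whose implied orbits fall outside an admissible energy or semi-major-axis band can be discarded before the Lambert solver or differential corrector is run. The code below is a schematic of that filter, not the paper's algorithm; the function names, the range-rate interval, and the semi-major-axis band are illustrative assumptions.

# Schematic screening of range hypotheses using constants of motion.
# Vectors are geocentric (km, km/s); names, bounds, and the grid over
# range-rate are illustrative assumptions.
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2

def specific_energy(r_site, v_site, u_hat, u_hat_dot, rho, rho_dot):
    """Specific orbital energy for a given range and range-rate hypothesis."""
    r = r_site + rho * u_hat                        # object position
    v = v_site + rho_dot * u_hat + rho * u_hat_dot  # object velocity
    return 0.5 * np.dot(v, v) - MU_EARTH / np.linalg.norm(r)

def hypothesis_survives(r_site, v_site, u_hat, u_hat_dot, rho,
                        rho_dot_bounds=(-3.0, 3.0), sma_band=(6578.0, 45000.0)):
    """Keep the range hypothesis if some range-rate in the bounded interval
    yields a bound orbit whose semi-major axis lies in the band of interest."""
    for rho_dot in np.linspace(*rho_dot_bounds, 61):
        eps = specific_energy(r_site, v_site, u_hat, u_hat_dot, rho, rho_dot)
        if eps < 0.0:                      # bound orbit
            sma = -MU_EARTH / (2.0 * eps)  # semi-major axis from vis-viva
            if sma_band[0] <= sma <= sma_band[1]:
                return True
    return False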
Abstract:
A measurement of the parity-violating decay asymmetry parameter, α_b, and the helicity amplitudes for the decay Λ_b^0 → J/ψ(μ⁺μ⁻) Λ^0(pπ⁻) is reported. The analysis is based on 1400 Λ_b^0 and Λ̄_b^0 baryons selected in 4.6 fb⁻¹ of proton–proton collision data with a center-of-mass energy of 7 TeV recorded by the ATLAS experiment at the LHC. By combining the Λ_b^0 and Λ̄_b^0 samples under the assumption of CP conservation, the value of α_b is measured to be 0.30 ± 0.16 (stat) ± 0.06 (syst). This measurement provides a test of theoretical models based on perturbative QCD or heavy-quark effective theory.
Abstract:
The IUPAC-IUGS joint Task Group “Isotopes in Geosciences” recommends a value of (49.61 ± 0.16) Ga for the half-life of ⁸⁷Rb, corresponding to a decay constant λ₈₇ = (1.3972 ± 0.0045) × 10⁻¹¹ a⁻¹.
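For reference, the recommended decay constant follows directly from the recommended half-life through λ = ln 2 / T₁/₂:

\[
  \lambda_{87} = \frac{\ln 2}{T_{1/2}}
  = \frac{0.693147}{49.61 \times 10^{9}\ \mathrm{a}}
  \approx 1.3972 \times 10^{-11}\ \mathrm{a}^{-1},
\]

with the quoted relative uncertainties agreeing as expected (0.16/49.61 ≈ 0.0045/1.3972 ≈ 0.32 %).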
Abstract:
A problem frequently encountered in Data Envelopment Analysis (DEA) is that the total number of inputs and outputs included tends to be too large relative to the sample size. One way to counter this problem is to combine several inputs (or outputs) into (meaningful) aggregate variables, thereby reducing the dimension of the input (or output) vector. A direct effect of input aggregation is to reduce the number of constraints. This, in turn, alters the optimal value of the objective function. In this paper, we show how a statistical test proposed by Banker (1993) may be applied to test the validity of a specific way of aggregating several inputs. An empirical application using data from Indian manufacturing for the year 2002-03 is included as an example of the proposed test.
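To make the mechanics concrete, the sketch below computes input-oriented CCR efficiency scores with and without a simple aggregation of two inputs; the random data, the summation used for aggregation, and the final comparison are assumptions for illustration only. Banker's (1993) test itself compares the resulting inefficiency estimates under distributional assumptions (e.g. exponential or half-normal) and is not reproduced here.

# Illustrative effect of input aggregation on DEA scores
# (input-oriented CCR, envelopment form), using scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o. X: (m inputs x n DMUs), Y: (s x n)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    # Input rows:  sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output rows: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas >= 0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out], bounds=bounds, method="highs")
    return res.x[0]

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(4, 30))            # four inputs, 30 DMUs (invented)
Y = rng.uniform(1, 10, size=(2, 30))            # two outputs (invented)
X_agg = np.vstack([X[:2].sum(axis=0), X[2:]])   # aggregate inputs 1 and 2 (assumed rule)

theta_full = np.array([ccr_efficiency(X, Y, o) for o in range(X.shape[1])])
theta_agg = np.array([ccr_efficiency(X_agg, Y, o) for o in range(X.shape[1])])
print("mean efficiency, disaggregated:", theta_full.mean())
print("mean efficiency, aggregated:   ", theta_agg.mean())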
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one value per variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
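As a purely illustrative example of the kind of explicit mathematical or statistical operation described above, a least-squares slope over a fixed window can serve as a trend-type latent feature derived from a raw time series element; the window length, sampling resolution, and variable name below are hypothetical and are not the choices made in the manuscripts.

# Illustrative only: derive a 'trend' latent feature from a raw vital-sign
# time series via a least-squares slope over a fixed window.
import numpy as np

def trend_feature(values, minutes_per_sample=1.0):
    """Least-squares slope (units per minute) over the supplied window."""
    t = np.arange(len(values)) * minutes_per_sample
    slope, _intercept = np.polyfit(t, values, deg=1)
    return slope

# Hypothetical last ten minutes of heart-rate observations.
heart_rate_window = np.array([128, 127, 125, 126, 123, 121, 120, 118, 117, 114],
                             dtype=float)
print("HR trend:", trend_feature(heart_rate_window), "beats/min per minute")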
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
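The preprocessing issues listed above (a common reference time, imputation onto a predefined structure, and normalization across a variable family) can be illustrated with a short pandas sketch; the reference point, one-minute grid, imputation rule, and z-scoring are assumptions for illustration rather than the steps specified in the manuscript.

# Illustrative preprocessing only: align a series to a reference time,
# impute onto a fixed 1-minute grid, and normalize against the variable family.
import pandas as pd

raw = pd.Series([112.0, 108.0, 101.0, 96.0],
                index=pd.to_datetime(["2024-01-01 10:00", "2024-01-01 10:03",
                                      "2024-01-01 10:04", "2024-01-01 10:07"]))
reference_time = pd.Timestamp("2024-01-01 10:07")   # e.g. time of the event of interest

# 1-minute grid ending at the reference time, with time-based interpolation.
grid = pd.date_range(end=reference_time, periods=8, freq="1min")
aligned = raw.reindex(raw.index.union(grid)).interpolate("time").reindex(grid)

# Normalize against the variable family (here simply the signal's own history).
family_mean, family_std = raw.mean(), raw.std()
normalized = (aligned - family_mean) / family_std
print(normalized)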
Abstract:
Determination of the neutron-irradiation parameter J is one of the major sources of uncertainty in ⁴⁰Ar/³⁹Ar dating. The associated error of the individual J-value for a sample of unknown age depends on the accuracy of the age of the geological standards, the fast-neutron fluence distribution in the reactor, and the distances between standards and samples during irradiation. While it is generally assumed that rotating irradiation evens out radial neutron fluence gradients, we observed axial and radial variations of the J-values in sample irradiations in the rotating channels of two reactors. To quantify them, we included three-dimensionally distributed metallic fast- (Ni) and thermal- (Co) neutron fluence monitors in three irradiations and geological age standards in three more. Two irradiations were carried out under Cd-shielding in the FRG1 reactor in Geesthacht, Germany, and four without Cd-shielding in the LVR-15 reactor in Rez, Czech Republic. The fast-neutron activation reaction ⁵⁸Ni(n,p)⁵⁸Co and γ-spectrometry of the 811 keV peak associated with the subsequent decay of ⁵⁸Co to ⁵⁸Fe allow calculation of the fast-neutron fluence. The fast-neutron fluences at known positions in the irradiation container correlate with the J-values determined by mass-spectrometric ⁴⁰Ar/³⁹Ar measurements of the geological age standards. Radial neutron fluence gradients are up to 1.8 %/cm in FRG1 and up to 2.2 %/cm in LVR-15; the corresponding axial gradients are up to 5.9 and 2.1 %/cm. We conclude that sample rotation might not always suffice to meet the needs of high-precision dating and that gradient monitoring can be crucial.
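For context, J enters the standard ⁴⁰Ar/³⁹Ar age equations through a co-irradiated standard of known age t_std, which is why fluence differences between standard and sample positions propagate directly into the calculated ages:

\[
  J = \frac{e^{\lambda t_{\mathrm{std}}} - 1}{\left({}^{40}\mathrm{Ar}^{*}/{}^{39}\mathrm{Ar}_{K}\right)_{\mathrm{std}}},
  \qquad
  t = \frac{1}{\lambda}\,\ln\!\left[1 + J\left({}^{40}\mathrm{Ar}^{*}/{}^{39}\mathrm{Ar}_{K}\right)_{\mathrm{sample}}\right].
\]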
Abstract:
Direct Simulation Monte Carlo (DSMC) is a powerful numerical method for studying rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, exploring the parameter space of such simulations can be time-consuming, since 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified the extent to which modifying several parameters influences the 3D flow and gas temperature fields, and we have attempted to establish the reliability of inferences about the initial conditions from in situ and remote sensing measurements. A large number of DSMC runs have been completed with varying input parameters. In this work, we present the simulation results and draw conclusions about the sensitivity of the solutions to certain inputs. Among the water-outgassing cases, the surface production rate distribution is found to be the variable with the greatest influence on the flow field.
Abstract:
The Asia-Pacific Region has enjoyed remarkable economic growth in the last three decades. This rapid economic growth can be partially attributed to the global spread of production networks, which has brought about major changes in spatial interdependence among economies within the region. By applying an input-output based spatial decomposition technique to the Asian International Input-Output Tables for 1985 and 2000, this paper not only analyzes the intrinsic mechanism of spatial economic interdependence, but also shows how the induced value added, employment, and CO2 emissions are distributed within the international production networks.
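The machinery behind such a decomposition is the standard Leontief model: with a technical-coefficient matrix A, final demand f, and value-added and emission coefficients v and e, the output, value added, and CO2 induced by each element of final demand follow from the Leontief inverse. The toy calculation below uses invented numbers (not the Asian International Input-Output Tables) purely to illustrate the bookkeeping.

# Toy Leontief-inverse calculation with hypothetical numbers, tracing value
# added and CO2 emissions induced by final demand through an IO system.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],      # technical coefficients (inputs per unit output)
              [0.15, 0.10, 0.20],
              [0.05, 0.25, 0.10]])
f = np.array([100.0, 50.0, 80.0])      # final demand by sector
v = np.array([0.40, 0.30, 0.50])       # value added per unit output
e = np.array([0.80, 1.50, 0.20])       # CO2 per unit output

L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse (I - A)^{-1}
x = L @ f                              # gross output required to meet f

# (i, j) entries: value added / emissions generated in sector i
# that are induced by final demand for sector j.
va_induced = np.diag(v) @ L @ np.diag(f)
co2_induced = np.diag(e) @ L @ np.diag(f)

print("gross output:", x.round(1))
print("value added induced:\n", va_induced.round(1))
print("CO2 induced:\n", co2_induced.round(1))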
Abstract:
International input-output tables are among the most useful tools for economic analysis. Since these tables provide detailed information about international production networks, they have recently attracted considerable attention in research on spatial economics, global value chains, and issues relating to trade in value-added. The Institute of Developing Economies at the Japan External Trade Organization (IDE-JETRO) has more than 40 years of experience in the construction and analysis of international input-output tables. This paper explains the development of IDE-JETRO’s multi-regional input-output projects including the construction of the Asian International Input-Output table and the Transnational Interregional Input-Output table between China and Japan. To help users understand the features of the tables, this paper also gives examples of their application.
Abstract:
Global value chains are supported not only directly by domestic regions that export goods and services to the world market, but also indirectly by other domestic regions that provide parts, components, and intermediate services to final exporting regions. In order to better understand the nature of a country’s position and degree of participation in global value chains, we need to more fully examine the role of individual domestic regions. Understanding the domestic components of global supply chains is especially important for large developing countries like China and India, where there may be large variations in economic scale and development between domestic regions. This paper proposes a new framework for measuring domestic linkages to global value chains. This framework measures domestic linkages by endogenously embedding a country’s domestic interregional input-output (IO) table in an international IO model. Using this framework, we can more clearly describe how global production is fragmented and extended through linkages across a country’s domestic regions. This framework will also enable us to estimate how value added is created and distributed in both domestic and international segments of global value chains. To examine the validity and usefulness of this new approach, numerical results are presented and discussed based on the 2007 Chinese interregional IO table, China customs statistics at the provincial level, and World Input-Output Tables (WIOTs).
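Schematically (and not in the paper's exact notation), embedding a country's interregional table means replacing that country's single block row and column in the international coefficient matrix with region-level blocks, for example

\[
  A =
  \begin{pmatrix}
    A^{rr'} & A^{rs} \\
    A^{sr'} & A^{ss}
  \end{pmatrix},
\]

where A^{rr'} collects coefficients among the country's domestic regions r and r', A^{rs} and A^{sr'} link those regions to foreign countries s, and A^{ss} covers transactions among foreign countries; induced output and value added then follow from the usual Leontief inverse (I - A)^{-1}.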
Abstract:
This study focuses on the technological intensity of China's exports. It first introduces the method of decomposing gross exports by using the Asian international input–output tables. The empirical results indicate that the technological intensity of Chinese exports has been significantly overestimated due to its high dependency on import content, especially in high-technology exports, an area highly dominated by the electronic and electrical equipment sector. Furthermore, a significant portion of value added embodied in China's high-technology exports comes from services and high-technology manufacturers in neighboring economies, such as Japan, South Korea, and Taiwan.
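One standard way to quantify the import dependence highlighted here, in the spirit of the vertical-specialization literature rather than necessarily the exact measure used in the paper, is the import content of exports

\[
  VS = \mathbf{u}\, A^{m} \left(I - A^{d}\right)^{-1} E,
\]

where A^{d} and A^{m} are the domestic and imported input-coefficient matrices, E is the vector of gross exports, and u is a summation vector; the domestic value-added content of exports is the corresponding complement computed with domestic value-added coefficients.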
Abstract:
This paper integrates two lines of research into a unified conceptual framework: trade in global value chains and embodied emissions. This allows both value added and emissions to be systematically traced at the country, sector, and bilateral levels through various production network routes. By combining value-added and emissions accounting in a consistent way, the potential environmental cost (amount of emissions per unit of value added) along global value chains can be estimated. Using this unified accounting method, we trace CO2 emissions in the global production and trade network among 41 economies in 35 sectors from 1995 to 2009, basing our calculations on the World Input–Output Database, and show how they help us to better understand the impact of cross-country production sharing on the environment.
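A schematic rendering of the combined accounting (not necessarily the paper's exact route-level decomposition) is: with value-added coefficients v̂, emission coefficients ê, Leontief inverse (I - A)^{-1}, and diagonalized final demand Ŷ,

\[
  V = \hat{v}\,(I - A)^{-1}\,\hat{Y},
  \qquad
  C = \hat{e}\,(I - A)^{-1}\,\hat{Y},
\]

so that the element-wise ratio C_{ij}/V_{ij} gives the emissions generated per unit of value added created in country-sector i to satisfy final demand j, i.e. the potential environmental cost along that route.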