930 results for General Independent Value model
Abstract:
The business value of information technology (IT) is increasingly cocreated by multiple parties, opening opportunities for new research initiatives. Previous studies on IT value cocreation mainly focus on analyzing sources of cocreated IT value, but inadequately accommodate the influence of competition relationships on IT value cocreation activities. To fill this gap, this research-in-progress paper suggests an agent-based modeling and simulation approach to investigating the potential influence of the dynamic interplay between cooperation and competition relationships in IT value cocreation settings. In particular, the research proposes a high-level conceptual framework to position general IT value cocreation processes. A relational network view is offered, aiming to decompose and systematize several typical cooperation and competition scenarios in practical IT value cocreation settings. The application of the simulation approach to generating analytical insights and building theory is illustrated.
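For readers unfamiliar with the approach, the following is a minimal sketch of what an agent-based simulation of interacting cooperation and competition ties could look like; the network, payoff rules, and all parameters are hypothetical illustrations, not the framework proposed in the paper.

```python
# Minimal agent-based sketch of IT value cocreation under mixed
# cooperation/competition ties. All rules and parameters here are
# illustrative assumptions, not the paper's model.
import random

random.seed(42)

N_AGENTS = 20
STEPS = 100
COOP_GAIN = 0.005  # assumed per-step gain from a cooperative tie
COMP_LOSS = 0.01   # assumed per-step erosion from a competitive tie

# Randomly assign each pair of agents a relationship type.
relations = {
    (i, j): random.choice(["coop", "comp", "none"])
    for i in range(N_AGENTS) for j in range(i + 1, N_AGENTS)
}

value = [1.0] * N_AGENTS  # each agent's current share of cocreated value

for _ in range(STEPS):
    delta = [0.0] * N_AGENTS
    for (i, j), rel in relations.items():
        if rel == "coop":
            # cooperation cocreates value for both parties
            joint = COOP_GAIN * min(value[i], value[j])
            delta[i] += joint
            delta[j] += joint
        elif rel == "comp":
            # competition erodes the weaker party's value
            weaker = i if value[i] < value[j] else j
            delta[weaker] -= COMP_LOSS * value[weaker]
    value = [max(0.0, v + d) for v, d in zip(value, delta)]

print(f"total cocreated value after {STEPS} steps: {sum(value):.2f}")
```

Rewiring the relation types over time would be one way to study the dynamic interplay between cooperation and competition that the paper targets.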
Abstract:
One of the problems to be solved in attaining the full potential of hematopoietic stem cell (HSC) applications is the limited availability of the cells. Growing HSCs in a bioreactor offers an alternative solution to this problem. It also offers the advantage of eliminating both the labour-intensive process and the contamination risk involved in the periodic nutrient replenishment of traditional T-flask stem cell cultivation. In spite of this, the optimization of HSC cultivation in a bioreactor has barely been explored. This manuscript discusses the development of a mathematical model describing the dynamics of nutrient distribution and cell concentration in an ex vivo HSC cultivation in a microchannel perfusion bioreactor. The model was further used to optimize the cultivation by proposing three alternative feeding strategies to prevent nutrient limitation in the bioreactor. Evaluation of these strategies (a periodic step increase in the inlet oxygen concentration, a periodic step increase in the media inflow, and feedback control of the media inflow) shows that they can successfully improve the cell yield of the bioreactor. In general, the developed model is useful for the design and optimization of bioreactor operation.
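As an illustration of the kind of model described (not the manuscript's actual equations), here is a hedged sketch of a lumped perfusion mass balance with Monod-limited growth; the functional forms and every parameter value below are assumptions.

```python
# Hedged sketch of a lumped perfusion-bioreactor model: one oxygen
# mass balance plus one cell balance with Monod-limited growth. The
# functional forms and all parameter values are illustrative
# assumptions, not those identified in the manuscript.
from scipy.integrate import solve_ivp

D = 0.1        # perfusion (dilution) rate, 1/h (assumed)
C_IN = 0.2     # inlet oxygen concentration, mol/m^3 (assumed)
MU_MAX = 0.05  # maximum specific growth rate, 1/h (assumed)
K_S = 0.02     # Monod half-saturation constant, mol/m^3 (assumed)
Q_O2 = 1e-9    # specific oxygen uptake, mol/(cell h) (assumed)
X_MAX = 1e7    # confluence limit, cells/mL (assumed)

def rhs(t, y):
    c, x = y                                   # oxygen, cell density
    monod = c / (K_S + c)
    dc = D * (C_IN - c) - Q_O2 * x * monod     # perfusion supply - uptake
    dx = MU_MAX * monod * x * (1 - x / X_MAX)  # growth with confluence cap
    return [dc, dx]

sol = solve_ivp(rhs, (0.0, 240.0), [C_IN, 1e5])
print(f"final oxygen: {sol.y[0, -1]:.3f} mol/m^3, "
      f"final cells: {sol.y[1, -1]:.2e} cells/mL")
```

The feeding strategies in the abstract would correspond to making C_IN or D time-dependent, or closing a feedback loop from the simulated oxygen level to D.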
Abstract:
Non-parametric difference tests such as the triangle and duo-trio tests are traditionally used to establish differences or similarities between products. However, they supply the researcher with only partial answers, and further testing is often required to establish the nature, size and direction of differences. This paper looks at the advantages of the difference from control (DFC) test (also known as the degree of difference test) and discusses appropriate applications of the test. The scope and principle of the test, panel composition and analysis of results are presented with the aid of suitable examples. Two of the major uses of the DFC test are in quality control and shelf-life testing. The role the DFC test plays in these areas, and the use of other tests to complement it, is discussed. Controls or standards are important in both areas, and the use of standard products, mental and written standards, and blind controls is highlighted. The DFC test has applications to products where the duo-trio and triangle tests cannot be used because of the normal heterogeneity of the product. While the DFC test is a simple difference test, it can be structured to give researchers more valuable data and greater scope to make informed decisions about their products.
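To make the analysis concrete, here is a hedged sketch of how DFC ratings might be compared against a blind control; the rating scale, the simulated data and the choice of a paired t-test are assumptions for illustration, not the paper's prescribed analysis.

```python
# Hypothetical sketch of a difference-from-control analysis: panelists
# rate each sample against the control on a 0-9 scale, and the test
# sample's mean rating is compared with that of a blind control
# (which captures the baseline "difference" reported even for
# identical products).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_panelists = 20

# Simulated ratings (assumed data, for illustration only).
blind_control = rng.normal(loc=1.0, scale=0.8, size=n_panelists).clip(0, 9)
test_sample = rng.normal(loc=2.5, scale=0.8, size=n_panelists).clip(0, 9)

# Paired comparison: the same panelists rated both samples.
t_stat, p_value = stats.ttest_rel(test_sample, blind_control)
print(f"mean(test) - mean(blind control) = "
      f"{test_sample.mean() - blind_control.mean():.2f}, p = {p_value:.4f}")
```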
Abstract:
The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G_f) in order to obtain a size-independent value (G_F). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G_f measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G_F is very nearly the same and independent of the size of the specimen. In this paper, we provide further evidence in support of this important conclusion using extensive independent test results for three different concrete mixes ranging in compressive strength from 57 to 122 MPa.
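For context, one of the two correction methods alluded to above, the boundary effect (local fracture energy) model of Hu and Wittmann, can be sketched as follows; the notation is the commonly used one and is stated here as background rather than reproduced from the paper.

```latex
% Boundary-effect (local fracture energy) correction, sketched with
% common notation: W is the specimen depth, a the notch depth, and
% a_l the transition-zone length near the back face.
\[
G_f(a, W) =
\begin{cases}
G_F \left( 1 - \dfrac{a_l}{2\,(W - a)} \right), & W - a > a_l,\\[2ex]
G_F \, \dfrac{W - a}{2\,a_l}, & W - a \le a_l.
\end{cases}
\]
% Fitting measured G_f values at several notch depths yields the
% size-independent G_F and the transition length a_l.
```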
Abstract:
Catches of skipjack tuna supporting major fisheries in parts of the western, central and eastern Pacific Ocean have increased in recent years; it is therefore important to examine the dynamics of the fishery to determine man's effect on the abundance of the stocks. A general linear hypothesis model was developed to standardize fishing effort to a single vessel size and gear type. Standardized effort was then used to compute an index of abundance that accounts for seasonal variability in the fishing area. The indices of abundance were highly variable from year to year in both the northern and southern areas of the fishery, but indicated a generally higher abundance in the south. Data from 438 fish tagged and recovered in the eastern Pacific Ocean were used to compute growth curves. A least-squares technique was used to estimate the parameters of the von Bertalanffy growth function. Two estimates of the parameters were made by analyzing the same data in different ways. For the first set of estimates, K = 0.819 on an annual instantaneous basis and L∞ = 729 mm; for the second, K = 0.431 and L∞ = 881 mm. These compared well with estimates derived using the Chapman-Richards growth function, which includes the von Bertalanffy function as a special case. It was concluded that the latter function provided an adequate empirical fit to the skipjack data, since the more complicated function did not significantly improve the fit. Tagging data from three cruises involving 8852 releases and 1777 returns were used to compute mortality rates during the time the fish were in the fishery. Two models were used in the analyses. The best estimates of the catchability coefficient (q) in the north and south were 8.4 × 10^-4 and 5.0 × 10^-5, respectively. The other loss rate (X), which included losses due to emigration, natural mortality and mortality due to carrying a tag, was 0.14 on an annual instantaneous basis for both areas. To detect the possible effect of fishing on abundance and total yield, the relations between abundance and effort and between total catch and effort were examined. It was found that at the levels of intensity observed in the fishery, fishing does not appear to have had any measurable effect on the stocks. It was therefore concluded that the total catch could probably be increased by substantially increasing total effort beyond the present level, and that the fluctuations in abundance are fishery-independent. The estimates of growth, mortality and fishing effort were used to compute yield-per-recruitment isopleths for skipjack in both the northern and southern areas. For a size at first entry of about 425 mm, the yield per recruitment was calculated at 3 pounds in the north and 1.5 pounds in the south. In both areas it would be possible to increase the yield per recruitment by increasing fishing effort. It was not possible to assess the potential production of the skipjack stocks fished in the eastern Pacific, except to note that the fishery had not affected their abundance and that they were certainly under-exploited. It was concluded that the northern and southern stocks could support increased harvests, especially the latter.
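As an illustration of the growth-curve fitting described above, here is a hedged sketch of a least-squares fit of the von Bertalanffy growth function; the synthetic age-length data below are illustrative, not the 438-fish tagging data set.

```python
# Least-squares fit of the von Bertalanffy growth function,
# L(t) = L_inf * (1 - exp(-K * (t - t0))), on synthetic data roughly
# consistent with the first parameter set reported above
# (K = 0.819/yr, L_inf = 729 mm); the data are assumed, not the
# study's.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, l_inf, k, t0):
    return l_inf * (1.0 - np.exp(-k * (t - t0)))

rng = np.random.default_rng(1)
ages = rng.uniform(0.5, 5.0, size=100)                     # years
lengths = von_bertalanffy(ages, 729.0, 0.819, 0.0) \
    + rng.normal(0.0, 15.0, size=100)                      # mm, with noise

params, _ = curve_fit(von_bertalanffy, ages, lengths, p0=[800.0, 0.5, 0.0])
l_inf, k, t0 = params
print(f"L_inf = {l_inf:.0f} mm, K = {k:.3f}/yr, t0 = {t0:.2f} yr")
```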
Abstract:
122 p.
Abstract:
We report on high-frequency (300-700 GHz) ferromagnetic resonance (HF-FMR) measurements on cobalt superparamagnetic particles with strong uniaxial effective anisotropy. We derive the dynamical susceptibility of the system on the basis of an independent-grain model using a rectangular approach. Numerical simulations give typical line shapes depending on the anisotropy, the gyromagnetic ratio, and the damping constant. HF-FMR experiments have been performed on two systems of ultrafine cobalt particles of different sizes, with mean numbers of atoms per particle of 150 ± 20 and 310 ± 20. In both systems, the magnetic anisotropy is found to be enhanced compared to the bulk value, and increases as the particle size decreases, in accordance with previous determinations from magnetization measurements. Although no size effect was observed on the gyromagnetic ratio, the transverse relaxation time is two orders of magnitude smaller than the bulk value, indicating strong damping effects, possibly originating from surface spin disorder.
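As background for the line-shape discussion, a standard Kittel-type resonance condition for a uniaxial grain is sketched below; this is textbook context with assumed notation, not the paper's full susceptibility derivation.

```latex
% Standard Kittel-type resonance condition for a grain with uniaxial
% effective anisotropy K_eff, static field H along the easy axis
% (background sketch only; notation assumed):
\[
\omega_{\mathrm{res}} = \gamma \,(H + H_K),
\qquad
H_K = \frac{2 K_{\mathrm{eff}}}{M_s},
\]
% so, at fixed frequency, an enhanced anisotropy field H_K shifts the
% resonance to lower applied fields.
```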
Abstract:
The ratios R_k1 of k-fold to single ionization of the target atom with simultaneous one-electron capture by the projectile have been measured for 15-480 keV/u (ν_p = 0.8-4.4 a.u.) collisions of C^q+ and O^q+ (q = 1-4) with Ar, using time-of-flight techniques that allowed the simultaneous identification of the final charge state of both the low-velocity recoil ion and the high-velocity projectile for each collision event. The present ratios are similar to those for He^+ and He^2+ ion impact. The energy dependence of R_k1 shows a maximum at a certain energy, E_max, which approximately conforms to the q^(1/2)-dependence scaling. For a fixed projectile state, the ratios R_k1 also vary strongly with the outgoing reaction channel. The general behavior of the measured data can be qualitatively analyzed by a simple impact-parameter independent-electron model.
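For orientation, the independent-electron model mentioned above is conventionally built on a binomial ansatz; a sketch with assumed notation (p(b): single-electron ionization probability at impact parameter b; N: number of active target electrons):

```latex
% Binomial independent-electron ansatz in the impact parameter
% picture (standard form, stated as context; notation assumed):
\[
P_k(b) = \binom{N}{k}\, p(b)^k \,\bigl(1 - p(b)\bigr)^{N-k},
\qquad
\sigma_k = 2\pi \int_0^\infty P_k(b)\, b \,\mathrm{d}b,
\]
% so the measured ratios correspond to R_k1 = sigma_k / sigma_1
% within the selected capture channel.
```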
Abstract:
The kinesin-like factor 1B (KIF1B) gene plays an important role in apoptosis and in the transformation and progression of malignant cells. Genetic variations in KIF1B may contribute to the risk of epithelial ovarian cancer (EOC). In this study of 1,324 EOC patients and 1,386 cancer-free female controls, we investigated associations between two potentially functional single nucleotide polymorphisms in KIF1B and EOC risk by conditional logistic regression analysis. A general linear model (GLM) was used to evaluate the correlation between the number of variant alleles and KIF1B mRNA expression levels. We found that the rs17401966 variant AG/GG genotypes were significantly associated with a decreased risk of EOC (adjusted odds ratio (OR) = 0.81, 95% confidence interval (CI) = 0.68-0.97) compared with the AA genotype, but no associations were observed for rs1002076. Women who carried both the rs17401966 AG/GG and rs1002076 AG/AA genotypes of KIF1B had a 0.82-fold decreased risk (adjusted 95% CI = 0.69-0.97) compared with others. Additionally, there was no evidence of interactions between the above-mentioned variants. Further genotype-phenotype correlation analysis indicated that the number of rs17401966 variant G alleles was significantly associated with KIF1B mRNA expression levels (P for GLM = 0.003 and 0.001 in all and in Chinese subjects, respectively), with GG carriers having the lowest level of KIF1B mRNA expression. Taken together, the rs17401966 polymorphism likely regulates KIF1B mRNA expression and thus may be associated with EOC risk in Eastern Chinese women. Larger, independent studies are warranted to validate our findings.
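As an illustration of the genotype-phenotype trend analysis (regressing expression on the count of variant alleles), here is a hedged sketch on simulated data; the effect size, noise level and sample size are assumptions, not the study's measurements.

```python
# Hypothetical sketch of an allele-dosage (additive) trend test:
# regress mRNA expression on the count of variant G alleles (0 = AA,
# 1 = AG, 2 = GG). Data are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2024)
n = 300
allele_count = rng.integers(0, 3, size=n)
# Assumed additive effect: expression drops with each G allele.
expression = 10.0 - 0.8 * allele_count + rng.normal(0.0, 1.5, size=n)

X = sm.add_constant(allele_count.astype(float))
model = sm.OLS(expression, X).fit()
print(f"slope per G allele = {model.params[1]:.2f}, "
      f"p = {model.pvalues[1]:.2g}")
```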
Abstract:
Environmental contamination and climate change constitute two of the most serious problems affecting soil ecosystems in agricultural fields. Agriculture is nowadays a highly optimized process that relies strongly on the application of multiple pesticides to reduce losses and increase yield. Although constituting, per se, a serious problem for soil biota, pesticide mixtures can assume even higher relevance in a context of unfavourable environmental conditions. Surprisingly, the frameworks currently established for environmental risk assessment still do not consider environmental stressors, such as temperature, soil moisture or UV radiation, as factors liable to influence the susceptibility of organisms to pesticides or pesticide mixtures, which is raising increasing apprehension about their adequacy to estimate the risks posed by these compounds to the environment. Despite the greater attention received in the last few years, the influence of environmental stressors on the behaviour and toxicity of chemical mixtures remains poorly understood. Aiming to contribute to this discussion, the main goal of the present thesis was to evaluate the single and joint effects of natural stressors and pesticides on the terrestrial isopod Porcellionides pruinosus. The first approach consisted of evaluating the effects of several abiotic factors (temperature, soil moisture and UV radiation) on the performance of P. pruinosus using several endpoints: survival, feeding parameters, locomotor activity and avoidance behaviour. Results showed that these stressors can indeed affect P. pruinosus under environmentally relevant conditions, suggesting the relevance of considering them in ecotoxicological assays. Next, a multiple-biomarker approach was used to gain closer insight into the damage pathways of UV radiation, and a broad spectrum of processes was shown to be involved (i.e. oxidative stress, neurotoxicity, energy metabolism). Furthermore, UV effects were shown to vary with the exposure medium and growth stage. A similar biomarker approach was employed to assess the single and joint effects of the pesticides chlorpyrifos and mancozeb on P. pruinosus. Energy-related biomarkers proved to be the most discriminating parameters, since age classes seemed to respond differently to contamination stress and to have different associated metabolic costs. Finally, the influence of temperature and soil moisture on the toxicity of pesticide mixtures was evaluated using survival and feeding parameters as endpoints. Pesticide-induced mortality was found to be oppositely affected by temperature, in both single and mixture treatments: whereas the acute toxicity of chlorpyrifos increased at higher temperatures, the toxicity of mancozeb was more prominent at lower temperatures. By contrast, soil moisture showed no effect on the pesticide-induced mortality of isopods. Contrary to survival, both temperature and soil moisture interacted with pesticides to influence the isopods' feeding parameters. In brief, the findings reported in this thesis demonstrate why neglecting natural stressors, or multiple stressors in general, is not a good solution for risk assessment frameworks.
Abstract:
Doctoral thesis, Literature and Culture Studies (Theory of Literature), Universidade de Lisboa, Faculdade de Letras, 2014
Abstract:
The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions leading to prohibitive costs to firms, investors and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Lynn M. LoPucki Bankruptcy Research Database in the U.S. was used to evaluate the relative performance of the three models. The choice of cut-off point and sampling procedure was found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower misclassification costs in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However, the higher predictive power of the Bayesian model when the ratio of the cost of Type I errors to the cost of Type II errors is high is relatively consistent across all sampling methods. Such an advantage may make the Bayesian model more attractive in the current economic environment. This study extends recent research comparing the performance of bankruptcy prediction models by identifying the conditions under which a model performs better. It also addresses the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing failure risk.
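As an illustration of the empirical cut-off idea, here is a hedged sketch of choosing a cut-off point on a training sample by minimizing expected misclassification cost; the simulated scores, labels and the 10:1 cost ratio are assumptions, not the study's estimates.

```python
# Hypothetical sketch: pick the cut-off that minimizes expected
# misclassification cost on a training sample. All data and the cost
# ratio below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(7)

# Simulated predicted bankruptcy probabilities: failing firms
# (label 1) tend to score higher than survivors (label 0).
scores = np.concatenate([rng.beta(2, 8, 900), rng.beta(6, 3, 100)])
labels = np.concatenate([np.zeros(900), np.ones(100)])

COST_TYPE1 = 10.0  # cost of classifying a failing firm as healthy
COST_TYPE2 = 1.0   # cost of classifying a healthy firm as failing

def expected_cost(cutoff):
    predicted_fail = scores >= cutoff
    type1 = np.sum((labels == 1) & ~predicted_fail)  # missed failures
    type2 = np.sum((labels == 0) & predicted_fail)   # false alarms
    return (COST_TYPE1 * type1 + COST_TYPE2 * type2) / len(labels)

cutoffs = np.linspace(0.01, 0.99, 99)
best = cutoffs[np.argmin([expected_cost(c) for c in cutoffs])]
print(f"empirical cut-off: {best:.2f}, cost: {expected_cost(best):.3f}")
```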
Abstract:
This paper tests the predictions of the Barro-Gordon model using US data on inflation and unemployment. To that end, it constructs a general game-theoretical model with asymmetric preferences that nests the Barro-Gordon model and a version of Cukierman's model as special cases. Likelihood-ratio tests indicate that the restriction imposed by the Barro-Gordon model is rejected by the data, but the one imposed by the version of Cukierman's model is not. Reduced-form estimates are consistent with the view that the Federal Reserve weights positive unemployment deviations from the expected natural rate more heavily than negative ones.
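A hedged sketch of the kind of asymmetric loss that can nest the quadratic Barro-Gordon case (notation assumed, not necessarily the paper's exact specification):

```latex
% Linex-type central bank loss in unemployment deviations (sketch with
% assumed notation; u^n is the natural rate, k <= 1, and phi, gamma
% are preference parameters):
\[
L = \frac{\pi^2}{2}
  + \phi \,\frac{e^{\gamma (u - k u^n)} - \gamma (u - k u^n) - 1}{\gamma^2}.
\]
% As gamma -> 0 the second term tends to (phi/2)(u - k u^n)^2,
% recovering the quadratic Barro-Gordon loss; gamma != 0 weights
% positive and negative unemployment deviations asymmetrically, as in
% Cukierman-type preferences.
```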
Abstract:
With the help of an illustrative computable general equilibrium (CGE) model of the Moroccan economy, we test for the significance of simulation results in the case where the exact macro closure is not known with certainty. This is done by computing lower and upper bounds for the simulation results, given a priori probabilities attached to three possible closures (Classical, Johansen, Keynesian). Our conclusion is that, when there is uncertainty about closures, several endogenous changes lack significance, which in turn limits the use of the model for policy prescriptions.
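A minimal sketch of the bound computation described, with assumed notation (closure-specific results r_C, r_J, r_K and prior probabilities p_C, p_J, p_K); the significance criterion stated in the comment is one plausible reading, not necessarily the paper's exact rule.

```latex
% Sketch with assumed notation: r_i is a simulated endogenous change
% under closure i, and p_i its a priori probability.
\[
\mathbb{E}[r] = p_C\, r_C + p_J\, r_J + p_K\, r_K,
\qquad
\underline{r} = \min_i r_i,
\quad
\overline{r} = \max_i r_i.
\]
% On this reading, a simulated change is deemed significant only if
% the closure-induced interval [min r_i, max r_i] does not straddle
% zero.
```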
Abstract:
We study the workings of factor analysis of high-dimensional data using artificial series generated from a large, multi-sector dynamic stochastic general equilibrium (DSGE) model. The objective is to use the DSGE model as a laboratory that allows us to shed some light on the practical benefits and limitations of using factor analysis techniques on economic data. We explain in what sense the artificial data can be thought of as having a factor structure, study the theoretical and finite-sample properties of the principal components estimates of the factor space, investigate the substantive reason(s) for the good performance of diffusion index forecasts, and assess the quality of the factor analysis of highly disaggregated data. In all our exercises, we explain the precise relationship between the factors and the basic macroeconomic shocks postulated by the model.
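As an illustration of principal-components factor extraction on a panel with a known factor structure, here is a hedged sketch; the two-factor data generating process below is an assumption for illustration, not the DSGE model used in the paper.

```python
# Principal-components factor extraction on a simulated panel with a
# known two-factor structure (illustrative DGP, not the paper's DSGE
# laboratory).
import numpy as np

rng = np.random.default_rng(3)
T, N, K = 200, 100, 2                # periods, series, true factors

factors = rng.normal(size=(T, K))    # latent common shocks
loadings = rng.normal(size=(N, K))
panel = factors @ loadings.T + 0.5 * rng.normal(size=(T, N))

# Principal components of the standardized panel.
X = (panel - panel.mean(0)) / panel.std(0)
_, s, vt = np.linalg.svd(X, full_matrices=False)
pc_factors = X @ vt[:K].T            # estimated factor space

explained = (s[:K] ** 2).sum() / (s ** 2).sum()
print(f"share of variance explained by first {K} PCs: {explained:.2f}")
```

The estimated pc_factors span (approximately) the same space as the true factors, which is the sense in which diffusion-index methods recover common shocks rather than the shocks themselves.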