Abstract:
This article designs what it calls a Credit-Risk Balance Sheet (the risk being that of default by customers), a tool which, in principle, can contribute to revealing, controlling and managing the bad-debt risk arising from a company's commercial credit, whose amount can represent a significant proportion of both its current and total assets. To construct it, we start from the duality observed in any credit transaction of this nature, whose basic identity can be summed up as Credit = Risk. 'Credit' is granted by a company to its customer and can be ranked by quality (we suggest the credit scoring system), while 'risk' can either be assumed (interiorised) by the company itself or transferred to third parties (exteriorised). What allows us to speak with confidence of a real Credit-Risk Balance Sheet, with its methodological robustness, is that the dual vision of the credit transaction is not, as we demonstrate, merely a classificatory duality (a double risk-credit classification of reality) but rather a true causal relationship, that is, a risk-credit causal duality. Once said Credit-Risk Balance Sheet (which bears a certain structural similarity to the classic net asset balance sheet) has been built and its methodological coherence demonstrated, its properties, static and dynamic, are studied. Analysis of the temporal evolution of the Credit-Risk Balance Sheet and of its applications will be the object of subsequent works.
Abstract:
This article has an immediate predecessor, upon which it is based and with which readers must necessarily be familiar: Towards a Theory of the Credit-Risk Balance Sheet (Vallverdú, Somoza and Moya, 2006). There, the Balance Sheet is conceptualised on the basis of the duality of a credit-based transaction; its theoretical foundations are laid out, providing evidence of a causal credit-risk duality, that is, a true causal relationship; and its characteristics, properties and static and dynamic behaviour are analysed. This article, a logical continuation of the previous one, studies the evolution of the structure of the Credit-Risk Balance Sheet as a consequence of a business's dynamics in the credit area. Given the Credit-Risk Balance Sheet of a company at any given time, it attempts to estimate, by means of sequential analysis, its structural evolution, showing its usefulness in the management and control of credit and risk. To do so, it draws, with the necessary adaptations, on the by-now classic works of Palomba and Cutolo. The establishment of the corresponding transformation matrices allows one to move from an initial balance-sheet structure to a final, future one, to understand trends in the credit-risk situation, and to make monitoring and control possible, basic elements in providing support for risk management.
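The transformation-matrix idea can be sketched numerically. The structure vector, class names and matrix values below are purely illustrative assumptions, not taken from the paper; the sketch only shows how repeated application of a transformation matrix projects an initial balance-sheet structure forward, in the spirit of the sequential analysis described above.

```python
def project(structure, T, periods):
    """Apply the transformation matrix T to a balance-sheet structure
    vector for the given number of periods (sequential analysis)."""
    for _ in range(periods):
        structure = [sum(structure[i] * T[i][j] for i in range(len(T)))
                     for j in range(len(T[0]))]
    return structure

# Hypothetical three-class credit structure (shares of commercial
# credit by quality class); values are illustrative only.
initial = [0.6, 0.3, 0.1]

# Hypothetical transformation matrix: row i says how the share in
# class i is redistributed over one period (rows sum to 1).
T = [[0.85, 0.10, 0.05],
     [0.10, 0.80, 0.10],
     [0.00, 0.15, 0.85]]

final = project(initial, T, 3)
print([round(x, 3) for x in final])  # projected structure, 3 periods on
```

Because each row of the (stochastic) matrix sums to one, the projected shares remain a valid structure summing to one, which is what makes the evolution interpretable as a changing composition of the same balance sheet.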
Abstract:
The general theory of nonlinear relaxation times is developed for the case of Gaussian colored noise. General expressions are obtained and applied to the study of the characteristic decay time of unstable states in different situations, including white and colored noise, with emphasis on the distributed initial conditions. Universal effects of the coupling between colored noise and random initial conditions are predicted.
Abstract:
A mathematical model that describes the behavior of low-resolution Fresnel lenses encoded in any low-resolution device (e.g., a spatial light modulator) is developed. The effects of low-resolution codification, such as the appearance of new secondary lenses, are studied for a general case. General expressions for the phase of these lenses are developed, showing that each lens behaves as if it were encoded through all pixels of the low-resolution device. Simple expressions for the light distribution in the focal plane and its dependence on the encoded focal length are developed and commented on in detail. For a given codification device an optimum focal length is found for best lens performance. An optimization method for codification of a single lens with a short focal length is proposed.
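The encoding situation analysed above can be sketched with the standard paraxial quadratic lens phase sampled on a pixel grid. This is the textbook thin-lens phase, not the paper's own model; the device size, pixel pitch, wavelength and focal length below are illustrative assumptions.

```python
import math

def fresnel_phase(x, y, wavelength, focal_length):
    """Paraxial thin-lens quadratic phase, wrapped to [0, 2*pi)."""
    phi = -math.pi * (x**2 + y**2) / (wavelength * focal_length)
    return phi % (2 * math.pi)

def encoded_lens(n_pixels, pixel_pitch, wavelength, focal_length):
    """Sample the lens phase at the centre of each pixel of a
    low-resolution device (illustrative parameters)."""
    half = n_pixels // 2
    return [[fresnel_phase((i - half) * pixel_pitch,
                           (j - half) * pixel_pitch,
                           wavelength, focal_length)
             for j in range(n_pixels)]
            for i in range(n_pixels)]

# e.g. a 64x64 device with 32-um pixels encoding f = 20 cm at 633 nm
lens = encoded_lens(64, 32e-6, 633e-9, 0.2)
```

When the phase varies by more than pi between neighbouring pixels the sampling is aliased, which is the mechanism behind the secondary lenses the abstract mentions; shortening the focal length increases that phase gradient.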
Abstract:
A mathematical model describing the behavior of low-resolution Fresnel encoded lenses (LRFELs) encoded in any low-resolution device (e.g., a spatial light modulator) has recently been developed. From this model, an LRFEL with a short focal length was optimized by imposing maximum light intensity on the optical axis. With this model, analytical expressions for the light-amplitude distribution, the diffraction efficiency, and the frequency response of the optimized LRFELs are derived.
Abstract:
We demonstrate that the self-similarity of some scale-free networks with respect to a simple degree-thresholding renormalization scheme finds a natural interpretation in the assumption that network nodes exist in hidden metric spaces. Clustering, i.e., cycles of length three, plays a crucial role in this framework as a topological reflection of the triangle inequality in the hidden geometry. We prove that a class of hidden variable models with underlying metric spaces are able to accurately reproduce the self-similarity properties that we measured in the real networks. Our findings indicate that hidden geometries underlying these real networks are a plausible explanation for their observed topologies and, in particular, for their self-similarity with respect to the degree-based renormalization.
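The degree-thresholding renormalization scheme referred to above can be sketched in a few lines: keep only the nodes whose degree exceeds a threshold, together with the links among them. The tiny graph below is an illustrative assumption, not data from the paper.

```python
def degree_threshold(adj, k_T):
    """Degree-thresholding renormalization (minimal sketch): keep the
    nodes whose degree in the original graph exceeds k_T, and the
    links among the kept nodes."""
    keep = {v for v, nbrs in adj.items() if len(nbrs) > k_T}
    return {v: {u for u in adj[v] if u in keep} for v in keep}

# Tiny illustrative graph as symmetric adjacency sets.
g = {
    'a': {'b', 'c', 'd', 'e'},
    'b': {'a', 'c'},
    'c': {'a', 'b', 'd'},
    'd': {'a', 'c'},
    'e': {'a'},
}
core = degree_threshold(g, 1)  # drop the degree-1 periphery
print(sorted(core))            # ['a', 'b', 'c', 'd']
```

Self-similarity in the paper's sense means that topological properties such as the clustering spectrum of the subgraph returned here look like those of the original network, across a range of thresholds.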
Abstract:
In this contribution we show that a suitably defined nonequilibrium entropy of an N-body isolated system is not a constant of the motion, in general, and that its variation is bounded, the bounds being determined by the thermodynamic entropy, i.e., the equilibrium entropy. We define the nonequilibrium entropy as a convex functional of the set of n-particle reduced distribution functions (n ≤ N), generalizing the Gibbs fine-grained entropy formula. Additionally, as a consequence of our microscopic analysis we find that this nonequilibrium entropy behaves as a free entropic oscillator. In the approach to the equilibrium regime, we find relaxation equations of the Fokker-Planck type, particularly for the one-particle distribution function.
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
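The definition of the p value given above, the probability under the null of a result at least as extreme as the one observed, can be made concrete with a small Monte Carlo sketch. The coin-flip scenario and all numbers are illustrative assumptions, not examples from the article.

```python
import random

random.seed(0)

def p_value(observed_heads, n_flips, n_sims=20_000):
    """Monte Carlo p value for a fair-coin null hypothesis H0: p = 0.5.
    Counts the fraction of simulated experiments whose outcome is at
    least as far from n/2 as the observed one (two-sided)."""
    obs_dev = abs(observed_heads - n_flips / 2)
    extreme = 0
    for _ in range(n_sims):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if abs(heads - n_flips / 2) >= obs_dev:
            extreme += 1
    return extreme / n_sims

# 60 heads in 100 flips: evidence against a fair coin?
p = p_value(60, 100)
print(round(p, 3))
```

Note what the number is: the probability of data this extreme assuming H0 is true, not the probability that H0 is true, which is exactly the misconception the abstract warns against. A threshold such as 0.05 turns this into a Neyman-Pearson-style accept/reject decision.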
Abstract:
Application of semi-distributed hydrological models to large, heterogeneous watersheds poses several problems. On one hand, the spatial and temporal variability in catchment features should be adequately represented in the model parameterization, while maintaining the model complexity at an acceptable level to take advantage of state-of-the-art calibration techniques. On the other hand, model complexity enhances uncertainty in adjusted model parameter values, therefore increasing uncertainty in the water routing across the watershed. This is critical for water quality applications, where not only streamflow but also a reliable estimation of the surface versus subsurface contributions to the runoff is needed. In this study, we show how a regularized inversion procedure combined with a multiobjective function calibration strategy successfully solves the parameterization of a complex application of a water quality-oriented hydrological model. The final values of several optimized parameters showed significant and consistent differences across geological and landscape features. Although the number of optimized parameters was significantly increased by the spatial and temporal discretization of adjustable parameters, the uncertainty in water routing results remained at reasonable values. In addition, a stepwise numerical analysis showed that the effects on calibration performance due to inclusion of different data types in the objective function could be inextricably linked. Thus caution should be taken when adding or removing data from an aggregated objective function.
Abstract:
OBJECTIVES: Coarctation of the aorta is one of the most common congenital heart defects. Its diagnosis may be difficult in the presence of a patent ductus arteriosus, of other complex defects or of a poor echocardiographic window. We sought to demonstrate that the carotid-subclavian artery index (CSA index) and the isthmus-descending aorta ratio (I/D ratio), two recently described echocardiographic indexes, are effective in detection of isolated and complex aortic coarctations in children younger and older than 3 months of age. The CSA index is the ratio of the distal aortic arch diameter to the distance between the left carotid artery and the left subclavian artery; it is highly suggestive of a coarctation when it is <1.5. The I/D ratio, defined as the ratio of the diameter of the isthmus to the diameter of the descending aorta, suggests an aortic coarctation when it is less than 0.64. METHODS: This is a retrospective cohort study in a tertiary care children's hospital. We reviewed all echocardiograms in children aged 0-18 years with a diagnosis of coarctation seen at the authors' institution between 1996 and 2006. An age- and sex-matched control group without coarctation was constituted. Offline echocardiographic measurements of the aortic arch were performed in order to calculate the CSA index and I/D ratio. RESULTS: Sixty-eight patients were included in the coarctation group, 24 in the control group. Patients with coarctation had a significantly lower CSA index (0.84+/-0.39 vs 2.65+/-0.82, p<0.0001) and I/D ratio (0.58+/-0.18 vs 0.98+/-0.19, p<0.0001) than patients in the control group. Associated cardiac defects and age of the child did not significantly alter the CSA index or the I/D ratio. CONCLUSIONS: A CSA index less than 1.5 is highly suggestive of coarctation independent of age and of the presence of other cardiac defects. The I/D ratio alone is less specific than the CSA index alone at any age and for any associated cardiac lesion. The association of both indexes improves sensitivity and permits diagnosis of coarctation in all patients based solely on a bedside echocardiographic measurement.
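The two indexes and their published cut-offs reduce to simple ratios, which can be sketched as follows. The measurement values are hypothetical, and this is an illustration of the arithmetic only, not a clinical tool.

```python
def csa_index(distal_arch_diam, carotid_subclavian_dist):
    """CSA index: distal aortic arch diameter divided by the distance
    between the left carotid and left subclavian arteries."""
    return distal_arch_diam / carotid_subclavian_dist

def id_ratio(isthmus_diam, descending_aorta_diam):
    """I/D ratio: isthmus diameter over descending aorta diameter."""
    return isthmus_diam / descending_aorta_diam

def suggests_coarctation(csa, i_d):
    """Apply the cut-offs reported above (CSA < 1.5 or I/D < 0.64);
    combining both indexes is what improves sensitivity."""
    return csa < 1.5 or i_d < 0.64

# Illustrative measurements in mm (hypothetical values).
csa = csa_index(8.0, 10.0)  # 0.8, below the 1.5 cut-off
i_d = id_ratio(4.5, 9.0)    # 0.5, below the 0.64 cut-off
print(csa, i_d, suggests_coarctation(csa, i_d))
```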
Abstract:
Background: Plant hormones play a pivotal role in several physiological processes during a plant's life cycle, from germination to senescence, and the determination of endogenous concentrations of hormones is essential to elucidate the role of a particular hormone in any physiological process. Availability of a sensitive and rapid method to quantify multiple classes of hormones simultaneously will greatly facilitate the investigation of signaling networks controlling specific developmental pathways and physiological responses. Due to the presence of hormones at very low concentrations in plant tissues (10⁻⁹ M to 10⁻⁶ M) and their different chemistries, the development of a high-throughput and comprehensive method for the determination of hormones is challenging. Results: The present work reports a rapid, specific and sensitive method using ultrahigh-performance liquid chromatography coupled to electrospray ionization tandem mass spectrometry (UPLC/ESI-MS/MS) to quantitatively analyze, within six minutes, the major hormones found in plant tissues, including auxins, cytokinins, gibberellins, abscisic acid, 1-aminocyclopropane-1-carboxylic acid (the ethylene precursor), jasmonic acid and salicylic acid. Sample preparation, extraction procedures and UPLC-MS/MS conditions were optimized for the determination of all plant hormones and are summarized in a schematic extraction diagram for the analysis of small amounts of plant material without time-consuming additional steps such as purification, sample drying or re-suspension. Conclusions: This new method is applicable to the analysis of dynamic changes in endogenous concentrations of hormones to study plant developmental processes or plant responses to biotic and abiotic stresses in complex tissues. An example is shown in which a hormone profile is obtained from leaves of the aromatic plant Rosmarinus officinalis exposed to salt stress.
Abstract:
This paper evaluates the reception of Léon Walras' ideas in Russia before 1920. Despite an unfavourable institutional context, Walras was read by Russian economists. On the one hand, Bortkiewicz and Winiarski, who lived outside Russia and had the opportunity to meet and correspond with Walras, were first-class readers and very good ambassadors for Walras' ideas; on the other, the economists living in Russia were more selective in their readings. They restricted themselves to Walras' Elements of Pure Economics, in particular its theory of exchange, while ignoring its theory of production. We introduce a cultural argument to explain their selective reading. JEL classification numbers: B13, B19.