924 results for Distributed process model
Abstract:
The integrity of the cornea, the most anterior part of the eye, is indispensable for vision. Forty-five million individuals worldwide are bilaterally blind and another 135 million have severely impaired vision in both eyes because of loss of corneal transparency; treatments range from local medications to corneal transplants, and more recently to stem cell therapy. The corneal epithelium is a squamous epithelium that is constantly renewing, with a vertical turnover of 7 to 14 days in many mammals. Identification of slow cycling cells (label-retaining cells) in the limbus of the mouse has led to the notion that the limbus is the niche for the stem cells responsible for the long-term renewal of the cornea; hence, the corneal epithelium is supposedly renewed by cells generated at and migrating from the limbus, in marked opposition to other squamous epithelia in which each resident stem cell has in charge a limited area of epithelium. Here we show that the corneal epithelium of the mouse can be serially transplanted, is self-maintained and contains oligopotent stem cells with the capacity to generate goblet cells if provided with a conjunctival environment. Furthermore, the entire ocular surface of the pig, including the cornea, contains oligopotent stem cells (holoclones) with the capacity to generate individual colonies of corneal and conjunctival cells. Therefore, the limbus is not the only niche for corneal stem cells and corneal renewal is not different from other squamous epithelia. We propose a model that unifies our observations with the literature and explains why the limbal region is enriched in stem cells.
Abstract:
The ability of tumor cells to leave a primary tumor, to disseminate through the body, and to ultimately seed new secondary tumors is universally agreed to be the basis for metastasis formation. An accurate description of the cellular and molecular mechanisms that underlie this multistep process would greatly facilitate the rational development of therapies that effectively allow metastatic disease to be controlled and treated. A number of disparate and sometimes conflicting hypotheses and models have been suggested to explain various aspects of the process, and no single concept explains the mechanism of metastasis in its entirety or encompasses all observations and experimental findings. The exciting progress made in metastasis research in recent years has refined existing ideas, as well as giving rise to new ones. In this review we survey some of the main theories that currently exist in the field, and show that significant convergence is emerging, allowing a synthesis of several models to give a more comprehensive overview of the process of metastasis. As a result we postulate a stromal progression model of metastasis. In this model, progressive modification of the tumor microenvironment is equally as important as genetic and epigenetic changes in tumor cells during primary tumor progression. Mutual regulatory interactions between stroma and tumor cells modify the stemness of the cells that drive tumor growth, in a manner that involves epithelial-mesenchymal and mesenchymal-epithelial-like transitions. Similar interactions need to be recapitulated at secondary sites for metastases to grow. Early disseminating tumor cells can progress at the secondary site in parallel to the primary tumor, both in terms of genetic changes, as well as progressive development of a metastatic stroma. 
Although this model brings together many ideas in the field, a number of major open questions nevertheless remain, underscoring the need for further research to fully understand metastasis and thereby identify new and effective ways of treating metastatic disease.
Abstract:
We derive an international asset pricing model that assumes local investors have preferences of the type "keeping up with the Joneses." In an international setting investors compare their current wealth with that of their peers who live in the same country. In the process of inferring the country's average wealth, investors incorporate information from the domestic market portfolio. In equilibrium, this gives rise to a multifactor CAPM where, together with the world market price of risk, there exist country-specific prices of risk associated with deviations from the country's average wealth level. The model performs significantly better, in terms of explaining the cross-section of returns, than the international CAPM. Moreover, the results are robust, both for conditional and unconditional tests, to the inclusion of currency risk, macroeconomic sources of risk and the Fama and French HML factor.
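The two-pass logic behind such a cross-sectional test can be sketched on simulated data. All data, parameter values, factor names and the plain-OLS estimator below are our own illustrative assumptions, not the paper's estimation procedure:

```python
import numpy as np

# Sketch of a two-pass cross-sectional test for a model with a world
# market factor and a country "keeping up with the Joneses" factor
# (deviation from the country's average wealth). Everything is simulated.
rng = np.random.default_rng(0)
T, N = 600, 25                        # months, assets
world = rng.normal(0.005, 0.04, T)    # world market factor
jones = rng.normal(0.0, 0.02, T)      # country-deviation factor
world_d = world - world.mean()
jones_d = jones - jones.mean()

beta_w = rng.uniform(0.5, 1.5, N)     # true factor loadings
beta_j = rng.uniform(-1.0, 1.0, N)
lam_w, lam_j = 0.005, 0.002           # assumed prices of risk
mu = lam_w * beta_w + lam_j * beta_j  # expected excess returns
R = mu + np.outer(world_d, beta_w) + np.outer(jones_d, beta_j) \
       + rng.normal(0, 0.01, (T, N))

# Pass 1: time-series regressions recover each asset's factor loadings.
X = np.column_stack([np.ones(T), world_d, jones_d])
B = np.linalg.lstsq(X, R, rcond=None)[0]          # shape (3, N)

# Pass 2: regress mean returns on the estimated betas to obtain the
# cross-sectional prices of risk, including the country-specific one.
Z = np.column_stack([np.ones(N), B[1], B[2]])
lam_hat = np.linalg.lstsq(Z, R.mean(axis=0), rcond=None)[0]
```

With enough time-series observations, `lam_hat` recovers the assumed prices of risk for both the world factor and the country factor.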
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
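The split-sample, complexity-penalized selection idea can be illustrated with a deliberately crude stand-in: the paper's empirical covers are replaced by one fitted candidate per class and a penalty growing with class size, the classes are polynomial degrees, and the penalty constant is an arbitrary illustrative choice:

```python
import numpy as np

# Simplified sketch of complexity-penalized model selection with
# sample splitting. Data are synthetic: Y = sin(pi * X) + noise.
rng = np.random.default_rng(1)
n = 400
X = rng.uniform(-1, 1, n)
Y = np.sin(np.pi * X) + rng.normal(0, 0.2, n)

X1, Y1 = X[: n // 2], Y[: n // 2]   # part 1: fit a candidate per class
X2, Y2 = X[n // 2 :], Y[n // 2 :]   # part 2: assess empirical risk

best_deg, best_score = None, np.inf
for deg in range(12):               # model classes F_1, F_2, ...
    coefs = np.polyfit(X1, Y1, deg)
    risk = np.mean((np.polyval(coefs, X2) - Y2) ** 2)
    score = risk + (deg + 1) / len(X2)   # empirical risk + complexity
    if score < best_score:
        best_deg, best_score = deg, score
print(best_deg)
```

The selected degree balances the held-out empirical risk against the complexity charge, mimicking the tradeoff between approximation and estimation error described above.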
Abstract:
This paper presents a two-factor model of the term structure of interest rates. We assume that default-free discount bond prices are determined by the time to maturity and two factors, the long-term interest rate and the spread (difference between the long-term rate and the short-term (instantaneous) riskless rate). Assuming that both factors follow a joint Ornstein-Uhlenbeck process, a general bond pricing equation is derived. We obtain a closed-form expression for bond prices and examine its implications for the term structure of interest rates. We also derive a closed-form solution for interest rate derivative prices. This expression is applied to price European options on discount bonds and more complex types of options. Finally, empirical evidence of the model's performance is presented.
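The two-factor setup can be sketched by Monte Carlo rather than by the paper's closed form: the long rate and the spread follow correlated Ornstein-Uhlenbeck dynamics, the short rate is their difference, and the discount bond price is approximated as the expected exponential of the integrated short rate. All parameter values below are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

# Euler simulation of two correlated OU factors: long rate L, spread s.
rng = np.random.default_rng(2)
kL, thL, sL = 0.2, 0.05, 0.01      # OU speed, mean, vol of the long rate
ks, ths, ss = 0.8, 0.01, 0.008     # ... and of the spread
rho = -0.3                          # assumed factor correlation
T, steps, paths = 5.0, 500, 20000
dt = T / steps

L = np.full(paths, 0.05)
s = np.full(paths, 0.01)
integral = np.zeros(paths)
for _ in range(steps):
    z1 = rng.normal(size=paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=paths)
    L += kL * (thL - L) * dt + sL * np.sqrt(dt) * z1
    s += ks * (ths - s) * dt + ss * np.sqrt(dt) * z2
    integral += (L - s) * dt        # short rate r = L - s

price = np.exp(-integral).mean()    # zero-coupon bond maturing at T
print(round(price, 4))
```

With a short rate near 4 %, the simulated five-year discount bond price lands close to exp(-0.2), as expected.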
Abstract:
This paper argues that any specific utility or disutility for gambling must be excluded from expected utility because such a theory is consequential, while a pleasure or displeasure for gambling is a matter of process, not of consequences. A (dis)utility for gambling is modeled as a process utility which monotonically combines with expected utility restricted to consequences. This allows for a process (dis)utility for gambling to be revealed. As an illustration, the model shows how empirical observations in the Allais paradox can reveal a process disutility of gambling. A more general model of rational behavior combining processes and consequences is then proposed and discussed.
Abstract:
Achieving high quality in final pharmaceutical products is a challenge that requires the control and supervision of all manufacturing steps. This requirement has created the need to develop fast and accurate analytical methods. Near-infrared spectroscopy combined with chemometrics fulfills this growing demand: the speed with which it provides relevant information and the versatility of its application to different types of samples make these combined techniques among the most appropriate. This study focuses on the development of a calibration model able to determine amounts of API in industrial granulates using NIR, chemometrics and process spectra methodology.
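The calibration-model idea can be illustrated on synthetic data. Real chemometric workflows use PLS with spectral preprocessing and validation; here ordinary least squares on simulated two-component spectra stands in for that, and every spectrum, concentration and name is an assumption of ours:

```python
import numpy as np

# Toy inverse calibration: predict API content from an NIR-like spectrum.
rng = np.random.default_rng(3)
n_train, n_wl = 200, 40                 # training samples, wavelengths
pure = rng.normal(size=n_wl)            # assumed pure-API spectrum
bg = rng.normal(size=n_wl)              # excipient/background spectrum

api = rng.uniform(5.0, 15.0, n_train)   # reference API content (%)
filler = rng.uniform(80.0, 100.0, n_train)
spectra = (np.outer(api, pure) + np.outer(filler, bg)
           + rng.normal(0, 0.2, (n_train, n_wl)))

X = np.column_stack([np.ones(n_train), spectra])
coef = np.linalg.lstsq(X, api, rcond=None)[0]   # inverse calibration

# Predict a new granulate whose true (simulated) API content is 10 %.
new = 10.0 * pure + 90.0 * bg + rng.normal(0, 0.2, n_wl)
pred = coef[0] + new @ coef[1:]
print(round(pred, 2))
```

The fitted model recovers the simulated API content of the unseen sample, which is the role a NIR calibration plays for industrial granulates.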
Abstract:
The vulnerability of subpopulations of retinal neurons delineated by their content of cytoskeletal or calcium-binding proteins was evaluated in the retinas of cynomolgus monkeys in which glaucoma was produced with an argon laser. We quantitatively compared the number of neurons containing either neurofilament (NF) protein, parvalbumin, calbindin or calretinin immunoreactivity in central and peripheral portions of the nasal and temporal quadrants of the retina from glaucomatous and fellow non-glaucomatous eyes. There was no significant difference between the proportion of amacrine, horizontal and bipolar cells labeled with antibodies to the calcium-binding proteins comparing the two eyes. NF triplet immunoreactivity was present in a subpopulation of retinal ganglion cells, many of which, but not all, likely correspond to large ganglion cells that subserve the magnocellular visual pathway. Loss of NF protein-containing retinal ganglion cells was widespread throughout the central (59-77% loss) and peripheral (96-97%) nasal and temporal quadrants and was associated with the loss of NF-immunoreactive optic nerve fibers in the glaucomatous eyes. Comparison of counts of NF-immunoreactive neurons with total cell loss evaluated by Nissl staining indicated that NF protein-immunoreactive cells represent a large proportion of the cells that degenerate in the glaucomatous eyes, particularly in the peripheral regions of the retina. Such data may be useful in determining the cellular basis for sensitivity to this pathologic process and may also be helpful in the design of diagnostic tests that may be sensitive to the loss of the subset of NF-immunoreactive ganglion cells.
Abstract:
In the traditional actuarial risk model, if the surplus is negative, the company is ruined and has to go out of business. In this paper we distinguish between ruin (negative surplus) and bankruptcy (going out of business), where the probability of bankruptcy is a function of the level of negative surplus. The idea for this notion of bankruptcy comes from the observation that in some industries, companies can continue doing business even though they are technically ruined. Assuming that dividends can only be paid with a certain probability at each point of time, we derive closed-form formulas for the expected discounted dividends until bankruptcy under a barrier strategy. Subsequently, the optimal barrier is determined, and several explicit identities for the optimal value are found. The surplus process of the company is modeled by a Wiener process (Brownian motion).
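A Monte Carlo discretization (ours, not the paper's closed-form formulas) makes the ruin/bankruptcy distinction concrete: above a barrier b the overflow is paid as dividends, but only with probability p_div at each time step; with negative surplus x < 0 the company goes bankrupt in a step with probability 1 - exp(lam * x * dt), an assumed bankruptcy rate increasing in |x|. All parameter values are illustrative:

```python
import numpy as np

# Discounted dividends until bankruptcy under a barrier strategy,
# with Brownian surplus and probabilistic bankruptcy below zero.
rng = np.random.default_rng(4)
mu, sigma, delta = 0.5, 1.0, 0.05    # drift, volatility, discount rate
b, p_div, lam = 2.0, 0.5, 5.0        # barrier, dividend prob., bankruptcy rate
dt, steps, paths = 0.01, 2000, 2000

x = np.zeros(paths)                  # surplus per path
divs = np.zeros(paths)               # discounted dividends per path
alive = np.ones(paths, dtype=bool)   # not yet bankrupt
t = 0.0
for _ in range(steps):
    x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=alive.sum())
    pay = alive & (x > b) & (rng.random(paths) < p_div)
    divs[pay] += np.exp(-delta * t) * (x[pay] - b)   # pay out the overflow
    x[pay] = b
    ruined = alive & (x < 0)                         # ruin: negative surplus
    bankrupt = ruined & (rng.random(paths) < 1 - np.exp(lam * x * dt))
    alive &= ~bankrupt                               # bankruptcy: out of business
    t += dt
mean_value = divs.mean()
print(round(mean_value, 3))
```

Sweeping the barrier b in such a simulation approximates the optimal-barrier search that the paper solves in closed form.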
Abstract:
This article describes the implementation of an administrative records management system at the Departament d'Universitats, Recerca i Societat de la Informació of the Generalitat de Catalunya (now the Departament d'Innovació, Universitats i Empresa). The project, called DursiGED, covered the analysis, development and implementation of the products and services needed to manage electronic and paper documentation in a standardized way and in accordance with quality parameters. The article describes the project planning, the implementation and follow-up process, and the resulting products and services.
Abstract:
In this paper we present a model to explain the process of winegrowing specialization reached by the municipalities of the province of Barcelona by the mid-nineteenth century, seeking to understand how a comparative advantage arose historically from a process that would become one of the starting points of industrialization in Catalonia. The results confirm the roles played by a "Boserupian" population push in a context of intensified land use, and by a "Smithian" market pull in a context of expanding demand from the Atlantic economies. They also highlight the importance of agro-ecological endowments and of socio-institutional conditions related to income inequality. The spread of vineyards resulted in less unequal rural communities until 1820, although inequality rose again thereafter.
Abstract:
In studies of the natural history of HIV-1 infection, the time scale of primary interest is the time since infection. Unfortunately, this time is very often unknown for HIV infection, and using the follow-up time instead of the time since infection is likely to provide biased results because of onset confounding. Laboratory markers such as the CD4 T-cell count carry important information concerning disease progression and can be used to predict the unknown date of infection. Previous work on this topic has made use of only one CD4 measurement or based the imputation on incident patients only. However, because of considerable intrinsic variability in CD4 levels and because incident cases are different from prevalent cases, back calculation based on only one CD4 determination per person or on characteristics of the incident sub-cohort may provide unreliable results. Therefore, we propose a methodology based on repeated individual CD4 T-cell marker measurements that uses both incident and prevalent cases to impute the unknown date of infection. Our approach uses joint modelling of the time since infection, the CD4 time path and the drop-out process. This methodology has been applied to estimate the CD4 slope and impute the unknown date of infection in HIV patients from the Swiss HIV Cohort Study. A procedure based on the comparison of different slope estimates is proposed to assess the goodness of fit of the imputation. Results of simulation studies indicated that the imputation procedure worked well, despite the intrinsic high volatility of the CD4 marker.
Abstract:
Mathematical models have great potential to support land use planning, with the goal of improving water and land quality. Before using a model, however, the model must demonstrate that it can correctly simulate the hydrological and erosive processes of a given site. The SWAT model (Soil and Water Assessment Tool) was developed in the United States to evaluate the effects of conservation agriculture on hydrological processes and water quality at the watershed scale. This model was initially proposed for use without calibration, which would eliminate the need for measured hydro-sedimentologic data. In this study, the SWAT model was evaluated in a small rural watershed (1.19 km²) located on the basalt slopes of the state of Rio Grande do Sul in southern Brazil, where farmers have been using cover crops associated with minimum tillage to control soil erosion. Values simulated by the model were compared with measured hydro-sedimentological data. Results for surface and total runoff on a daily basis were considered unsatisfactory (Nash-Sutcliffe efficiency coefficient - NSE < 0.5). However, simulation results on monthly and annual scales were significantly better. With regard to the erosion process, the simulated sediment yields for all years of the study were unsatisfactory in comparison with the observed values on a daily and monthly basis (NSE values < -6), and overestimated the annual sediment yield by more than 100 %.
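The Nash-Sutcliffe efficiency used to judge these simulations compares model output against observations relative to the observed mean: NSE = 1 is a perfect fit, NSE = 0 means the model does no better than predicting the mean, and negative values (as in the daily sediment results) mean it does worse. The data below are illustrative, not from the study:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance-around-mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
print(nse(obs, obs))                               # perfect fit -> 1.0
print(round(nse(obs, np.full(5, obs.mean())), 2))  # mean predictor -> 0.0
```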
Abstract:
The development of side-branching in solidifying dendrites in a regime of large values of the Peclet number is studied by means of a phase-field model. We have compared our numerical results with experiments of the preceding paper and we obtain good qualitative agreement. The growth rate of each side branch shows a power-law behavior from the early stages of its life. From their birth, branches which finally succeed in the competition process of side-branching development have a greater growth exponent than branches which are stopped. Coarsening of branches is entirely defined by their geometrical position relative to their dominant neighbors. The winner branches escape from the diffusive field of the main dendrite and become independent dendrites.
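A side branch's growth exponent can be read off a log-log fit of branch length versus time, since a power law L(t) ~ A t^z becomes the straight line log L = log A + z log t. The synthetic data below, with an assumed exponent z = 0.8, only illustrate the fitting step:

```python
import numpy as np

# Recover a known power-law growth exponent from noisy length data.
rng = np.random.default_rng(5)
t = np.linspace(1.0, 50.0, 100)
length = 0.3 * t ** 0.8 * np.exp(rng.normal(0, 0.05, t.size))  # noisy L(t)

z, logA = np.polyfit(np.log(t), np.log(length), 1)   # slope = exponent
print(round(z, 2))
```

Comparing exponents fitted this way for neighboring branches is how one would quantify the claim that winner branches grow with a larger exponent than stopped ones.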
Abstract:
Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are, therefore, of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man oriented), FMEA (system oriented), or HAZOP (process oriented), is not satisfactory. The use of a dynamic modeling approach in order to allow multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized on an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes in an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced, in an explicit way, through machine interconnections and flow properties. While man-machine interactions are modeled as triggering events for the state spaces of the machines, the CREAM cognitive behavior model is used in order to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flow and consequently the interconnection of the measure constraints. This is reflected by the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. 
The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. But, most of all, it opens perspectives in the field of risk comparisons and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
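The token-flow mechanics underlying such Petri-net process models can be shown with a minimal place/transition net (CO-OPN itself is object-oriented and far richer; the places, transitions and machine names below are our own illustrative choices):

```python
# A tiny place/transition Petri net: raw parts flow through a machine
# and a buffer into a finished state. Each transition consumes and
# produces tokens according to its input/output arcs.
net = {
    "load":    {"in": {"raw": 1},      "out": {"machineA": 1}},
    "process": {"in": {"machineA": 1}, "out": {"buffer": 1}},
    "pack":    {"in": {"buffer": 1},   "out": {"done": 1}},
}
marking = {"raw": 3, "machineA": 0, "buffer": 0, "done": 0}

def enabled(t):
    return all(marking[p] >= w for p, w in net[t]["in"].items())

def fire(t):
    for p, w in net[t]["in"].items():
        marking[p] -= w
    for p, w in net[t]["out"].items():
        marking[p] += w

# Fire transitions greedily until none is enabled (completion/deadlock).
while any(enabled(t) for t in net):
    for t in net:
        if enabled(t):
            fire(t)
print(marking)  # all three raw tokens end up in "done"
```

State-space analyses like the ones described above amount to exploring which markings are reachable under such firing rules, with measures attached to the flowing entities.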