951 results for Model for bringing into play


Relevance:

30.00%

Publisher:

Abstract:

This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates as in Baumol (1952). Despite its simplicity in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. over the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application quantifies the error made when the costs of inflation are computed with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
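
As a rough illustration of the mechanism described above (not the paper's general equilibrium model), the sketch below computes the Baumol (1952) square-root money demand and the implied velocity for a few interest rates; all parameter values are hypothetical.

```python
import numpy as np

def baumol_velocity(income, interest_rate, transaction_cost):
    """Baumol (1952) square-root rule: average money holdings M = sqrt(c*Y / (2*i)),
    so velocity V = Y / M rises with the interest rate."""
    money_demand = np.sqrt(transaction_cost * income / (2.0 * interest_rate))
    return income / money_demand

# Illustrative parameters (hypothetical, not calibrated to the paper)
income, cost = 1.0, 0.01
for i in (0.02, 0.05, 0.10):
    print(f"i = {i:.2f}  ->  velocity = {baumol_velocity(income, i, cost):.2f}")
```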

Relevance:

30.00%

Publisher:

Abstract:

Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the sake of tractability. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternative finite-population model that avoids the problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of the population size and the model parameters.
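
To illustrate the identification problem mentioned above, the hedged sketch below profiles the binomial log-likelihood over candidate market sizes with simulated sales data: pairs (N, p) with similar N·p explain the observed purchases almost equally well, which is what makes the estimation challenging. The numbers are hypothetical and this is not the paper's estimation heuristic.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
true_N, true_p = 200, 0.05                      # market size and purchase probability (hypothetical)
sales = rng.binomial(true_N, true_p, size=50)   # only purchases are observed, never no-purchases

# Profile log-likelihood over candidate market sizes: many (N, p) pairs with
# similar N*p fit the data almost equally well -> weak identification.
for N in (100, 200, 400, 800):
    p_hat = sales.mean() / N                    # MLE of p given N
    ll = binom.logpmf(sales, N, p_hat).sum()
    print(f"N = {N:4d}  p_hat = {p_hat:.4f}  log-lik = {ll:.2f}")
```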

Relevance:

30.00%

Publisher:

Abstract:

Most facility location decision models ignore the fact that, for a facility to survive, it needs a minimum demand level to cover costs. In this paper we present a decision model for a firm that wishes to enter a spatial market where several competitors are already located. This market is such that each outlet must reach a demand threshold level in order to survive. The firm wishes to know where to locate its outlets so as to maximize its market share, taking the threshold level into account. It may happen that, because of this new entrance, some competitors will not be able to meet the threshold and will therefore disappear. A formulation is presented together with a heuristic solution method and computational experience.
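
The following toy sketch (not the paper's formulation or heuristic) conveys the flavour of the problem: a greedy enumeration picks two new outlet sites, assigns each customer to the nearest outlet, and counts as captured only the demand at new outlets that meet the survival threshold. Customer locations, demands, and the threshold are hypothetical, and the reassignment of demand after a competitor fails is deliberately ignored for brevity.

```python
import itertools

customers = {(1, 1): 30, (4, 1): 25, (2, 5): 20, (6, 4): 25}   # location -> demand (hypothetical)
competitors = [(0, 0), (6, 5)]
candidates = [(2, 2), (5, 2), (3, 4)]
THRESHOLD = 20

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def captured_share(new_sites):
    """Demand captured by the new outlets that reach the survival threshold."""
    outlets = competitors + list(new_sites)
    demand = {o: 0 for o in outlets}
    for c, d in customers.items():
        demand[min(outlets, key=lambda o: dist(o, c))] += d    # nearest-outlet assignment
    return sum(demand[o] for o in new_sites if demand[o] >= THRESHOLD)

best = max(itertools.combinations(candidates, 2), key=captured_share)
print("best pair of sites:", best, "captured demand:", captured_share(best))
```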

Relevance:

30.00%

Publisher:

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the error on the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
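
As a minimal sketch of the label-flipping equivalence mentioned in the last sentence, the code below estimates the maximal discrepancy by flipping the labels on the first half of the sample and approximately minimizing empirical error over the modified sample; a depth-limited decision tree stands in for exact empirical risk minimization, and the data are synthetic.

```python
# Maximal-discrepancy penalty: flip the labels on one half of the sample and
# (approximately) minimize empirical error over the modified sample; the
# discrepancy is then 1 - 2 * (modified training error).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)   # synthetic labels

half = len(y) // 2
y_flipped = y.copy()
y_flipped[:half] = 1 - y_flipped[:half]                      # flip first-half labels

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_flipped)
modified_error = np.mean(clf.predict(X) != y_flipped)
max_discrepancy = 1.0 - 2.0 * modified_error                 # data-based penalty term
print(f"approximate maximal discrepancy penalty: {max_discrepancy:.3f}")
```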

Relevance:

30.00%

Publisher:

Abstract:

Forensic experts play a major role in the legal process, offering professional expert opinion and evidence within the criminal justice system that adjudicates on the innocence or alleged guilt of an accused person. In this respect, the medico-legal examination is an essential part of the investigation process, determining in a scientific way the cause(s) and manner of unexpected and/or unnatural death, or bringing clinical evidence in cases of physical, psychological, or sexual abuse of living people. From a legal perspective, these types of investigation must meet international standards, i.e., they should be independent, effective, and prompt. Ideally, the investigations should be conducted by board-certified experts in forensic medicine with solid experience in this field, without any hierarchical relationship with the prosecuting authorities and with access to appropriate facilities, in order to provide forensic reports of high quality. In this respect, any private or public national or international authority, including non-governmental organizations, seeking experts qualified in forensic medicine needs to have at its disposal a list of specialists working in accordance with high standards of professional performance within forensic pathology services that have successfully undergone an official accreditation/certification process using valid and acceptable criteria. To reach this goal, the National Association of Medical Examiners (NAME) has elaborated an accreditation/certification checklist intended to serve as decision-making support for inspectors appointed to evaluate applicants. In the same spirit as the NAME Accreditation Standards, the European Council of Legal Medicine (ECLM) board decided to set up an ad hoc working group with the mission of elaborating an accreditation/certification procedure similar to NAME's, but taking into account the realities of forensic medicine practice in Europe and restricted to post-mortem investigations. This accreditation process applies to services rather than to individual practitioners, emphasizing policies and procedures rather than professional performance. In addition, the standards to be complied with should be considered the minimum standards needed for recognition as a well-performing and reliable forensic pathology service.

Relevance:

30.00%

Publisher:

Abstract:

We propose a method for brain atlas deformation in the presence of large space-occupying tumors, based on an a priori model of lesion growth that assumes radial expansion of the lesion from its starting point. Our approach involves three steps. First, an affine registration brings the atlas and the patient into global correspondence. Then, the seeding of a synthetic tumor into the brain atlas provides a template for the lesion. The last step is the deformation of the seeded atlas, combining a method derived from optical flow principles and a model of lesion growth. Results show that a good registration is performed and that the method can be applied to automatic segmentation of structures and substructures in brains with gross deformation, with important medical applications in neurosurgery, radiosurgery, and radiotherapy.
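
A minimal sketch of a radial-expansion prior of the kind described (voxels displaced away from the seed point, with a decay profile chosen arbitrarily here) is given below; it is not the authors' registration pipeline, and the decay law and parameters are assumptions.

```python
import numpy as np

def radial_growth_field(shape, seed, radius):
    """Toy 2D lesion-growth prior: pixels are pushed radially away from the seed,
    with displacement decaying with distance (hypothetical decay profile)."""
    zz, yy = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    dz, dy = zz - seed[0], yy - seed[1]
    dist = np.sqrt(dz**2 + dy**2) + 1e-6          # avoid division by zero at the seed
    magnitude = radius * np.exp(-dist / radius)   # strongest near the seed
    return magnitude * dz / dist, magnitude * dy / dist

uz, uy = radial_growth_field((128, 128), seed=(64, 64), radius=15.0)
print(uz.shape, float(uz.max()))
```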

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: A new caval tree system was designed for realistic in vitro simulation. The objective of our study was to assess cannula performance for virtually wall-less versus standard percutaneous thin-walled venous cannulas in a setting of venous collapse under negative pressure. METHODS: For a collapsible caval model, a very flexible plastic material was selected, and a model with nine afferent veins was designed according to the anatomy of the vena cava. A flow bench was built, including a lower reservoir holding the caval tree, which was constructed taking into account the main afferent vessels and their flow, supplied from a reservoir 6 cm above. A cannula was inserted into this caval tree and connected to a centrifugal pump that, in turn, was connected to a reservoir positioned 83 cm above the second, lower reservoir (after-load = 60 mmHg). Using the same pre-load, venous drainage for cardiopulmonary bypass was simulated with a 24 F wall-less cannula (Smartcanula) and a 25 F percutaneous cannula (Biomedicus), with stepwise augmentation of drainage (1500, 2000 and 2500 RPM). RESULTS: For the thin-wall and the wall-less cannulas, 36 pairs of flow and pressure measurements were obtained at the three RPM settings. The mean Q values at 1500, 2000 and 2500 RPM were 3.98 ± 0.01, 6.27 ± 0.02 and 9.81 ± 0.02 l/min for the wall-less cannula (P < 0.0001), versus 2.74 ± 0.02, 3.06 ± 0.05 and 6.78 ± 0.02 l/min for the thin-wall cannula (P < 0.0001). The corresponding inlet pressure values were -8.88 ± 0.01, -23.69 ± 0.81 and -70.22 ± 0.18 mmHg for the wall-less cannula (P < 0.0001), versus -36.69 ± 1.88, -80.85 ± 1.71 and -101.83 ± 0.45 mmHg for the thin-wall cannula (P < 0.0001). The thin-wall cannula showed mean Q values 37% lower and mean P values 26% more when compared with the wall-less cannula (P < 0.0001). CONCLUSIONS: Our in vitro water test was able to mimic a negative-pressure situation in which the wall-less cannula design performs better than the traditional thin-wall cannula.
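
A back-of-envelope check of the reported 37% flow difference, obtained by averaging the three per-RPM mean Q values of each cannula; this is one plausible reading of how the summary figure was derived, not a stated calculation from the paper.

```python
wall_less_Q = [3.98, 6.27, 9.81]     # l/min at 1500, 2000, 2500 RPM
thin_wall_Q = [2.74, 3.06, 6.78]

mean_wl = sum(wall_less_Q) / 3
mean_tw = sum(thin_wall_Q) / 3
print(f"mean Q wall-less: {mean_wl:.2f} l/min, thin-wall: {mean_tw:.2f} l/min")
print(f"thin-wall flow is {(1 - mean_tw / mean_wl) * 100:.0f}% lower")   # ~37%
```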

Relevance:

30.00%

Publisher:

Abstract:

Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and to assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored with respect to the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA), and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence the fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of the noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
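
A minimal sketch of the polynomial decomposition described above: a weighted second-order fit of variance against DAK, σ²(K) = c_e + c_q·K + c_f·K², interpreted as electronic, quantum and fixed-pattern terms. The variance data below are synthetic, not measurements from the six units, and the 1/DAK weighting is only one reasonable choice.

```python
import numpy as np

# Synthetic variance measurements at several detector air kerma (DAK) levels
dak = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600])          # µGy
variance = 4.0 + 2.5 * dak + 0.001 * dak**2 \
           + np.random.default_rng(1).normal(0, 0.5, dak.size)

# Weighted second-order polynomial fit: sigma^2(K) = c_e + c_q*K + c_f*K^2
# (electronic, quantum and fixed-pattern terms); weights favour low-exposure points.
c_f, c_q, c_e = np.polyfit(dak, variance, deg=2, w=1.0 / dak)
for K in (10, 100, 1000):
    total = c_e + c_q * K + c_f * K**2
    print(f"DAK {K:5d} µGy: electronic {c_e/total:.1%}, quantum {c_q*K/total:.1%}, "
          f"fixed-pattern {c_f*K**2/total:.1%}")
```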

Relevance:

30.00%

Publisher:

Abstract:

A polarizable quantum mechanics and molecular mechanics model has been extended to account for the difference between the macroscopic electric field and the actual electric field felt by the solute molecule. This enables the calculation of effective microscopic properties which can be related to macroscopic susceptibilities that are directly comparable with experimental results. By separating the discrete local field into two distinct contributions, we define two different microscopic properties, the so-called solute and effective properties. The solute properties account for the pure solvent effects, i.e., effects present even when the macroscopic electric field is zero, and the effective properties account for both the pure solvent effects and the effect of the dipoles induced in the solvent by the macroscopic electric field. We present results for the linear and nonlinear polarizabilities of water and acetonitrile, both in the gas phase and in the liquid phase. For all the properties we find that the pure solvent effect increases the properties, whereas the induced electric field decreases them. Furthermore, we present results for the refractive index, third-harmonic generation (THG), and electric-field-induced second-harmonic generation (EFISH) for liquid water and acetonitrile. We find in general good agreement between the calculated and experimental results for the refractive index and the THG susceptibility. For the EFISH susceptibility, however, the difference between experiment and theory is larger, since the orientational effect arising from the static electric field is not accurately described.
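
The sketch below is not the paper's polarizable QM/MM local-field treatment; it only illustrates, via the Lorentz-Lorenz relation, how a microscopic polarizability connects to a macroscopic optical property (the refractive index), using approximate literature values for liquid water rather than values from the paper.

```python
import math

# Lorentz-Lorenz relation: (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha,
# with alpha the polarizability volume. Approximate literature values for water.
N = 3.34e28            # molecular number density of liquid water, m^-3
alpha = 1.47e-30       # mean polarizability volume, m^3

x = (4.0 * math.pi / 3.0) * N * alpha
n = math.sqrt((1.0 + 2.0 * x) / (1.0 - x))
print(f"estimated refractive index of water: {n:.3f}")   # ~1.33
```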

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the main results of research, conducted within the "family alliance" paradigm, on the development of mother-father-child triadic interactions during the transition to parenthood. First, this research has shown that the quality of triadic interactions tends to be stable across the child's first two years, and that it can be predicted during pregnancy by observing interactions in a simulated triadic play. Second, it has shown that disturbances in triadic interactions affect several affective and cognitive developmental outcomes for the child throughout the first five years (for example, the capacity for triangular attention during the first months, or the development of theory of mind and externalizing behaviors at age five). This influence is exerted over and above that of other variables such as the mother-child attachment relationship or the personality of the child as assessed by temperament. The triad therefore constitutes a context of development in its own right, which has to be taken into account when working clinically with young children.

Relevance:

30.00%

Publisher:

Abstract:

The Neolithic was marked by a transition from small and relatively egalitarian groups to much larger groups with increased stratification, but the dynamics of this transition remain poorly understood. It is hard to see how despotism can arise without coercion, yet coercion could not easily have occurred in an egalitarian setting. Using a quantitative model of evolution in a patch-structured population, we demonstrate that the interaction between demographic and ecological factors can overcome this conundrum. We model the coevolution of individual preferences for hierarchy alongside the degree of despotism of leaders and the dispersal preferences of followers. We show that voluntary leadership without coercion can evolve in small groups when leaders help to solve coordination problems related to resource production, for example coordinating the construction of an irrigation system. Our model predicts that the transition to larger despotic groups will then occur when: (i) surplus resources lead to demographic expansion of groups, removing the viability of an acephalous niche in the same area and so locking individuals into hierarchy; and (ii) high dispersal costs limit followers' ability to escape a despot. Empirical evidence suggests that these conditions were probably met, for the first time, during the subsistence intensification of the Neolithic.
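
A toy numerical illustration of condition (ii), with entirely hypothetical payoffs and no evolutionary dynamics (so not the authors' model): the higher the cost of dispersing, the larger the share of group production a despot can extract before followers prefer to leave.

```python
def stay_payoff(group_production, despot_share):
    """Payoff of a follower who stays: group output minus the despot's cut."""
    return group_production * (1.0 - despot_share)

def leave_payoff(solitary_production, dispersal_cost):
    """Payoff of a follower who disperses and produces alone."""
    return solitary_production - dispersal_cost

group_production, solitary_production = 10.0, 6.0   # hypothetical values
for dispersal_cost in (0.5, 2.0, 5.0):
    # maximum share a despot can extract before staying becomes worse than leaving
    max_share = 1.0 - leave_payoff(solitary_production, dispersal_cost) / group_production
    print(f"dispersal cost {dispersal_cost:.1f} -> despot can take up to {max_share:.0%}")
```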

Relevance:

30.00%

Publisher:

Abstract:

The complexity of the current business world is making corporate disclosure more and more important for information users. These users, including investors, financial analysts, and government authorities, rely on the disclosed information to make their investment decisions, analyze and recommend shares, and draft regulation policies. Moreover, the globalization of capital markets has made it difficult for information users to understand the differences in corporate disclosure across countries and across firms. Using a sample of 797 firms from 34 countries, this thesis advances the disclosure literature by comprehensively illustrating the determinants of disclosure originating in firm systems and national systems, based on a multilevel latent variable approach. Under this approach, the overall variation associated with the firm-specific variables is decomposed into two parts, a within-country and a between-country part. Accordingly, the model estimates the latent association between corporate disclosure and information demand at two levels, the within-country and the between-country level. The results indicate that the variables originating from corporate systems are hierarchically correlated with those from the country environment. The information demand factor, indicated by the number of exchange listings and the number of analyst recommendations, can significantly explain the variation of corporate disclosure both within and between countries. The exogenous influences of firm fundamentals (firm size and performance) are exerted indirectly through the information demand factor. Specifically, if the between-country variation in firm variables is taken into account, only the variables of legal systems and economic growth remain significant in explaining the disclosure differences across countries. These findings strongly support the hypothesis that disclosure is a response to both corporate systems and national systems, but that the influence of the latter is reflected largely through that of the former. In addition, the results based on ADR (American Depositary Receipt) firms suggest that the globalization of capital markets is harmonizing the disclosure behavior of cross-listed firms, but it cannot entirely eliminate the national features in disclosure and other firm-specific characteristics.
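
For readers unfamiliar with the two-level decomposition, the hedged sketch below fits a simple random-intercept (firm-within-country) model and reports the share of disclosure variance located between countries. The variable and file names are hypothetical placeholders, and this is a simplification of the thesis's multilevel latent variable model, not a reproduction of it.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level panel: columns country, disclosure, analyst_recs, listings, size
df = pd.read_csv("disclosure_panel.csv")

# Random-intercept model: firms nested within countries
model = smf.mixedlm("disclosure ~ analyst_recs + listings + size",
                    data=df, groups=df["country"]).fit()
print(model.summary())

# Intraclass correlation: share of residual disclosure variance located between countries
between = model.cov_re.iloc[0, 0]
within = model.scale
print("between-country share:", between / (between + within))
```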

Relevance:

30.00%

Publisher:

Abstract:

The predictive potential of six selected factors was assessed in 72 patients with primary myelodysplastic syndrome using univariate and multivariate logistic regression analysis of survival at 18 months. The factors were age (above the median of 69 years), dysplastic features in the three myeloid bone marrow cell lineages, presence of chromosome defects, all metaphases abnormal, double or complex chromosome defects (C23), and a Bournemouth score of 2, 3, or 4 (B234). In the multivariate approach, B234 and C23 proved to be significantly associated with a reduction in the survival probability. The similarity of the regression coefficients associated with these two factors means that they have about the same weight. Consequently, the model was simplified by counting the number of factors (0, 1, or 2) present in each patient, thus generating a scoring system called the Lausanne-Bournemouth score (LB score). The LB score combines the well-recognized and easy-to-use Bournemouth score (B score) with the chromosome defect complexity, C23 constituting an additional indicator of patient outcome. The predicted risk of death within 18 months calculated from the model is as follows: 7.1% (confidence interval: 1.7-24.8) for patients with an LB score of 0, 60.1% (44.7-73.8) for an LB score of 1, and 96.8% (84.5-99.4) for an LB score of 2. The scoring system presented here has several interesting features. The LB score may improve the predictive value of the B score, as it is able to distinguish two prognostic groups within the intermediate-risk category of patients with B scores of 2 or 3. It also has the ability to identify two distinct prognostic subclasses among RAEB and possibly CMML patients. In addition to its usefulness in prognostic evaluation, the LB score may bring new insights into the understanding of evolution patterns in MDS. We used the combination of the B score and chromosome complexity to define four classes, which may be considered four possible states of myelodysplasia and which describe two distinct evolutionary pathways.
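
A quick back-of-envelope check of the reported risks: converting the predicted probabilities for LB scores 0, 1 and 2 to log-odds gives nearly equal increments of about 3 per score point, which is consistent with the statement that the two factors carry about the same weight in the logistic model.

```python
import math

risk = {0: 0.071, 1: 0.601, 2: 0.968}    # predicted 18-month death risk by LB score
logit = {s: math.log(p / (1 - p)) for s, p in risk.items()}
for s in (0, 1, 2):
    print(f"LB score {s}: risk {risk[s]:.1%}, log-odds {logit[s]:+.2f}")
print("increments:", round(logit[1] - logit[0], 2), round(logit[2] - logit[1], 2))  # both ~3
```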

Relevance:

30.00%

Publisher:

Abstract:

The number of private gardens has increased in recent years, creating a more pleasant urban model, but not without an environmental impact, including increased energy consumption, which is the focus of this study. The estimation of costs and energy consumption for the generic typology of private urban gardens is based on two simplifying assumptions: square geometry with surface areas from 25 to 500 m², and a hydraulic design with a single pipe. In total, eight sprinkler models were considered, along with their possible working pressures, and 31 pumping units grouped into 5 series that adequately cover the range of required flow rates and pressures, resulting in 495 hydraulic designs repeated for two climatically different locations in the Spanish Mediterranean area (Girona and Elche). Mean total irrigation costs for the locality with lower water needs (Girona) and the one with greater needs (Elche) were €2,974 ha⁻¹ yr⁻¹ and €3,383 ha⁻¹ yr⁻¹, respectively. Energy costs accounted for 11.4% of the total cost for the first location and 23.0% for the second. While a suitable choice of the hydraulic elements of the setup is essential, as it may provide average energy savings of 77%, the potential energy savings do not constitute a significant incentive for the irrigation system design, because the energy cost is low in relation to the cost of installation. The low efficiency of the pumping units used in this type of garden is the biggest obstacle and constraint to achieving a high-quality energy solution.
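
Quick arithmetic recovering the implied energy cost per hectare from the reported totals and percentages:

```python
totals = {"Girona": 2974.0, "Elche": 3383.0}        # € per ha per year
energy_share = {"Girona": 0.114, "Elche": 0.230}

for city, total in totals.items():
    energy = total * energy_share[city]
    print(f"{city}: energy cost ≈ €{energy:.0f} ha⁻¹ yr⁻¹ of €{total:.0f} total")
```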

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: Fibrotic changes are initiated early in acute respiratory distress syndrome. This may involve overproliferation of alveolar type II cells. In an animal model of acute respiratory distress syndrome, we have shown that the administration of an adenoviral vector overexpressing the 70-kd heat shock protein (AdHSP) limited pathophysiological changes. We hypothesized that this improvement may be modulated, in part, by an early AdHSP-induced attenuation of alveolar type II cell proliferation. DESIGN: Laboratory investigation. SETTING: Hadassah-Hebrew University and University of Pennsylvania animal laboratories. SUBJECTS: Sprague-Dawley rats (250 g). INTERVENTIONS: Lung injury was induced in male Sprague-Dawley rats via cecal ligation and double puncture. At the time of cecal ligation and double puncture, we injected phosphate-buffered saline, AdHSP, or AdGFP (an adenoviral vector expressing the marker green fluorescent protein) into the trachea. Rats then received subcutaneous bromodeoxyuridine. In separate experiments, A549 cells were incubated with medium, AdHSP, or AdGFP. Some cells were also stimulated with tumor necrosis factor-alpha. After 48 hrs, cytosolic and nuclear proteins from rat lungs or cell cultures were isolated. These were subjected to immunoblotting, immunoprecipitation, electrophoretic mobility shift assay, fluorescent immunohistochemistry, and Northern blot analysis. MEASUREMENTS AND MAIN RESULTS: Alveolar type I cells were lost within 48 hrs of inducing acute respiratory distress syndrome. This was accompanied by alveolar type II cell proliferation. Treatment with AdHSP preserved alveolar type I cells and limited alveolar type II cell proliferation. Heat shock protein 70 prevented overexuberant cell division, in part, by inhibiting hyperphosphorylation of the regulatory retinoblastoma protein. This prevented retinoblastoma protein ubiquitination and degradation and thus stabilized the interaction of retinoblastoma protein with E2F1, a key cell division transcription factor. CONCLUSIONS: Heat shock protein 70-induced attenuation of cell proliferation may be a useful strategy for limiting lung injury in the treatment of acute respiratory distress syndrome, provided these effects are consistent at later time points.