957 results for Minimum Variance Model
Abstract:
In the double-detonation scenario for Type Ia supernovae, it is suggested that a detonation initiates in a shell of helium-rich material accreted from a companion star by a sub-Chandrasekhar-mass white dwarf. This shell detonation drives a shock front into the carbon-oxygen white dwarf that triggers a secondary detonation in the core. The core detonation results in a complete disruption of the white dwarf. Earlier studies concluded that this scenario has difficulties in accounting for the observed properties of Type Ia supernovae since the explosion ejecta are surrounded by the products of explosive helium burning in the shell. Recently, however, it was proposed that detonations might be possible for much less massive helium shells than previously assumed (Bildsten et al.). Moreover, it was shown that even detonations of these minimum helium shell masses robustly trigger detonations of the carbon-oxygen core (Fink et al.). Therefore, it is possible that the impact of the helium layer on observables is less than previously thought. Here, we present time-dependent multi-wavelength radiative transfer calculations for models with minimum helium shell mass and derive synthetic observables for both the optical and γ-ray spectral regions. These differ strongly from those found in earlier simulations of sub-Chandrasekhar-mass explosions in which more massive helium shells were considered. Our models predict light curves that cover both the range of brightnesses and the rise and decline times of observed Type Ia supernovae. However, their colors and spectra do not match the observations. In particular, their B - V colors are generally too red. We show that this discrepancy is mainly due to the composition of the burning products of the helium shell of the Fink et al. models which contain significant amounts of titanium and chromium. Using a toy model, we also show that the burning products of the helium shell depend crucially on its initial composition. This leads us to conclude that good agreement between sub-Chandrasekhar-mass explosions and observed Type Ia supernovae may still be feasible but further study of the shell properties is required.
Abstract:
The Bi-directional Evolutionary Structural Optimisation (BESO) method is a numerical topology optimisation method developed for use in finite element analysis. This paper presents a particular application of the BESO method to optimise the energy-absorbing capability of metallic structures. The optimisation objective is to evolve a structural geometry of minimum mass while ensuring that the kinetic energy of an impacting projectile is reduced to a level which prevents perforation. Individual elements in a finite element mesh are deleted when a prescribed damage criterion is exceeded. An energy-absorbing structure subjected to projectile impact will fail once the level of damage results in a critical perforation size. It is therefore necessary to prevent the optimisation algorithm from producing such candidate solutions. An algorithm to detect perforation was implemented within a BESO framework which incorporated a ductile material damage model.
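A minimal sketch (not the authors' implementation) of the hard-kill element-deletion step described above, in Python; the damage array, damage limit and fraction-based perforation check are illustrative assumptions rather than the paper's criteria:

import numpy as np

def beso_hard_kill(element_damage, active, damage_limit=1.0, max_hole_fraction=0.05):
    # One BESO-style hard-kill pass: delete elements whose accumulated ductile
    # damage exceeds the limit, then reject the candidate design if the deleted
    # fraction exceeds a crude perforation threshold.
    candidate = active & (element_damage < damage_limit)   # elements kept
    deleted_fraction = 1.0 - candidate.mean()
    if deleted_fraction > max_hole_fraction:               # stand-in perforation check
        return active, False                               # reject: constraint violated
    return candidate, True                                 # accept updated design

# toy usage with random damage values on a 1000-element mesh
rng = np.random.default_rng(1)
damage = rng.uniform(0.0, 1.2, size=1000)
mask = np.ones(1000, dtype=bool)
mask, accepted = beso_hard_kill(damage, mask)
print(mask.sum(), "elements retained; candidate accepted:", accepted)

In the paper the perforation constraint is geometric (a critical hole size detected in the mesh), which the simple deleted-fraction test above only stands in for.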
Abstract:
Molecular communication is set to play an important role in the design of complex biological and chemical systems. An important class of molecular communication systems is based on the timing channel, where information is encoded in the delay of the transmitted molecule - a synchronous approach. At present, a widely used modeling assumption is the perfect synchronization between the transmitter and the receiver. Unfortunately, this assumption is unlikely to hold in most practical molecular systems. To remedy this, we introduce a clock into the model - leading to the molecular timing channel with synchronization error. To quantify the behavior of this new system, we derive upper and lower bounds on the variance-constrained capacity, which we view as the step between the mean-delay and the peak-delay constrained capacity. By numerically evaluating our bounds, we obtain a key practical insight: the drift velocity of the clock links does not need to be significantly larger than the drift velocity of the information link, in order to achieve the variance-constrained capacity with perfect synchronization.
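A small Monte Carlo sketch of the kind of system described, in Python and under assumed parameters: molecule delays are drawn from an inverse Gaussian (Wald) first-hitting-time distribution, and the receiver corrects its clock with a noisy offset estimated from separately released clock molecules. It is a toy illustration only, not the channel model or capacity bounds derived in the paper:

import numpy as np

rng = np.random.default_rng(0)

def arrival_delay(n, distance=1.0, drift_v=1.0, diffusion=0.5):
    # first-hitting time of a molecule drifting towards the receiver:
    # inverse Gaussian with mean d/v and shape d^2 / (2D)
    mean = distance / drift_v
    shape = distance**2 / (2.0 * diffusion)
    return rng.wald(mean, shape, size=n)

n = 100_000
release_times = rng.uniform(0.0, 1.0, size=n)             # encoded information
rx_times = release_times + arrival_delay(n, drift_v=1.0)  # information link

# clock link: zero-mean synchronization error from clock-molecule delays
clock_error = arrival_delay(n, drift_v=2.0) - 1.0 / 2.0
decoded = rx_times - clock_error                          # imperfect synchronization

print("delay variance, information link:", arrival_delay(n, drift_v=1.0).var())
print("effective noise variance with sync error:", (decoded - release_times).var())

Raising drift_v on the clock link shrinks the synchronization-error variance, moving the effective noise towards the perfectly synchronized case against which the abstract's bounds are compared.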
Abstract:
1. Quantitative reconstruction of past vegetation distribution and abundance from sedimentary pollen records provides an important baseline for understanding long-term ecosystem dynamics and for the calibration of earth system process models, such as regional-scale climate models, which are widely used to predict future environmental change. Most current approaches assume that the amount of pollen produced by each vegetation type, usually expressed as a relative pollen productivity term, is constant in space and time.
2. Estimates of relative pollen productivity can be extracted from extended R-value analysis (Parsons and Prentice, 1981), using comparisons between pollen assemblages deposited into sedimentary contexts, such as moss polsters, and measurements of the present-day vegetation cover around the sampled location (the linear form of this relationship is sketched after this list). The vegetation survey method has been shown to have a profound effect on estimates of model parameters (Bunting and Hjelle, 2010); a standard method is therefore an essential pre-requisite for testing some of the key assumptions of pollen-based reconstruction of past vegetation, such as the assumption that relative pollen productivity is effectively constant in space and time within a region or biome.
3. This paper systematically reviews the assumptions and methodology underlying current models of pollen dispersal and deposition, and thereby identifies the key characteristics of an effective vegetation survey method for estimating relative pollen productivity in a range of landscape contexts.
4. It then presents the methodology used in a current research project, developed during a practitioner workshop. The method selected is pragmatic, designed to be replicable by different research groups, usable in a wide range of habitats, and requiring minimum effort to collect adequate data for model calibration rather than representing some ideal or required approach. Using this common methodology will allow project members to collect multiple measurements of relative pollen productivity for major plant taxa from several northern European locations in order to test the assumption of uniformity of these values within the climatic range of the main taxa recorded in pollen records from the region.
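For context, the extended R-value analysis mentioned in point 2 rests on a linear relation between pollen proportions and distance-weighted vegetation abundance of roughly the following form (our paraphrase of the Parsons and Prentice, 1981, submodel; the symbols are illustrative, not the project's notation):

\[
  p_{ik} \;=\; \alpha_i \, \psi_{ik} \;+\; \omega_i ,
\]

where \(p_{ik}\) is the pollen proportion of taxon \(i\) at site \(k\), \(\psi_{ik}\) the distance-weighted abundance of taxon \(i\) around the site, \(\alpha_i\) the relative pollen productivity being estimated, and \(\omega_i\) a background pollen term.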
Abstract:
BACKGROUND: The transtheoretical model has been successful in promoting health behavior change in general and clinical populations. However, there is little knowledge about the application of the transtheoretical model to explain physical activity behavior in individuals with non-cystic fibrosis bronchiectasis. The aim was to examine patterns of (1) physical activity and (2) mediators of behavior change (self-efficacy, decisional balance, and processes of change) across stages of change in individuals with non-cystic fibrosis bronchiectasis.
METHODS: Fifty-five subjects with non-cystic fibrosis bronchiectasis (mean age ± SD = 63 ± 10 y) had physical activity assessed over 7 d using an accelerometer. Each component of the transtheoretical model was assessed using validated questionnaires. Subjects were divided into groups depending on stage of change: Group 1 (pre-contemplation and contemplation; n = 10), Group 2 (preparation; n = 20), and Group 3 (action and maintenance; n = 25). Statistical analyses included one-way analysis of variance and Tukey-Kramer post hoc tests (see the sketch after this abstract).
RESULTS: Physical activity variables were significantly (P < .05) higher in Group 3 (action and maintenance) compared with Group 2 (preparation) and Group 1 (pre-contemplation and contemplation). For self-efficacy, there were no significant differences between groups for mean scores (P = .14). Decisional balance cons (barriers to being physically active) were significantly lower in Group 3 versus Group 2 (P = .032). For processes of change, substituting alternatives (substituting inactive options for active options) was significantly higher in Group 3 versus Group 1 (P = .01), and enlisting social support (seeking out social support to increase and maintain physical activity) was significantly lower in Group 3 versus Group 2 (P = .038).
CONCLUSIONS: The pattern of physical activity across stages of change is consistent with the theoretical predictions of the transtheoretical model. Constructs of the transtheoretical model that appear to be important at different stages of change include decisional balance cons, substituting alternatives, and enlisting social support. This study provides support to explore transtheoretical model-based physical activity interventions in individuals with non-cystic fibrosis bronchiectasis.
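A minimal sketch of the group comparison flagged in METHODS above (one-way ANOVA with a Tukey-style post hoc test), in Python with synthetic activity data rather than the study's measurements:

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# synthetic daily step counts for the three stage-of-change groups
g1 = rng.normal(4000, 1200, 10)   # pre-contemplation / contemplation
g2 = rng.normal(5000, 1200, 20)   # preparation
g3 = rng.normal(8000, 1200, 25)   # action / maintenance

F, p = f_oneway(g1, g2, g3)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

steps = np.concatenate([g1, g2, g3])
groups = ["G1"] * 10 + ["G2"] * 20 + ["G3"] * 25
print(pairwise_tukeyhsd(steps, groups, alpha=0.05))   # handles unequal n (Tukey-Kramer)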
Abstract:
This paper investigates the potential for using the windowed variance of the received signal strength to select from a set of predetermined channel models for a wireless ranging or localization system. An 868 MHz-based measurement system was used to characterize the received signal strength (RSS) of the off-body link formed between two wireless nodes attached to either side of a human thorax and six base stations situated in the local surroundings.
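A minimal sketch of computing a windowed RSS variance that could drive such a model-selection step, in Python; the window length, thresholds and model labels are illustrative assumptions, not values from the paper:

import numpy as np

def windowed_variance(rss_dbm, window=32):
    # sliding-window variance of an RSS trace (dBm), one value per window position
    x = np.asarray(rss_dbm, dtype=float)
    return np.lib.stride_tricks.sliding_window_view(x, window).var(axis=1)

def pick_channel_model(var_db2, low=2.0, high=8.0):
    # map the windowed variance to one of a few pre-characterised channel models
    if var_db2 < low:
        return "static line-of-sight model"
    if var_db2 < high:
        return "quasi-static body-shadowed model"
    return "dynamic non-line-of-sight model"

rss = -60.0 + 3.0 * np.random.default_rng(0).standard_normal(500)   # toy RSS trace
v = windowed_variance(rss)
print(pick_channel_model(v[-1]), f"(windowed variance {v[-1]:.1f} dB^2)")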
Abstract:
Beta diversity quantifies spatial and/or temporal variation in species composition. It comprises two distinct components, species replacement and nestedness, which derive from opposing ecological processes. Using Scotland as a case study and a β-diversity partitioning framework (a pairwise version is sketched after this abstract), we investigate temporal replacement and nestedness patterns of coastal grassland species over a 34-yr time period. We aim to 1) understand the influence of two potentially pivotal processes (climate and land-use changes) on landscape-scale (5 × 5 km) temporal replacement and nestedness patterns, and 2) investigate whether patterns from one β-diversity component can mask observable patterns in the other.
We summarised key aspects of climate-driven macro-ecological variation as measures of variance, long-term trends, between-year similarity and extremes, for three important climatic predictors (minimum temperature, water-balance and growing degree-days). Shifts in landscape-scale heterogeneity, a proxy of land-use change, were summarised as a spatial multiple-site dissimilarity measure. Together, these climatic and spatial predictors were used in a multi-model inference framework to gauge the relative contribution of each to temporal replacement and nestedness patterns.
Temporal β-diversity patterns were reasonably well explained by climate change but weakly explained by changes in landscape-scale heterogeneity. Climate was shown to have a greater influence on temporal nestedness than on replacement patterns over our study period, linking nestedness patterns arising from imbalanced gains and losses to climatic warming and extremes, respectively. Important climatic predictors (i.e. growing degree-days) of temporal β-diversity were also identified, and contrasting patterns between the two β-diversity components were revealed.
Results suggest climate influences plant species recruitment and establishment processes of Scotland's coastal grasslands, and while species extinctions take time, they are likely to be facilitated by climatic perturbations. Our findings also highlight the importance of distinguishing between different components of β-diversity, disentangling contrasting patterns that can mask one another.
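A minimal sketch of the pairwise Sørensen-based partitioning referred to in the first paragraph of this abstract, in Python; the presence/absence vectors are illustrative, and the formulas follow the usual Baselga-style decomposition rather than the study's exact multi-site implementation:

import numpy as np

def partition_beta(site1, site2):
    # pairwise Sorensen dissimilarity split into replacement (turnover)
    # and nestedness-resultant components for two presence/absence vectors
    s1, s2 = np.asarray(site1, bool), np.asarray(site2, bool)
    a = np.sum(s1 & s2)     # shared species
    b = np.sum(s1 & ~s2)    # species unique to the first sample
    c = np.sum(~s1 & s2)    # species unique to the second sample
    beta_sor = (b + c) / (2 * a + b + c)      # total dissimilarity
    beta_sim = min(b, c) / (a + min(b, c))    # replacement component
    beta_sne = beta_sor - beta_sim            # nestedness component
    return beta_sor, beta_sim, beta_sne

# e.g. species lists for one 5 x 5 km square at the start and end of the period
t1 = [1, 1, 1, 1, 1, 0, 0, 0]
t2 = [1, 1, 1, 0, 0, 0, 1, 0]
print(partition_beta(t1, t2))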
Abstract:
Research on cluster analysis for categorical data continues to develop, with new clustering algorithms being proposed. However, in this context, the determination of the number of clusters is rarely addressed. We propose a new approach in which clustering and the estimation of the number of clusters are done simultaneously for categorical data. We assume that the data originate from a finite mixture of multinomial distributions and use a minimum message length (MML) criterion to select the number of clusters (Wallace and Bolton, 1986). For this purpose, we implement an EM-type algorithm (Silvestre et al., 2008) based on the approach of Figueiredo and Jain (2002). The novelty of the approach rests on the integration of model estimation and selection of the number of clusters in a single algorithm, rather than selecting this number from a set of pre-estimated candidate models. The performance of our approach is compared with the Bayesian Information Criterion (BIC) (Schwarz, 1978) and the Integrated Completed Likelihood (ICL) (Biernacki et al., 2000) using synthetic data. The results illustrate the capacity of the proposed algorithm to attain the true number of clusters while outperforming BIC and ICL, since it is faster, which is especially relevant when dealing with large data sets.
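A minimal sketch, not the authors' algorithm: EM for a mixture of independent multinomials (a latent-class model) in Python, in which components whose weight collapses are annihilated during estimation. The weight threshold below is a crude stand-in for the MML-driven selection that the abstract describes as integrated into the estimation:

import numpy as np

def multinomial_mixture_em(X, k_max=10, w_min=0.02, n_iter=200, seed=0):
    # X: (n, d) integer-coded categorical data
    rng = np.random.default_rng(seed)
    n, d = X.shape
    n_cat = X.max(axis=0) + 1
    k = k_max
    w = np.full(k, 1.0 / k)                                   # mixing weights
    theta = [[rng.dirichlet(np.ones(c)) for c in n_cat] for _ in range(k)]
    for _ in range(n_iter):
        # E-step: responsibilities, computed in the log domain for stability
        logp = np.tile(np.log(w + 1e-12), (n, 1))
        for m in range(k):
            for j in range(d):
                logp[:, m] += np.log(theta[m][j][X[:, j]] + 1e-12)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step, annihilating near-empty components (stand-in for MML selection)
        w = r.sum(axis=0) / n
        keep = w > w_min
        if keep.sum() < k:
            k = int(keep.sum())
            r, w = r[:, keep], w[keep] / w[keep].sum()
            theta = [t for t, kp in zip(theta, keep) if kp]
        for m in range(k):
            for j in range(d):
                counts = np.bincount(X[:, j], weights=r[:, m], minlength=int(n_cat[j]))
                theta[m][j] = counts / counts.sum()
    return k, w, theta

# toy data: two latent classes over three categorical variables
rng = np.random.default_rng(1)
X = np.vstack([rng.integers(0, 2, (100, 3)), rng.integers(1, 3, (100, 3))])
print(multinomial_mixture_em(X, k_max=6)[0], "clusters retained")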
Abstract:
We consider a Bertrand duopoly model with unknown costs. Each firm's aim is to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium. The choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies, and uses one of them according to a certain probability distribution. The use of either one or the other technology affects the unitary production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, with the effect of the rival's expected production costs dominated by the effect of the firm's own expected production costs.
Abstract:
BACKGROUND: Mesenchymal stem/stromal cells (MSC) have unique properties favorable to their use in clinical practice and have been studied for cardiac repair. However, these cells are larger than coronary microvessels and there is controversy about the risk of embolization and microinfarctions, which could jeopardize the safety and efficacy of the intracoronary route for their delivery. The index of microcirculatory resistance (IMR) is an invasive method for quantitatively assessing the status of the coronary microcirculation. OBJECTIVES: To examine heart microcirculation after intracoronary injection of mesenchymal stem/stromal cells with the index of microcirculatory resistance. METHODS: Healthy swine were randomized to receive by intracoronary route either 30×10^6 MSC or the same solution with no cells (1% human albumin/PBS) (placebo). Blinded operators took coronary pressure and flow measurements prior to intracoronary infusion and at 5 and 30 minutes post-delivery. Coronary flow reserve (CFR) and the IMR were compared between groups. RESULTS: CFR and IMR measurements were obtained with a variance within the 3 transit time measurements of 6% at rest and 11% at maximal hyperemia. After intracoronary infusion there were no significant differences in CFR. The IMR was significantly higher in MSC-injected animals (at 30 minutes, 14.2 U vs. 8.8 U, p = 0.02) and intragroup analysis showed a significant increase of 112% from baseline to 30 minutes after cell infusion, although no electrocardiographic changes or clinical deterioration were noted. CONCLUSION: Overall, this study provides definitive evidence of microcirculatory disruption upon intracoronary administration of mesenchymal stem/stromal cells, in a large animal model closely resembling human cardiac physiology, function and anatomy.
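For reference, the thermodilution-based indices compared in this study are conventionally defined roughly as follows (standard definitions, not study-specific quantities), with \(P_d\) the distal coronary pressure and \(T_{mn}\) the mean transit time:

\[
  \mathrm{IMR} \;=\; P_d \times T_{mn,\,\mathrm{hyperaemia}},
  \qquad
  \mathrm{CFR} \;=\; \frac{T_{mn,\,\mathrm{rest}}}{T_{mn,\,\mathrm{hyperaemia}}}.
\]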
Abstract:
The future of health care delivery is becoming more citizen-centred, as today’s user is more active, better informed and more demanding. The European Commission is promoting online health services and, therefore, member states will need to boost the deployment and use of online services. This makes e-health adoption an important field to be studied and understood. This study applied the extended unified theory of acceptance and use of technology (UTAUT2) to explain patients’ individual adoption of e-health. An online questionnaire was administered in Portugal using largely the same instrument used in UTAUT2, adapted to the e-health context. We collected 386 valid answers. Performance expectancy, effort expectancy, social influence, and habit had the most significant explanatory power over behavioural intention, while habit and behavioural intention had the most significant explanatory power over technology use. The model explained 52% of the variance in behavioural intention and 32% of the variance in technology use. Our research helps to understand the technology characteristics desired of e-health. By testing an information technology acceptance model, we are able to determine what is most valued by patients when deciding whether or not to adopt e-health systems.
Abstract:
years 8 months) and 24 older (M = 7 years 4 months) children. A Monitoring Process Model (MPM) was developed and tested in order to ascertain at which component process of the MPM age differences would emerge. The MPM had four components: (1) assessment; (2) evaluation; (3) planning; and (4) behavioural control. The MPM was assessed directly using a referential communication task in which the children were asked to make a series of five Lego buildings (a baseline condition and one building for each MPM component). Children listened to instructions from one experimenter while a second experimenter in the room (a confederate) interjected varying levels of verbal feedback in order to assist the children and control the component of the MPM. This design allowed us to determine at which "stage" of processing children would most likely have difficulty monitoring themselves in this social-cognitive task. Developmental differences were observed for the evaluation, planning and behavioural control components, suggesting that older children were able to be more successful with the more explicit metacomponents. Interestingly, however, there was no age difference in terms of Lego task success in the baseline condition, suggesting that without the intervention of the confederate younger children monitored the task about as well as older children. This pattern of results indicates that the younger children were disrupted by the feedback rather than helped. On the other hand, the older children were able to incorporate the feedback offered by the confederate into a plan of action. Another aim of this study was to assess similar processing components to those investigated by the MPM Lego task in a more naturalistic observation. Together, the use of the Lego task (a social-cognitive task) and the naturalistic social interaction allowed for the appraisal of cross-domain continuities and discontinuities in monitoring behaviours. In this vein, analyses were undertaken in order to ascertain whether or not successful performance in the MPM Lego task would predict cross-domain competence in the more naturalistic social interchange. Indeed, success in the two latter components of the MPM (planning and behavioural control) was related to overall competence in the naturalistic task. However, this cross-domain prediction was not evident for all levels of the naturalistic interchange, suggesting that the nature of the feedback a child receives is an important determinant of response competency. Individual difference measures reflecting the children's general cognitive capacity (Working Memory and Digit Span) and verbal ability (vocabulary) were also taken in an effort to account for more variance in the prediction of task success. However, these individual difference measures did not serve to enhance the prediction of task performance in either the Lego task or the naturalistic task. Similarly, parental responses to questionnaires pertaining to their child's temperament and social experience also failed to increase prediction of task performance. On-line measures of the children's engagement, positive affect and anxiety also failed to predict competence ratings.
Abstract:
We examine the relationship between the risk premium on the S&P 500 index return and its conditional variance. We use the SMEGARCH (Semiparametric-Mean EGARCH) model, in which the conditional variance process is EGARCH while the conditional mean is an arbitrary function of the conditional variance. For monthly S&P 500 excess returns, the relationship between the two moments that we uncover is nonlinear and nonmonotonic. Moreover, we find considerable persistence in the conditional variance as well as a leverage effect, as documented by others. Finally, the shape of these relationships appears to be relatively stable over time.
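Schematically, and as an illustration of the model class rather than the paper's exact specification, the excess return and its conditional variance can be written as

\[
  r_t \;=\; g\!\left(\sigma_t^{2}\right) + \varepsilon_t,
  \qquad \varepsilon_t = \sigma_t z_t,\;\; z_t \sim \mathrm{i.i.d.}(0,1),
\]
\[
  \log \sigma_t^{2} \;=\; \omega
    + \beta \log \sigma_{t-1}^{2}
    + \alpha\!\left(\lvert z_{t-1}\rvert - \mathbb{E}\lvert z_{t-1}\rvert\right)
    + \gamma\, z_{t-1},
\]

where the EGARCH(1,1) recursion governs the variance and the mean function \(g(\cdot)\) is left unspecified and estimated semiparametrically.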
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken’s mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is rejected less frequently once the possibility of non-normal errors is allowed for.
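For reference, the MLR framework in which these tests operate can be sketched as follows (standard CAPM notation rather than the paper's exact formulation): the excess returns of N assets are regressed on the excess market return,

\[
  r_{it} \;=\; a_i + b_i\, r_{Mt} + u_{it},
  \qquad i = 1,\dots,N,\;\; t = 1,\dots,T,
\]

and mean-variance efficiency of the market portfolio corresponds to the joint hypothesis \(H_0: a_i = 0\) for all \(i\), the hypothesis tested by Gibbons-Ross-Shanken-type statistics.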
Abstract:
"Thèse présentée à la Faculté des études supérieures en vue de l'obtention du grade de Docteur en droit (L.L.D)"