990 results for LIKELIHOOD APPROACH
Abstract:
2010 Mathematics Subject Classification: 62J99.
Abstract:
Recent intervention efforts to promote positive identity in troubled adolescents have begun to draw on the potential for integrating the self-construction and self-discovery perspectives in conceptualizing identity processes, as well as for integrating quantitative and qualitative data-analytic strategies. This study reports an investigation of the Changing Lives Program (CLP) using an Outcome Mediation (OM) evaluation model, an integrated model for evaluating targets of intervention, framed theoretically by a Self-Transformative Model of Identity Development (STM), a proposed integration of self-discovery and self-construction identity processes. The study also used a Relational Data Analysis (RDA) integration of quantitative and qualitative analysis strategies and a structural equation modeling (SEM) approach to construct and evaluate the hypothesized OM/STM model. The CLP is a community-supported positive youth development intervention targeting multi-problem youth in alternative high schools in the Miami-Dade County Public Schools (M-DCPS). The 259 participants for this study were drawn from the CLP's archival data file. The model evaluated in this study used three indices of core identity processes: (1) personal expressiveness, (2) identity conflict resolution, and (3) informational identity style, conceptualized as mediators of the effects of participation in the CLP on change in two qualitative outcome indices of participants' sense of self and identity. Findings indicated that the model fit the data (χ²(10) = 3.638, p = .96; RMSEA = .00; CFI = 1.00; WRMR = .299). The pattern of findings supported the use of the STM in conceptualizing identity processes and provided support for the OM design. The findings also suggested the need for methods capable of detecting and rendering unique, sample-specific free-response data, to increase the likelihood of identifying emergent core developmental research concepts and constructs in studies of intervention/developmental change over time in ways not possible using fixed-response methods alone.
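As a rough illustration of the outcome-mediation logic described above, the sketch below estimates an indirect (mediated) effect with two ordinary regressions and the product-of-coefficients (Sobel) approximation. The variable names and simulated data are placeholders, and this is not the authors' SEM/WLSMV estimation of the OM/STM model.

```python
# Minimal sketch of an outcome-mediation (indirect-effect) check using two
# regressions and the product-of-coefficients estimate. Variable names
# (treatment, mediator, outcome) are hypothetical placeholders, not the CLP
# study's measures, and this is not the authors' SEM estimator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 259
treatment = rng.integers(0, 2, n)                  # participation indicator
mediator = 0.5 * treatment + rng.normal(size=n)    # e.g., identity-process index
outcome = 0.7 * mediator + rng.normal(size=n)      # e.g., change in sense of self

# Path a: treatment -> mediator
ma = sm.OLS(mediator, sm.add_constant(treatment)).fit()
# Path b: mediator -> outcome, controlling for treatment
X = sm.add_constant(np.column_stack([mediator, treatment]))
mb = sm.OLS(outcome, X).fit()

a, b = ma.params[1], mb.params[1]
se_a, se_b = ma.bse[1], mb.bse[1]
indirect = a * b
sobel_se = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)  # Sobel approximation
print(f"indirect effect = {indirect:.3f} (Sobel z = {indirect / sobel_se:.2f})")
```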
Abstract:
The purpose of this mixed-methods study was to understand physics Learning Assistants' (LAs) views on reflective teaching, expertise in teaching, and the LA program teaching experience, and to determine whether those views predicted the level of reflection evident in their writing. Interviews were conducted in Phase One, Q methodology was used in Phase Two, and the level of reflection in participants' writing was assessed in Phase Three using a rubric based on Hatton and Smith's (1995) "Criteria for the Recognition of Evidence for Different Types of Reflective Writing". Interview analysis revealed varying perspectives on content knowledge, pedagogical knowledge, and experience in relation to expertise in teaching. Participants reported that they engaged in reflection on their teaching, believed reflection helps teachers improve, and found peer reflection beneficial. Participants believed teaching experience in the LA program provided preparation for teaching, but that more preparation was needed to teach. Three typologies emerged in Phase Two. Type One LAs found participation in the LA program rewarding and believed that expertise in teaching does not require expertise in content or pedagogy but develops over time through reflection. Type Two LAs valued reflection but not writing reflections, and felt the LA program teaching experience helped them decide on non-teaching careers and confront gaps in their physics knowledge. Type Three LAs valued reflection, believed expertise in content and pedagogy is necessary for expert teaching, and felt the LA program teaching experience increased their likelihood of becoming teachers but did not prepare them for teaching. Writing assignments submitted in Phase Three were categorized as 19% descriptive writing, 60% descriptive reflection, and 21% dialogic reflection; no assignments were categorized as critical reflection. Using ordinal logistic regression, the typologies that emerged in Phase Two were not found to predict the level of reflection evident in the writing assignments. In conclusion, the viewpoints of physics LAs were revealed, typologies among them were identified, and their writing gave evidence of their ability to reflect on teaching. These findings may benefit faculty and staff in the LA program by helping them better understand the views of physics LAs and how to assess their various forms of reflection.
Abstract:
In longitudinal data analysis, our primary interest is in the regression parameters for the marginal expectations of the longitudinal responses; the longitudinal correlation parameters are of secondary interest. The joint likelihood function for longitudinal data is challenging to work with, particularly for correlated discrete outcome data. Marginal modeling approaches such as generalized estimating equations (GEEs) have therefore received much attention in the context of longitudinal regression. These methods are based on estimates of the first two moments of the data and a working correlation structure, with confidence regions and hypothesis tests based on asymptotic normality. The methods are sensitive to misspecification of the variance function and the working correlation structure: under such misspecification, the estimates can be inefficient and inconsistent, and inference may give incorrect results. To overcome this problem, we propose an empirical likelihood (EL) procedure based on a set of estimating equations for the parameter of interest and discuss its characteristics and asymptotic properties. We also provide an algorithm, based on EL principles, for estimating the regression parameters and constructing a confidence region for the parameter of interest. We extend our approach to variable selection for high-dimensional longitudinal data with many covariates, where it is necessary to identify a submodel that adequately represents the data; including redundant variables may reduce the accuracy and efficiency of inference. We propose a penalized empirical likelihood (PEL) variable selection procedure based on GEEs, in which variable selection and estimation of the coefficients are carried out simultaneously. We discuss its characteristics and asymptotic properties, and present an algorithm for optimizing the PEL. Simulation studies show that when the model assumptions are correct, our method performs as well as existing methods, and when the model is misspecified, it has clear advantages. We have applied the method to two case examples.
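A minimal sketch of the empirical likelihood machinery for a single mean-type estimating equation is shown below, using the standard dual (Lagrange-multiplier) form; it is meant only to illustrate how an EL profile is computed and is not the authors' GEE-based procedure for longitudinal data.

```python
# Minimal sketch of an empirical-likelihood (EL) profile for a scalar mean
# parameter, using the standard dual form
#   w_i = 1 / (n * (1 + lam * g_i(theta))),  with  sum_i g_i/(1 + lam*g_i) = 0,
# where g_i(theta) = x_i - theta. This illustrates the EL machinery only; it is
# not the authors' GEE-based estimating equations for longitudinal data.
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, theta):
    g = x - theta                              # estimating-function values
    # Solve sum(g / (1 + lam*g)) = 0 for lam on the interval keeping weights > 0
    lo = (-1.0 + 1e-8) / g.max()
    hi = (-1.0 + 1e-8) / g.min()
    lam = brentq(lambda l: np.sum(g / (1.0 + l * g)), lo + 1e-10, hi - 1e-10)
    return -np.sum(np.log1p(lam * g))          # log EL ratio (<= 0)

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=200)
for theta in (1.8, 2.0, np.mean(x)):
    # -2 * log-ratio is asymptotically chi-square(1) under H0: mean = theta
    print(theta, -2.0 * el_log_ratio(x, theta))
```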
Abstract:
Estimates of HIV prevalence are important for policy in order to establish the health status of a country's population and to evaluate the effectiveness of population-based interventions and campaigns. However, participation rates in testing for surveillance conducted as part of household surveys, on which many of these estimates are based, can be low. HIV positive individuals may be less likely to participate because they fear disclosure, in which case estimates obtained using conventional approaches to deal with missing data, such as imputation-based methods, will be biased. We develop a Heckman-type simultaneous equation approach which accounts for non-ignorable selection, but unlike previous implementations, allows for spatial dependence and does not impose a homogeneous selection process on all respondents. In addition, our framework addresses the issue of separation, where for instance some factors are severely unbalanced and highly predictive of the response, which would ordinarily prevent model convergence. Estimation is carried out within a penalized likelihood framework where smoothing is achieved using a parametrization of the smoothing criterion which makes estimation more stable and efficient. We provide the software for straightforward implementation of the proposed approach, and apply our methodology to estimating national and sub-national HIV prevalence in Swaziland, Zimbabwe and Zambia.
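The selection mechanism described above can be illustrated with a bare-bones bivariate-probit selection likelihood (a binary outcome observed only for respondents who agree to test, with correlated errors). The sketch below omits the paper's spatial terms, penalized smoothing, and separation handling; the covariates and data are simulated placeholders.

```python
# Minimal sketch of the likelihood for a Heckman-type selection model with a
# binary outcome (bivariate probit with sample selection): the outcome y (e.g.
# HIV status) is observed only when the selection indicator s = 1 (consented
# to testing), and the errors of the two probit equations are correlated (rho).
# This omits the paper's spatial smooths and penalized-likelihood machinery.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize

def neg_loglik(params, Xs, Xo, s, y):
    ks, ko = Xs.shape[1], Xo.shape[1]
    bs, bo = params[:ks], params[ks:ks + ko]
    rho = np.tanh(params[-1])                 # keep correlation in (-1, 1)
    a, b = Xs @ bs, Xo @ bo
    cov = [[1.0, rho], [rho, 1.0]]
    ll = 0.0
    for ai, bi, si, yi in zip(a, b, s, y):
        if si == 0:                           # not selected: P(s = 0)
            ll += norm.logcdf(-ai)
        elif yi == 1:                         # selected, positive: P(s=1, y=1)
            p = multivariate_normal.cdf([ai, bi], mean=[0.0, 0.0], cov=cov)
            ll += np.log(max(p, 1e-12))
        else:                                 # selected, negative: P(s=1, y=0)
            p = norm.cdf(ai) - multivariate_normal.cdf([ai, bi], mean=[0.0, 0.0], cov=cov)
            ll += np.log(max(p, 1e-12))
    return -ll

# Simulated illustration with non-ignorable selection (rho != 0); slow but simple
rng = np.random.default_rng(2)
n = 200
Xs = np.column_stack([np.ones(n), rng.normal(size=n)])   # selection covariates
Xo = np.column_stack([np.ones(n), rng.normal(size=n)])   # outcome covariates
e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
s = (Xs @ [0.5, 1.0] + e[:, 0] > 0).astype(int)
y = (Xo @ [-1.0, 0.8] + e[:, 1] > 0).astype(int)
y = np.where(s == 1, y, 0)                                # y unobserved when s = 0

res = minimize(neg_loglik, np.zeros(5), args=(Xs, Xo, s, y), method="BFGS")
print(res.x[:4], np.tanh(res.x[-1]))                      # betas and estimated rho
```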
Abstract:
The hierarchical forest planning process currently in place on public lands risks failing at two levels. At the upper level, the current process does not provide sufficient evidence of the sustainability of the current harvest level. At the lower level, the current process does not support the realization of the full value-creation potential of the forest resource, sometimes unnecessarily constraining short-term harvest planning. These failures are attributable to assumptions implicit in the timber supply (allowable cut) optimization model, which may explain why the problem is not well documented in the literature. We use agency theory to model the hierarchical forest planning process on public lands. We develop a two-stage iterative simulation framework to estimate the long-term effect of the interaction between the state and the fibre consumer, allowing us to identify conditions that can lead to stock-outs (wood supply failures). We then propose an improved formulation of the timber supply optimization model. The classic formulation (i.e., maximization of the sustained fibre yield) does not take into account that the industrial fibre consumer seeks to maximize its profit, but instead assumes that the entire fibre supply is consumed in each period, regardless of its value-creation potential. We extend the classic formulation to anticipate the fibre consumer's behaviour, thereby increasing the likelihood that the fibre supply will be fully consumed and restoring the validity of the total-consumption assumption implicit in the optimization model. We model the principal-agent relationship between the government and industry using a bilevel formulation of the optimization model, in which the upper level represents the timber supply determination process (the government's responsibility) and the lower level represents the fibre consumption process (industry's responsibility). We show that the bilevel formulation can mitigate the risk of stock-outs, thereby improving the credibility of the hierarchical forest planning process. Together, the bilevel timber supply optimization model and the methodology we developed to solve it to optimality represent an alternative to the methods currently in use. Our bilevel model and the iterative simulation framework are a step forward in value-driven forest planning technology. Explicitly integrating industrial objectives and constraints into the forest planning process, starting with the determination of the allowable cut, should foster greater collaboration between government and industry, making it possible to realize the full value-creation potential of the forest resource.
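A toy, one-period illustration of the bilevel idea, in which the upper level anticipates the lower level's profit-maximizing consumption before fixing the offered volume, is sketched below; the prices, costs, and capacity are hypothetical, and the thesis' actual model is a multi-period bilevel timber-supply formulation rather than this enumeration.

```python
# Toy illustration of the bilevel idea: the leader (government) sets an offered
# harvest volume, the follower (mill) consumes only the volume that maximizes
# its profit, and the leader anticipates that response when choosing the offer.
# All numbers are hypothetical placeholders.

def follower_consumption(offer, price=60.0, unit_cost=40.0, capacity=80.0):
    """Mill consumes up to its capacity while the marginal profit is positive."""
    return min(offer, capacity) if price > unit_cost else 0.0

def leader_objective(offer, consumed):
    """Leader wants the offered volume to be fully consumed (no unsold fibre)."""
    return consumed - 10.0 * max(offer - consumed, 0.0)   # penalize unconsumed offer

best = max(
    (leader_objective(offer, follower_consumption(offer)), offer)
    for offer in range(0, 201, 10)
)
print("best offered volume:", best[1], "consumed:", follower_consumption(best[1]))
```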
Abstract:
Roads represent a source of mortality, through animal-vehicle collisions, that threatens the long-term viability of populations. The risk of road mortality for each species depends on the characteristics of the roads, on the species' sensitivity to roads, and on its bioecological and life-history traits. In this study we assess the importance of climatic parameters (temperature and precipitation), together with traffic and life-history traits, for road-kills, and we examine the role of drought in barn owl population viability, which is also affected by road mortality, under three scenarios: high mobility, high population density, and the combination of the two (mixed) (Manuscript). For the first objective we correlated the parameters (climate, traffic, and life-history traits) with road-kills and used the most strongly correlated variables to build a predictive generalized linear mixed model (GLMM) of their influence. Using a population model, we evaluated barn owl population viability under the three scenarios. The model indicated that precipitation, traffic, and dispersal have a negative relationship with road-kills, although the relationships were not significant. The scenarios showed different results: the high mobility scenario showed the greatest population depletion, more fluctuations over time, and the greatest risk of extinction; the high population density scenario showed a more stable population with a lower risk of extinction; and the mixed scenario showed results similar to the first scenario. Climate seems to play an indirect role in barn owl road-kills: it may influence prey availability, which in turn influences barn owl reproductive success and activity. The high mobility scenario also showed a greater negative impact on population viability, which may reduce the populations' resilience to other stochastic events. Future research should take climate into account and consider how it may influence species' life cycles and activity periods, for a more complete treatment of road-kills, and should inform mitigation decisions, which might include improving prey habitat quality.
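A minimal sketch of the kind of stochastic projection used to compare extinction risk across scenarios appears below; the growth rates, road-mortality levels, and scenario parameterizations are hypothetical placeholders, not the values estimated in this study.

```python
# Minimal sketch of a stochastic population projection used to compare
# extinction risk across scenarios. Growth rates, road-mortality rates and the
# scenario settings below are hypothetical placeholders, not the parameter
# values used in the barn owl study.
import numpy as np

def extinction_risk(growth_mean, road_mortality, n0=200, years=50, runs=1000, seed=3):
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(runs):
        n = n0
        for _ in range(years):
            lam = rng.normal(growth_mean, 0.15)        # environmental stochasticity
            n = rng.poisson(max(n * lam * (1.0 - road_mortality), 0.0))
            if n == 0:
                extinct += 1
                break
    return extinct / runs

scenarios = {
    "high mobility":           dict(growth_mean=1.02, road_mortality=0.10),
    "high population density": dict(growth_mean=1.05, road_mortality=0.05),
    "mixed":                   dict(growth_mean=1.02, road_mortality=0.08),
}
for name, pars in scenarios.items():
    print(f"{name}: P(extinction in 50 y) ~ {extinction_risk(**pars):.2f}")
```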
Abstract:
MATCH (Multidisciplinary Assessment of Technology Centre for Healthcare) is a new collaboration in the UK that aims to support the healthcare sector by creating methods to assess the value of medical devices from concept through to mature product. A major aim of MATCH is to encourage the inclusion of the user throughout the product lifecycle in order to achieve devices that truly meet the requirements of their users. A review of the published literature indicates that user requirements are mainly collected during the design and evaluation stage of the product lifecycle whilst other areas, including the concept stage, have less user involvement. Complementing the literature review is an in-depth consultation with the medical device industry, which has identified a number of barriers encountered by companies when attempting to capture user requirements. These will be addressed by a number of case study projects, performed in collaboration with our industrial partners, that will examine the application and utility of different approaches to collecting and analysing data on user requirements. MATCH is focused on providing advice to device developers on how to select and apply methods that have maximum theoretical strength, practical application, cost-effectiveness and likelihood of wide sector acceptance. Feedback will be sought in order to ensure that the needs of the diverse medical device sector are met.
Abstract:
This report discusses analytic second-order bias-correction techniques for the maximum likelihood estimates (MLEs, for short) of the unknown parameters of distributions used in quality and reliability analysis. It is well known that MLEs are widely used to estimate the unknown parameters of probability distributions because of their many desirable properties; for example, MLEs are asymptotically unbiased, consistent, and asymptotically normal. However, many of these properties depend on extremely large sample sizes, and properties such as unbiasedness may not hold for small or even moderate sample sizes, which are more common in real data applications. Therefore, bias-corrected techniques for the MLEs are desirable in practice, especially when the sample size is small. Two commonly used techniques to reduce the bias of MLEs are the 'preventive' and 'corrective' approaches. Both can reduce the bias of the MLEs to order O(n^-2), but the 'preventive' approach does not have an explicit closed-form expression; consequently, we mainly focus on the 'corrective' approach in this report. To illustrate the importance of bias correction in practice, we apply the bias-corrected method to two popular lifetime distributions: the inverse Lindley distribution and the weighted Lindley distribution. Numerical studies based on these two distributions show that the bias-corrected technique considered here is highly recommended over the commonly used estimators without bias correction. Therefore, special attention should be paid when estimating the unknown parameters of probability distributions from small or moderate sample sizes.
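To make the 'corrective' idea concrete, the sketch below uses the exponential distribution, where the first-order (Cox-Snell type) correction has a simple closed form; the report's actual applications are the inverse Lindley and weighted Lindley distributions, not the exponential.

```python
# Illustration of the 'corrective' (Cox-Snell type) bias correction on a simple
# case where it is available in closed form: for X ~ Exponential(rate lambda),
# the MLE is 1/mean(x) with first-order bias lambda/n, so the corrected
# estimator is (1 - 1/n) / mean(x). This is only an illustration, not the
# report's inverse Lindley or weighted Lindley corrections.
import numpy as np

rng = np.random.default_rng(4)
true_rate, n, reps = 2.0, 15, 20000

mle = np.empty(reps)
for r in range(reps):
    x = rng.exponential(scale=1.0 / true_rate, size=n)
    mle[r] = 1.0 / x.mean()

corrected = (1.0 - 1.0 / n) * mle             # bias-corrected MLE
print("MLE bias:       ", mle.mean() - true_rate)        # ~ true_rate/(n-1) = 0.143
print("corrected bias: ", corrected.mean() - true_rate)  # ~ 0
```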
Abstract:
Mobile and wireless networks have long exploited mobility predictions, focused on predicting the future location of individual users, to perform more efficient network resource management. In this paper, we present a new approach in which predictions are provided as a probability distribution over a set of possible future locations. This approach gives wireless services a greater amount of knowledge and enables them to perform more effectively. We present a framework for the evaluation of this new type of predictor and develop two new predictors, HEM and G-Stat. We evaluate our predictors' accuracy in predicting future cells for mobile users using two large geolocation data sets, from MDC [11], [12] and Crawdad [13]. We show that our predictors can predict with an average inaccuracy as low as 2.2% in certain scenarios.
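A minimal sketch of a predictor that outputs a probability distribution over candidate future cells, rather than a single most-likely cell, is given below using a first-order Markov transition model; it illustrates only the style of output and is not the HEM or G-Stat predictor proposed in the paper.

```python
# Minimal sketch of a predictor that returns a probability distribution over
# future cells instead of a single "most likely" cell: a first-order Markov
# model over observed cell transitions. This is not the paper's HEM or G-Stat.
from collections import Counter, defaultdict

def train(trajectories):
    counts = defaultdict(Counter)
    for traj in trajectories:
        for cur, nxt in zip(traj, traj[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_distribution(counts, current_cell):
    nxt = counts.get(current_cell)
    if not nxt:
        return {}                              # unseen cell: no prediction
    total = sum(nxt.values())
    return {cell: c / total for cell, c in nxt.items()}

# Toy trajectories of cell IDs (e.g., extracted from geolocation traces)
trajs = [["A", "B", "C", "B"], ["A", "B", "B", "C"], ["C", "B", "A"]]
model = train(trajs)
print(predict_distribution(model, "B"))        # {'C': 0.5, 'B': 0.25, 'A': 0.25}
```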
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time-delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because the same noises occur multiple times in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting for the antenna's motion. However, none of the observables survived the flexing of the arms, in that they no longer led to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data that are free from the laser frequency noises. This result led to the idea that the principal components may actually be time-delay interferometry observables, since they produce the same outcome, that is, data free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to analysis using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed, unequal arm lengths and stationary noises with equal variances for each noise type. Results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths, and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix; from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that take advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices are block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
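A toy numerical illustration of the principal-component idea, with a large common noise standing in for laser frequency noise, is sketched below; it does not model the LISA response, time delays, or the Sagnac/AET observable construction.

```python
# Toy numerical illustration of the principal-component idea described above:
# several readings share the same large common noise (standing in for laser
# frequency noise) plus small independent noise. Eigendecomposition of their
# covariance separates large-eigenvalue directions (common-noise dominated)
# from small-eigenvalue directions, and projecting the data onto the latter
# gives combinations essentially free of the common noise. This is not the
# actual LISA response, delays, or TDI observable construction.
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_channels = 5000, 4
laser = 100.0 * rng.normal(size=n_samples)                # large common noise
data = np.empty((n_samples, n_channels))
for k in range(n_channels):
    data[:, k] = laser + rng.normal(size=n_samples)       # + small independent noise

cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                    # ascending eigenvalues
print("eigenvalues:", np.round(eigvals, 2))               # three small, one huge

# Project onto the eigenvectors with small eigenvalues: the common noise cancels
quiet = data @ eigvecs[:, :n_channels - 1]
print("residual std of 'quiet' combinations:", quiet.std(axis=0).round(2))
```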
Abstract:
The objective of this work was to apply fuzzy majority multicriteria group decision-making to determine risk areas for foot-and-mouth disease (FMD) introduction along the border between Brazil and Paraguay. The study was conducted in three municipalities in the state of Mato Grosso do Sul, Brazil, located along the border with Paraguay. Four scenarios were built, applying the following linguistic quantifiers to describe risk factors: few, half, many, and most. The three criteria considered most likely to affect vulnerability to the introduction of FMD, according to experts' opinions, were: the introduction of animals onto the farm, the distance from the border, and the type of property settlement. The resulting maps show a strong spatial heterogeneity in the risk of FMD introduction. The methodology used provides a new approach that can help policy makers in the control and eradication of FMD.
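A minimal sketch of fuzzy-majority aggregation with an ordered weighted averaging (OWA) operator, whose weights are derived from a relative linguistic quantifier, is shown below; the quantifier breakpoints and criterion scores are illustrative placeholders rather than the study's expert-elicited values or its GIS implementation.

```python
# Minimal sketch of fuzzy-majority aggregation via an OWA operator: weights are
# derived from a relative linguistic quantifier Q and applied to the criterion
# scores sorted in descending order (Yager's quantifier-guided weights).
import numpy as np

# Illustrative (a, b) breakpoints for piecewise-linear quantifiers
# Q(r) = clip((r - a)/(b - a), 0, 1); the study's membership functions may differ.
QUANTIFIERS = {
    "few":  (0.0, 0.3),
    "half": (0.25, 0.75),
    "many": (0.4, 0.9),
    "most": (0.3, 0.8),
}

def owa_weights(n, a, b):
    """OWA weights w_i = Q(i/n) - Q((i-1)/n)."""
    q = lambda r: np.clip((r - a) / (b - a), 0.0, 1.0)
    i = np.arange(1, n + 1)
    return q(i / n) - q((i - 1) / n)

def owa(scores, a, b):
    """Aggregate criterion scores (0-1) under the chosen fuzzy majority."""
    w = owa_weights(len(scores), a, b)
    return float(np.sort(scores)[::-1] @ w)

# Example: normalized risk scores of one map cell on the three criteria
# (introduction of animals, distance from the border, type of settlement)
scores = np.array([0.9, 0.4, 0.6])
for name, (a, b) in QUANTIFIERS.items():
    print(f"{name:>4}: aggregated risk = {owa(scores, a, b):.2f}")
```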
Abstract:
Xanthomonas citri subsp. citri (X. citri) is the causative agent of citrus canker, a disease that affects several citrus plants in Brazil and across the world. Although many studies have demonstrated the importance of genes for infection and pathogenesis in this bacterium, there are no data related to phosphate uptake and assimilation pathways. To identify the proteins involved in the phosphate response, we performed a proteomic analysis of X. citri extracts after growth in three culture media with different phosphate concentrations. Using mass spectrometry and bioinformatics analysis, we showed that X. citri has conserved orthologs of the Pho regulon genes of Escherichia coli, including the two-component system PhoR/PhoB, the ATP-binding cassette (ABC) transporter Pst for phosphate uptake, and the alkaline phosphatase PhoA. Analysis performed under phosphate starvation provided evidence of the relevance of the Pst system for phosphate uptake, as well as of both periplasmic binding proteins, PhoX and PstS, which were produced in high abundance. The results from this study are the first evidence of Pho regulon activation in X. citri and bring new insights for studies of the bacterium's metabolism and physiology. Biological significance: Using proteomics and bioinformatics analysis, we showed for the first time that the phytopathogenic bacterium X. citri conserves a set of proteins that belong to the Pho regulon and that are induced during phosphate starvation. The most relevant, in terms of conservation and up-regulation, were the periplasmic binding proteins PstS and PhoX from the ABC transporter PstSBAC for phosphate, the two-component system composed of PhoR/PhoB, and the alkaline phosphatase PhoA.
Abstract:
In the current study, a new approach was developed to correct the effect that moisture reduction after virgin olive oil (VOO) filtration exerts on the apparent increase in secoiridoid content, by using an internal standard during extraction. First, VOOs of two main Spanish varieties (Picual and Hojiblanca) were subjected to industrial filtration. The moisture content was then determined in unfiltered and filtered VOOs, and liquid-liquid extraction of phenolic compounds was performed using different internal standards. The resulting extracts were analyzed by HPLC-ESI-TOF/MS in order to gain maximum information on the phenolic profiles of the samples under study. The reducing effect of filtration on the moisture content, phenolic alcohols, and flavones was confirmed at the industrial scale. Oleuropein was chosen as the internal standard and, for the first time, the apparent increase of secoiridoids in filtered VOO was corrected using a correction coefficient (Cc) calculated from the variation of the internal standard area between filtered and unfiltered VOO during extraction. This approach gave the real concentration of secoiridoids in filtered VOO and clarified the effect of the filtration step on the phenolic fraction. This finding is of great importance for future studies that seek to quantify phenolic compounds in VOOs.
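A minimal arithmetic sketch of the correction is given below, assuming Cc is taken as the ratio of the internal-standard (oleuropein) peak areas recovered from filtered and unfiltered oil; the authors' exact definition of Cc may differ, and the numbers are purely illustrative.

```python
# Minimal sketch of the kind of internal-standard correction described above,
# assuming the correction coefficient Cc is the ratio of the oleuropein
# (internal standard) peak areas recovered from filtered vs. unfiltered oil.
# The authors' exact definition of Cc may differ; values are illustrative.
area_is_unfiltered = 1.00e6      # internal-standard peak area, unfiltered VOO
area_is_filtered = 1.12e6        # apparently larger after filtration (less water)
cc = area_is_filtered / area_is_unfiltered

apparent_secoiridoid = 350.0     # mg/kg, measured in the filtered oil
corrected_secoiridoid = apparent_secoiridoid / cc
print(f"Cc = {cc:.2f}; corrected content = {corrected_secoiridoid:.0f} mg/kg")
```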
Abstract:
Lipidic mixtures present a particular phase-change profile that is highly affected by their unique crystalline structure. However, classical solid-liquid equilibrium (SLE) thermodynamic modeling approaches, which assume the solid phase to be a pure component, sometimes fail to describe the phase behavior correctly, and their inadequacy increases with the complexity of the system. To overcome some of these problems, this study describes a new procedure, the Crystal-T algorithm, to depict the SLE of fatty binary mixtures presenting solid solutions. Considering the non-ideality of both the liquid and solid phases, the algorithm determines the temperatures at which the first and the last crystal of the mixture melt. The evaluation is focused on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described using excess-Gibbs-energy-based equations, with the group-contribution UNIFAC model used to calculate the activity coefficients of both the liquid and solid phases. Very low deviations between calculated and experimental data demonstrate the strength of the algorithm, contributing to the enlargement of the scope of SLE modeling.
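A minimal sketch of the classical SLE relation used to locate a liquidus point is shown below, assuming ideal phases and a pure solid; it omits the UNIFAC activity coefficients and the solid-solution treatment that distinguish the Crystal-T algorithm, and the property values are rough placeholders.

```python
# Minimal sketch of the classical SLE relation used to locate a liquidus point,
#   ln(x_i * gamma_i) = (dH_fus_i / R) * (1/T_m_i - 1/T),
# solved here with ideal phases (gamma_i = 1) and a pure solid, i.e. without
# UNIFAC activity coefficients or the solid-solution treatment of Crystal-T.
# Property values below are rough illustrative numbers, not measured data.
import numpy as np

R = 8.314  # J/(mol K)

def liquidus_T(x, T_m, dH_fus):
    """Equilibrium melting T of each component at liquid mole fraction x (ideal)."""
    return 1.0 / (1.0 / T_m - R * np.log(x) / dH_fus)

# Hypothetical binary: a triacylglycerol-like and a fatty-alcohol-like component
T_m = np.array([330.0, 320.0])          # pure-component melting temperatures, K
dH = np.array([120e3, 60e3])            # enthalpies of fusion, J/mol

for x1 in (0.2, 0.5, 0.8):
    x = np.array([x1, 1.0 - x1])
    T_eq = liquidus_T(x, T_m, dH)
    # The liquidus temperature is set by whichever solid crystallizes first
    print(f"x1 = {x1:.1f}: liquidus ~ {T_eq.max():.1f} K (component {T_eq.argmax() + 1})")
```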