906 results for General Linear Methods


Relevance:

30.00%

Publisher:

Abstract:

For two multinormal populations with equal covariance matrices, the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. Under the assumption of a known covariance matrix, its distribution is derived, and the expectations of its actual and apparent error rates are evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes. The quadratic discriminant function is studied, its distribution reviewed, and the evaluation of its probabilities of misclassification discussed. For known covariance matrices, the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectations of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function for this case is also considered. Estimation of the true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive, and kernel methods are compared by evaluating their biases and mean square errors, and some algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable; the source of this superiority is investigated by considering its performance at various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product normal kernel density estimates are used to give a kernel estimator of the true log-odds, and the effect of correlation between the variables when product kernels are used is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimensions, the product kernel method is a good estimator of the true log-odds.
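
The sample linear discriminant function described above can be sketched numerically. The populations, means, and covariance below are invented for illustration; the rule allocates an observation to population 1 when the estimated log-odds are positive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented training data: two bivariate normal populations, equal covariance
n1, n2 = 40, 20                          # deliberately unequal sample sizes
mu1, mu2 = np.array([0.0, 0.0]), np.array([1.5, 1.0])
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
X1 = rng.multivariate_normal(mu1, cov, size=n1)
X2 = rng.multivariate_normal(mu2, cov, size=n2)

# Estimative plug-in: sample means and pooled covariance
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
S = ((n1 - 1) * np.cov(X1, rowvar=False)
     + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
S_inv = np.linalg.inv(S)

def log_odds(x):
    """Sample linear discriminant: estimate of ln f1(x)/f2(x);
    allocate x to population 1 when positive."""
    return (x - 0.5 * (m1 + m2)) @ S_inv @ (m1 - m2)
```

Points near mu1 yield positive log-odds and points near mu2 negative ones, which is the allocation behaviour the abstract's comparisons are built on.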


To investigate the symptom burden experiences of individuals with inflammatory bowel disease (IBD), an explanatory sequential mixed methods study was conducted. A cross-sectional, correlational survey was first undertaken. Symptom burden was measured using a modified, disease-specific version of the Memorial Symptom Assessment Scale, which was administered to a consecutive sample of individuals with IBD (n = 247) at an IBD outpatient department in one urban teaching hospital in Ireland. Disease activity was determined using clinical disease activity indices, which were completed by the consulting physician. A sequential qualitative, descriptive study was then conducted, aimed at explaining noteworthy quantitative findings. A criterion-related purposeful sample of seven participants from the quantitative study was recruited. Semi-structured face-to-face interviews were conducted using an interview guide, and data were analysed using content analysis. Findings revealed that participants experienced a median of 10 symptoms during the last week; however, as many as 16 symptoms were experienced during active disease. The most burdensome symptoms were lack of energy, bowel urgency, diarrhoea, feeling bloated, flatulence and worry. Total symptom burden was found to be low, with a mean score of 0.56 on a scale ranging from 0 to 4. Participants with active disease (M = 0.81, SD = 0.48; n = 68) had almost double the mean total symptom burden score of participants with inactive disease (M = 0.46, SD = 0.43; n = 166) (p < 0.001). Mean total psychological symptom burden was found to be significantly greater than mean total physical symptom burden (rho = 0.73, n = 247, p < 0.001). Self-reported disease control, gender, number of flare-ups in the last two years, and smoking status were found to be significant predictors of total symptom burden, with self-reported disease control identified as the strongest predictor.
Qualitative data revealed tiredness, pain, bowel symptoms, worry and fear as being burdensome. Furthermore, symptom burden experiences were described in terms of their impact in restricting aspects of daily activities, which accumulated into restrictions on general life events. Psychological symptom burden was revealed as more problematic than physical symptom burden because of its constant nature, with physical and psychological symptoms described as occurring in a cyclical manner. Participants revealed that disease control was evaluated not only in terms of symptoms, but also in terms of their ability to control the impact of symptoms on their lives. This study highlights the considerable number of symptoms and the most burdensome symptoms experienced by individuals with IBD, during both active and inactive disease. The findings have important implications for symptom assessment, underlining the need to encompass both physical and psychological symptoms; greater attention also needs to be paid to the psychological aspects of IBD care.


Background: Inclusive education is central to contemporary discourse internationally, reflecting societies' wider commitment to social inclusion. Education has witnessed transforming approaches that have created differing distributions of power, resource allocation and accountability, and multiple actors are being forced to consider changes to how key services and supports are organised. This research constitutes a case study situated within this broader social service dilemma of how to distribute finite resources equitably to meet individual need while advancing inclusion. It focuses on the national directive on inclusive educational practice for primary schools, Department of Education and Science Special Education Circular 02/05, which introduced the General Allocation Model (GAM) within the legislative context of the Education of Persons with Special Educational Needs (EPSEN) Act (Government of Ireland, 2004). This research could help to inform policy with 'facts about what is happening on the ground' (Quinn, 2013). Research Aims: The research set out to unearth the assumptions and definitions embedded within the policy document; to analyse how those who are at the coalface of policy, and who interface with multiple interests in primary schools, understand the GAM and respond to it; and to investigate its effects on students and their education. It examines student outcomes in the primary schools where the GAM was investigated. Methods and Sample: The post-structural study acknowledges the importance of policy analysis that explicitly links the 'bigger worlds' of global and national policy contexts to the 'smaller worlds' of policies and practices within schools and classrooms, and it insists upon taking the detail seriously (Ozga, 1990). A mixed methods approach to data collection and analysis was applied.
To secure the perspectives of key stakeholders, semi-structured interviews were conducted with primary school principals, class teachers and learning support/resource teachers (n=14) in three distinct mainstream, non-DEIS schools. Data from the schools and their environs provided a profile of students. The researcher then used the Pobal Maps Facility (available at www.pobal.ie) to identify the Small Area (SA) in which each student resides, and to assign values to each address based on the Pobal HP Deprivation Index (Haase and Pratschke, 2012). Analysis of the datasets, guided by the conceptual framework of the policy cycle (Ball, 1994), revealed a number of significant themes. Results: The data illustrate that the main model of support for student need is withdrawal from the classroom, under a policy that espouses inclusion. The quantitative data, in particular, highlighted an association between segregated practice and lower socio-economic status (LSES) backgrounds: up to 83% of the students in special education programmes are from LSES backgrounds, and in some schools 94% of students from LSES backgrounds are withdrawn from classrooms daily for special education. While the internal processes of schooling are not solely to blame for class inequalities, this study reveals the power of professionals to order children in school, which has implications for segregated special education practice. Such agency on the part of key actors in the context of practice relates to 'local constructions of dis/ability', which are influenced by teacher habitus (Bourdieu, 1984). The researcher contends that inclusive education has not resulted in positive outcomes for students from LSES backgrounds because it is built on faulty assumptions centred on a psycho-medical perspective of dis/ability; that is, placement decisions do not consider the intersectionality of dis/ability with class or culture.
This study argues that students' need for support is better understood as 'home/school discontinuity' rather than 'disability'. Moreover, the study unearths the power of some parents to use social and cultural capital to secure eligibility for enhanced resources. A hierarchical system has therefore developed in mainstream schools as a result of funding models intended to support need in inclusive settings. Furthermore, although all schools in the study are 'ordinary' schools, participants acknowledged that some schools are more 'advantaged', which may suggest that 'ordinary' schools serve to 'bury class' (Reay, 2010) as a key marker in allocating resources. The research suggests that general allocation models of funding demand a systematic approach grounded in reallocating funds from where they have less benefit to where they have more. The calculation of the composite Haase Value for the student cohort in receipt of special education support, as adopted in this study, could usefully be applied at a national level to ensure that the greatest level of support is targeted at the greatest need. Conclusion: In summary, the study reveals that existing structures both constrain and enable agents, whose interactions produce intended and unintended consequences. The study suggests that policy should be viewed as a continuous and evolving cycle (Ball, 1994) in which actors in each of the social contexts share responsibility for the evolution of an education system that is equitable, excellent and inclusive.


In this article, we offer a new way of exploring the relationships between three dimensions of a business operation: the stage of business development, the methods of creativity employed, and major cultural values. Although each of these has separately received enormous attention from the management research community, as evidenced by a large volume of research studies, few studies attempt to describe the logic that connects these three important aspects of a business, let alone provide empirical evidence supporting significant relationships among these variables. The paper also provides a data set and an empirical investigation of it, using categorical data analysis, and concludes that examination of these possible relationships is meaningful and feasible even for seemingly unquantifiable information. The results also show that the most significant category among all creativity methods employed in Vietnamese enterprises is the "creative disciplines" rule in the "entrepreneurial phase," while, in general, creative disciplines have played a critical role in explaining the structure of our data sample for both stages of development under consideration.
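
The kind of categorical data analysis the paper describes can be illustrated with a toy contingency table of development stage against creativity method (all counts invented, not the paper's data); a Pearson chi-square statistic then tests the association.

```python
import numpy as np

# Invented counts: rows = stage of development, cols = creativity method
table = np.array([
    [30, 10,  5],   # entrepreneurial phase
    [12, 25, 18],   # growth phase
], dtype=float)

row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
expected = row @ col / table.sum()      # counts expected under independence

chi2 = ((table - expected) ** 2 / expected).sum()
dof = (table.shape[0] - 1) * (table.shape[1] - 1)
# chi2 is about 20.7 with dof = 2, well above the 0.1% critical value
# (13.82), so stage and method are associated in this toy table
```

With real survey responses the same computation applies once answers are cross-tabulated, which is what makes "seemingly unquantifiable" information testable.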


The quantification of protein-ligand interactions is essential for systems biology, drug discovery, and bioengineering. Ligand-induced changes in protein thermal stability provide a general, quantifiable signature of binding and may be monitored with dyes such as Sypro Orange (SO), which increase their fluorescence emission intensities upon interaction with the unfolded protein. This method is an experimentally straightforward, economical, and high-throughput approach for observing thermal melts using commonly available real-time polymerase chain reaction instrumentation. However, quantitative analysis requires careful consideration of the dye-mediated reporting mechanism and the underlying thermodynamic model. We determine affinity constants by analysis of ligand-mediated shifts in melting-temperature midpoint values. Ligand affinity is determined in a ligand titration series from shifts in free energies of stability at a common reference temperature. Thermodynamic parameters are obtained by fitting the inverse first derivative of the experimental signal reporting on thermal denaturation with equations that incorporate linear or nonlinear baseline models. We apply these methods to fit protein melts monitored with SO that exhibit prominent nonlinear post-transition baselines. SO itself can perturb the equilibria on which it reports. We analyze cases in which the ligand binds to both the native and denatured states or to the native state only, and cases in which protein:ligand stoichiometry needs to be treated explicitly.
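
The midpoint-shift step can be sketched as follows. The two-state melt, the van 't Hoff enthalpy, and the 6 K ligand stabilization are invented illustrative values; as in the article, the midpoint is located from the first derivative of the unfolding signal.

```python
import numpy as np

T = np.linspace(300.0, 360.0, 601)    # temperature grid, K
R = 8.314e-3                          # gas constant, kJ/(mol K)

def frac_unfolded(Tm, dH=300.0):
    """Two-state fraction unfolded; dG linearized about Tm (van 't Hoff)."""
    dG = dH * (1.0 - T / Tm)          # kJ/mol
    K = np.exp(-dG / (R * T))
    return K / (1.0 + K)

apo = frac_unfolded(Tm=320.0)
bound = frac_unfolded(Tm=326.0)       # invented 6 K ligand stabilization

def tm_from_derivative(signal):
    """Locate the midpoint as the extremum of the first derivative."""
    return T[np.argmax(np.gradient(signal, T))]

dTm = tm_from_derivative(bound) - tm_from_derivative(apo)
```

In a real analysis the derivative curve would additionally carry the dye baseline terms the abstract describes, and dTm across a titration series feeds the affinity-constant fit.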


Predicting from first-principles calculations whether mixed metallic elements phase-separate or form ordered structures is a major challenge of current materials research. It can be partially addressed in cases where experiments suggest the underlying lattice is conserved, using cluster expansion (CE) and a variety of exhaustive evaluation or genetic search algorithms. Evolutionary algorithms have recently been introduced to search for stable off-lattice structures at fixed mixture compositions, but the general off-lattice problem is still unsolved. We present an integrated approach of CE and high-throughput ab initio calculations (HT) applicable to the full range of compositions in binary systems where the constituent elements or the intermediate ordered structures have different lattice types. The HT method replaces the search algorithms with direct calculation of a moderate number of naturally occurring prototypes representing all crystal systems, and guides CE calculations of derivative structures. This synergy achieves the precision of the CE and the guiding strength of the HT. Its application to poorly characterized binary Hf systems, believed to be phase-separating, defines three classes of alloys where CE and HT complement each other to uncover new ordered structures.
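
The stability criterion behind this kind of screening can be sketched with a toy composition-energy convex hull (all formation energies invented): a candidate structure is predicted stable only if no tie-line between other phases lies below it.

```python
import numpy as np

# (composition x of element B, formation energy in eV/atom), invented values
candidates = {
    "A":   (0.00,  0.000),
    "A3B": (0.25, -0.120),
    "AB":  (0.50, -0.080),
    "AB3": (0.75, -0.150),
    "B":   (1.00,  0.000),
}

def on_hull(name):
    """Stable iff no tie-line between two other phases lies below the point."""
    x0, e0 = candidates[name]
    others = [v for k, v in candidates.items() if k != name]
    for i in range(len(others)):
        for j in range(i + 1, len(others)):
            (x1, e1), (x2, e2) = others[i], others[j]
            if min(x1, x2) < x0 < max(x1, x2):
                # energy of the two-phase mixture (tie-line) at x0
                e_line = e1 + (e2 - e1) * (x0 - x1) / (x2 - x1)
                if e_line < e0:
                    return False
    return True

stable = [k for k in candidates if on_hull(k)]
# "AB" is unstable here: the A3B-AB3 tie-line lies below it at x = 0.5
```

In the HT workflow each prototype's ab initio formation energy is placed on such a hull; structures above the hull decompose into the bracketing stable phases.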


Computer simulations of reaction processes in solution generally rely on the definition of a reaction coordinate and the determination of the thermodynamic changes of the system along that coordinate. The reaction coordinate often consists of characteristic geometrical properties of the reactive solute species, while the contributions of solvent molecules are implicitly included in the thermodynamics of the solute degrees of freedom. However, solvent dynamics can provide the driving force for the reaction process, and in such cases an explicit description of the solvent contribution to the free energy of the reaction process becomes necessary. We report here a method that can be used to analyze the solvent contributions to reaction activation free energies from combined QM/MM minimum free-energy path simulations. The method was applied to the self-exchange SN2 reaction of CH3Cl + Cl-, showing the importance of solvent-solute interactions in the reaction process. The results are further discussed in the context of the coupling between solvent and solute molecules in reaction processes.


We study the problem of supervised linear dimensionality reduction from an information-theoretic viewpoint. The linear projection matrix is designed by maximizing the mutual information between the projected signal and the class label. By harnessing a recent theoretical result on the gradient of mutual information, this optimization problem can be solved directly using gradient descent, without requiring simplification of the objective function. Theoretical analysis and empirical comparisons are made between the proposed method and two closely related methods, and comparisons are also made with a method in which Rényi entropy is used to define the mutual information (in this case the gradient may be computed simply, under a special parameter setting). Relative to these alternative approaches, the proposed method achieves promising results on real datasets.
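
A minimal sketch of the idea, with two loud simplifications: the paper uses an analytic gradient of mutual information, whereas this toy version uses a Gaussian plug-in MI estimate and finite-difference gradients, on invented two-class data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented two-class data in 2-D; only the first coordinate is informative
n = 400
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 2))
X[:, 0] += 2.5 * y

def mutual_info(w):
    """Gaussian plug-in estimate of I(Xw; y) = H(Xw) - sum_c p(c) H(Xw|c)."""
    z = X @ w
    H = lambda v: 0.5 * np.log(2 * np.pi * np.e * v.var())
    return H(z) - sum((y == c).mean() * H(z[y == c]) for c in (0, 1))

# Gradient ascent on the projection direction; central differences stand in
# for the analytic MI gradient used in the paper
w = np.array([0.1, 1.0])
for _ in range(200):
    g = np.zeros(2)
    for k in range(2):
        e = np.zeros(2)
        e[k] = 1e-3
        g[k] = (mutual_info(w + e) - mutual_info(w - e)) / 2e-3
    w = (w + 0.5 * g) / np.linalg.norm(w + 0.5 * g)

alignment = abs(w[0])   # should approach 1: the informative axis is recovered
```

The learned direction rotates toward the class-separating axis because that is where the projected signal carries the most information about the label.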


The growth and proliferation of invasive bacteria in engineered systems is an ongoing problem. While there are a variety of physical and chemical processes to remove and inactivate bacterial pathogens, there are many situations in which these tools are no longer effective or appropriate for the treatment of a microbial target. For example, certain strains of bacteria are becoming resistant to commonly used disinfectants, such as chlorine and UV. Additionally, the overuse of antibiotics has contributed to the spread of antibiotic resistance, and there is concern that wastewater treatment processes are contributing to the spread of antibiotic resistant bacteria.

Due to the continually evolving nature of bacteria, it is difficult to develop methods for universal bacterial control across a wide range of engineered systems, as many of our treatment processes are static in nature. Still, invasive bacteria are present in many natural and engineered systems where the application of broad-acting disinfectants is impractical because their use may inhibit the original desired bioprocesses. Therefore, to better control the growth of treatment-resistant bacteria and to address limitations of current disinfection processes, novel tools that are both specific and adaptable need to be developed and characterized.

In this dissertation, two possible biological disinfection processes were investigated for use in controlling invasive bacteria in engineered systems. First, antisense gene silencing, which is the specific use of oligonucleotides to silence gene expression, was investigated. This work was followed by the investigation of bacteriophages (phages), which are viruses that are specific to bacteria, in engineered systems.


For the antisense gene silencing work, a computational approach was used to quantify the number of off-targets and to determine their effects in prokaryotic organisms. For Escherichia coli K-12 MG1655 and Mycobacterium tuberculosis H37Rv, the mean number of off-targets was found to be 15.0 ± 13.2 and 38.2 ± 61.4, respectively, which results in a reduction of greater than 90% of the effective oligonucleotide concentration. It was also demonstrated that there was high variability in the number of off-targets over the length of a gene, but that, on average, there was no general gene location that could be targeted to reduce off-targets; this analysis therefore needs to be performed for each gene in question. It was further demonstrated that the thermodynamic binding energy between the oligonucleotide and the mRNA accounted for 83% of the variation in silencing efficiency, whereas the number of off-targets explained 43% of that variance. This suggests that optimizing thermodynamic parameters should be prioritized over minimizing the number of off-targets. In conclusion, for the antisense work, these results suggest that off-target hybrids can account for a greater than 90% reduction in the concentration of the silencing oligonucleotides, and that the effective concentration can be increased through the rational design of silencing targets that minimizes off-target hybrids.
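
The off-target counting step can be illustrated on a toy example (both the "genome" and the target site below are invented, far shorter than a real genome): slide a window along the sequence and count near-complementary sites within a mismatch tolerance.

```python
# Invented toy sequence and antisense target site (10-mer)
genome = ("ATGGCGTACGTTAGCCGATACGGATCCGTACGTTAGCAAGT"
          "TACGTTAGCCGATGGCATCGATCCGTAGGCTAACGTTAGCC")
target = "TACGTTAGCC"

def count_sites(seq, site, max_mismatch=1):
    """Count windows of len(site) within max_mismatch mismatches of site."""
    hits = 0
    for i in range(len(seq) - len(site) + 1):
        window = seq[i:i + len(site)]
        if sum(a != b for a, b in zip(window, site)) <= max_mismatch:
            hits += 1
    return hits

total = count_sites(genome, target)   # includes the intended target sites
off_targets = total - 1               # everything beyond one intended site
```

A genome-scale version of this count, combined with hybridization thermodynamics for each hit, is what yields the effective-concentration reductions described above.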

Regarding the work with phages, the disinfection rates of bacteria in the presence of phages were determined. The disinfection rates of E. coli K12 MG1655 in the presence of coliphage Ec2 ranged up to 2 h⁻¹ and were dependent on both the initial phage and bacterial concentrations. Increasing initial phage concentrations resulted in increasing disinfection rates, and, generally, increasing initial bacterial concentrations resulted in increasing disinfection rates. However, disinfection rates were found to plateau at higher bacterial and phage concentrations. A multiple linear regression model was used to predict the disinfection rates as a function of the initial phage and bacterial concentrations, and this model was able to explain 93% of the variance in the disinfection rates. The disinfection rates were also modeled with a particle aggregation model. The results from these model simulations suggested that at lower phage and bacterial concentrations there are not enough collisions to support active disinfection, which therefore limits the conditions and systems where phage-based bacterial disinfection is possible. Additionally, the particle aggregation model overpredicted the disinfection rates at higher phage and bacterial concentrations of 10⁸ PFU/mL and 10⁸ CFU/mL, suggesting that other interactions were occurring at these higher concentrations. Overall, this work highlights the need for alternative models to describe the dynamics of this system more accurately across a range of phage and bacterial concentrations. Finally, the minimum required hydraulic residence time was calculated for a continuous stirred-tank reactor and a plug flow reactor (PFR) as a function of both the initial phage and bacterial concentrations, which suggested that phage treatment in a PFR is theoretically possible.
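
The multiple linear regression step can be sketched with invented rate data on a grid of initial concentrations, log-transformed as is conventional for concentration predictors (these numbers are illustrative, not the dissertation's measurements).

```python
import numpy as np

# Invented disinfection rates (h^-1) over a grid of initial concentrations
log_phage = np.array([5, 5, 6, 6, 7, 7, 8, 8], dtype=float)   # log10 PFU/mL
log_bact  = np.array([5, 7, 5, 7, 5, 7, 5, 7], dtype=float)   # log10 CFU/mL
rate      = np.array([0.2, 0.5, 0.5, 0.9, 0.8, 1.4, 1.1, 1.9])

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones_like(rate), log_phage, log_bact])
coef, *_ = np.linalg.lstsq(A, rate, rcond=None)

pred = A @ coef
r2 = 1 - np.sum((rate - pred) ** 2) / np.sum((rate - rate.mean()) ** 2)
```

Both fitted slopes come out positive, mirroring the finding that higher initial phage and bacterial concentrations drive faster disinfection.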

In addition to determining disinfection rates, the long-term bacterial growth inhibition potential was determined for a variety of phages with both Gram-negative and Gram-positive bacteria. It was determined that, on average, phages can be used to inhibit bacterial growth for up to 24 h, and that this effect was concentration dependent for various phages at specific time points. Additionally, it was found that a phage cocktail was no more effective at inhibiting bacterial growth over the long term than the best-performing phage in isolation.

Finally, for an industrial application, the use of phages to inhibit invasive Lactobacilli in ethanol fermentations was investigated. It was demonstrated that phage 8014-B2 can achieve a greater than 3-log inactivation of Lactobacillus plantarum during a 48 h fermentation. Additionally, it was shown that phages can be used to protect final product yields and maintain yeast viability. Through modeling the fermentation system with differential equations it was determined that there was a 10 h window in the beginning of the fermentation run, where the addition of phages can be used to protect final product yields, and after 20 h no additional benefit of the phage addition was observed.

In conclusion, this dissertation improved the current methods for designing antisense gene silencing targets for prokaryotic organisms, and characterized phages from an engineering perspective. First, the current design strategy for antisense targets in prokaryotic organisms was improved through the development of an algorithm that minimized the number of off-targets. For the phage work, a framework was developed to predict the disinfection rates in terms of the initial phage and bacterial concentrations. In addition, the long-term bacterial growth inhibition potential of multiple phages was determined for several bacteria. In regard to the phage application, phages were shown to protect both final product yields and yeast concentrations during fermentation. Taken together, this work suggests that the rational design of phage treatment is possible and further work is needed to expand on this foundation.


PURPOSE: A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion-weighted imaging. THEORY: Images with reduced artifacts are reconstructed with an iterative projection onto convex sets (POCS) procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. METHODS: The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved diffusion-weighted imaging data corresponding to different k-space trajectories and matrix condition numbers. RESULTS: Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. CONCLUSION: POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods.
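
The core POCS idea, stripped of the MRI specifics, is to project the iterate alternately onto each convex constraint set until it satisfies all of them. A toy sketch in two dimensions, with a hyperplane standing in for a data-consistency constraint and a box standing in for a sensitivity-profile constraint (both sets invented for illustration):

```python
import numpy as np

def project_hyperplane(x, a, b):
    """Project x onto the convex set {z : a.z = b}."""
    return x - (a @ x - b) / (a @ a) * a

def project_box(x, lo, hi):
    """Project x onto the convex set {z : lo <= z <= hi}, elementwise."""
    return np.clip(x, lo, hi)

a, b = np.array([1.0, 2.0]), 2.0     # hyperplane x + 2y = 2
x = np.array([5.0, 5.0])             # start outside both sets
for _ in range(100):
    x = project_box(project_hyperplane(x, a, b), 0.0, 3.0)
# x now lies (to numerical precision) in the intersection of the two sets
```

Each projection is non-expansive, so the alternating iteration converges to a point in the intersection; the MRI version uses the same loop with k-space consistency and coil-sensitivity operators as the sets.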


BACKGROUND: Adenosine-induced transient flow arrest has been used to facilitate clip ligation of intracranial aneurysms. However, the starting dose that is most likely to produce an adequate duration of profound hypotension remains unclear. We reviewed our experience to determine the dose-response relationship and apparent perioperative safety profile of adenosine in intracranial aneurysm patients. METHODS: This case series describes 24 aneurysm clip ligation procedures performed under an anesthetic consisting of remifentanil, low-dose volatile anesthetic, and propofol in which adenosine was used. The report focuses on the doses administered; the duration of systolic blood pressure below 60 mm Hg (SBP < 60 mm Hg); and any cardiovascular, neurologic, or pulmonary complications observed in the perioperative period. RESULTS: A median dose of 0.34 mg/kg ideal body weight (range: 0.29-0.44 mg/kg) resulted in SBP < 60 mm Hg for a median of 57 seconds (range: 26-105 seconds). There was a linear relationship between the log-transformed dose of adenosine and the duration of SBP < 60 mm Hg (R² = 0.38). Two patients developed transient, hemodynamically stable atrial fibrillation, 2 had postoperative troponin levels > 0.03 ng/mL without any evidence of cardiac dysfunction, and 3 had postoperative neurologic changes. CONCLUSIONS: For intracranial aneurysms in which temporary occlusion is impractical or difficult, adenosine is capable of providing brief periods of profound systemic hypotension with low perioperative morbidity. On the basis of these data, a dose of 0.3 to 0.4 mg/kg ideal body weight may be the recommended starting dose to achieve approximately 45 seconds of profound systemic hypotension during a remifentanil/low-dose volatile anesthetic with propofol-induced burst suppression.
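
The reported analysis can be sketched as an ordinary least-squares fit of hypotension duration against log(dose). The seven data points below are invented for illustration and fall in the reported dose and duration ranges; they are deliberately clean, whereas the study's actual fit explained only R² = 0.38 of the variance.

```python
import numpy as np

# Invented illustrative data within the reported ranges
dose = np.array([0.29, 0.31, 0.33, 0.34, 0.36, 0.40, 0.44])       # mg/kg IBW
duration = np.array([26.0, 35.0, 50.0, 57.0, 62.0, 80.0, 105.0])  # seconds

x = np.log(dose)                             # log-transformed dose
slope, intercept = np.polyfit(x, duration, 1)

pred = slope * x + intercept
r2 = 1 - np.sum((duration - pred) ** 2) / np.sum((duration - duration.mean()) ** 2)
```

A positive slope on the log scale means each proportional increase in dose buys roughly the same additional seconds of hypotension, which is why a weight-scaled starting dose can target a duration such as 45 seconds.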


INTRODUCTION: Adherence to glaucoma medications is essential for successful treatment of the disease but is complex and difficult for many of our patients. Health coaching has been used successfully in the treatment of other chronic diseases. This pilot study explores the use of health coaching for glaucoma care. METHODS: A mixed methods study design was used to assess the health coaching intervention for glaucoma patients. The intervention consisted of four to six health coaching sessions with a certified health coach via telephone. Quantitative measures included demographic and health information, adherence to glaucoma medications (using the visual analog adherence scale and a medication event monitoring system), and an exit survey rating the experience. Qualitative measures included a precoaching health questionnaire, notes made by the coach during the intervention, and an exit interview with the subjects at the end of the study. RESULTS: Four glaucoma patients participated in the study; all derived benefits from the health coaching. Study subjects demonstrated increased glaucoma drop adherence in response to the coaching intervention, on both the visual analog scale and the medication event monitoring system. Their qualitative feedback reflected a perceived improvement in both eye and general health self-care, and they stated that they would recommend health coaching to friends or family members. CONCLUSION: Health coaching was helpful to the glaucoma patients in this study; it has the potential to improve glaucoma care and overall health.


We present some results from a broader study whose general aim is to describe and characterise the inductive reasoning used by third- and fourth-year secondary school students when solving tasks involving linear and quadratic sequences (Cañadas, 2007). We identify differences in how students use some of the steps considered in the description of inductive reasoning when solving two of the six problems posed to them. We describe these differences and analyse them in terms of the characteristics of the problems.


Quasi-Newton methods are applied to solve interface problems which arise from domain decomposition methods. These interface problems are usually sparse systems of linear or nonlinear equations. We are interested in applying these methods to systems of linear equations where we are not able or willing to calculate the Jacobian matrices as well as to systems of nonlinear equations resulting from nonlinear elliptic problems in the context of domain decomposition. Suitability for parallel implementation of these algorithms on coarse-grained parallel computers is discussed.
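
A minimal sketch of a quasi-Newton (Broyden) iteration for a small nonlinear system, in the Jacobian-free spirit described above: after an initial finite-difference Jacobian, each step reuses a cheap rank-one update instead of recomputing derivatives. The system and starting point are invented for illustration.

```python
import numpy as np

def F(x):
    """Toy nonlinear system: circle of radius 2 intersected with y = x."""
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                     x[0] - x[1]])

def jac_fd(F, x, h=1e-6):
    """Finite-difference Jacobian, used only to start the iteration."""
    f0, n = F(x), len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(x + e) - f0) / h
    return J

x = np.array([1.0, 0.5])
B = jac_fd(F, x)                             # initial Jacobian approximation
for _ in range(100):
    s = np.linalg.solve(B, -F(x))            # quasi-Newton step
    x_new = x + s
    yv = F(x_new) - F(x)
    B += np.outer(yv - B @ s, s) / (s @ s)   # Broyden rank-one update
    x = x_new
    if np.linalg.norm(F(x)) < 1e-12:
        break
```

The update enforces the secant condition B s = yv, so B tracks the Jacobian along the iterates without any further derivative evaluations; for the interface systems above, F would instead be the (possibly expensive) subdomain solve.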


The powerful general Pacala-Hassell host-parasitoid model for a patchy environment, which allows host density-dependent heterogeneity (HDD) to be distinguished from between-patch, host density-independent heterogeneity (HDI), is reformulated within the generalized linear model (GLM) family. This improves accessibility through the provision of general software within well-known statistical systems, and allows a rich variety of models to be formulated. Covariates such as age class, host density and abiotic factors may be included easily. For the case where there is no HDI, the formulation is a simple GLM. When there is HDI in addition to HDD, the formulation is a hierarchical generalized linear model. Two forms of HDI model are considered, both with between-patch variability: one has binomial variation within patches and one has extra-binomial, overdispersed variation within patches. Examples are given demonstrating parameter estimation with standard errors, and hypothesis testing. For one example given, the extra-binomial component of the HDI heterogeneity in parasitism is itself shown to be strongly density dependent.
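
The no-HDI case, a plain binomial GLM of parasitism against host density, can be sketched as follows. The patch data are simulated (not from the paper), and the logit-link fit is done by the iteratively reweighted least squares (IRLS) loop that glm-style software runs internally.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated patch data: parasitism probability rises with log host density
# (pure HDD, no HDI), so a plain binomial GLM is the appropriate model
hosts = rng.integers(5, 60, size=30)                    # hosts per patch
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * np.log(hosts))))
parasitized = rng.binomial(hosts, p_true)

# Binomial GLM with logit link, fitted by IRLS
X = np.column_stack([np.ones(len(hosts)), np.log(hosts)])
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))                     # fitted proportions
    W = hosts * mu * (1.0 - mu)                         # IRLS weights
    z = eta + (parasitized - hosts * mu) / W            # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
```

The fitted slope on log host density recovers the direct density dependence built into the simulation; adding a between-patch random effect to this linear predictor gives the hierarchical GLM used when HDI is present.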