887 results for Analysis and statistical methods
Abstract:
We investigate the feasibility of simultaneously suppressing amplification noise and nonlinearity, the two most fundamental limiting factors in modern optical communication. To accomplish this task we developed a general design optimisation technique based on the concepts of noise and nonlinearity management. We demonstrate the high efficiency of the new approach by applying it to the design optimisation of transmission lines with periodic dispersion compensation using Raman and hybrid Raman-EDFA amplification. Moreover, using nonlinearity management considerations, we showed that the optimal performance in high bit-rate dispersion-managed fibre systems with hybrid amplification is achieved for a certain amplifier spacing, which differs from the commonly known optimal noise performance corresponding to fully distributed amplification. Complete knowledge of the signal statistics, required for an accurate estimation of the bit error rate (BER), is crucial for modern transmission links with strong inherent nonlinearity. We therefore implemented the advanced multicanonical Monte Carlo (MMC) method, acknowledged for its efficiency in estimating distribution tails. We have accurately computed marginal probability density functions for soliton parameters by numerically modelling the Fokker-Planck equation with the MMC simulation technique. Applying the MMC method, we have also studied the BER penalty caused by deviations from the optimal decision level in systems employing in-line 2R optical regeneration. We have demonstrated that in such systems the analytical linear approximation that better fits the central part of the regenerator's nonlinear transfer function produces a more accurate approximation of the BER and BER penalty. Finally, we present a statistical analysis of the RZ-DPSK optical signal at a direct-detection receiver with Mach-Zehnder interferometer demodulation.
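The multicanonical idea can be illustrated with a toy example: a Metropolis walk over the noise variables is biased by the inverse of the current histogram estimate, so rare values of the observable are visited about as often as typical ones, and the bias is refined over a few iterations. The sketch below is a minimal, generic MMC recursion on an assumed toy observable (sum of squared Gaussian noise samples); it is not the authors' transmission model or Fokker-Planck solver.

```python
import numpy as np

rng = np.random.default_rng(0)

def observable(u):
    """Toy stand-in for the system response whose tail statistics we want."""
    return np.sum(u**2)

nbins, lo, hi = 50, 0.0, 60.0
edges = np.linspace(lo, hi, nbins + 1)
log_p = np.zeros(nbins)             # running estimate of the log p.d.f. per bin

def bin_of(x):
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, nbins - 1))

u = rng.normal(size=8)              # state: vector of Gaussian noise samples
x = observable(u)

for iteration in range(15):         # multicanonical recursion
    hist = np.zeros(nbins)
    for _ in range(20000):          # Metropolis walk biased by 1 / current estimate
        u_new = u + 0.3 * rng.normal(size=u.size)
        x_new = observable(u_new)
        log_ratio = (-0.5 * np.sum(u_new**2) + 0.5 * np.sum(u**2)
                     + log_p[bin_of(x)] - log_p[bin_of(x_new)])
        if np.log(rng.random()) < log_ratio:
            u, x = u_new, x_new
        hist[bin_of(x)] += 1
    visited = hist > 0
    log_p[visited] += np.log(hist[visited])   # raise the estimate where the walk spent time
    log_p -= log_p.max()                      # overall normalisation is arbitrary

pdf = np.exp(log_p)
pdf /= pdf.sum()                    # binned p.d.f. estimate, including deep tails
```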
Abstract:
Analyzing geographical patterns by collocating events, objects or their attributes has a long history in surveillance and monitoring, and is particularly applied in environmental contexts, such as ecology or epidemiology. The identification of patterns or structures at some scales can be addressed using spatial statistics, particularly marked point process methodologies. Classification and regression trees are also related to this goal of finding "patterns" by deducing the hierarchy of influence of variables on a dependent outcome. Such variable selection methods have been applied to spatial data, but often without explicitly acknowledging the spatial dependence. Many methods routinely used in exploratory point pattern analysis are 2nd-order statistics, used in a univariate context, though there is also a wide literature on modelling methods for multivariate point pattern processes. This paper proposes an exploratory approach for multivariate spatial data using higher-order statistics built from co-occurrences of events or marks given by the point processes. A spatial entropy measure, derived from these multinomial distributions of co-occurrences at a given order, constitutes the basis of the proposed exploratory methods. © 2010 Elsevier Ltd.
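As an illustration of the construction, the sketch below computes one plausible version of such a measure: for each point, the marks of the point and its nearest neighbours form a co-occurrence of a given order, the multinomial frequencies of these co-occurrences are tallied, and their Shannon entropy is returned. The function name, the neighbour rule and the random data are assumptions made only to convey the idea, not a reproduction of the paper's estimator.

```python
import numpy as np
from collections import Counter
from scipy.spatial import cKDTree

def spatial_entropy(coords, marks, order=2):
    """Shannon entropy of mark co-occurrences among nearest neighbours (toy version)."""
    tree = cKDTree(coords)
    counts = Counter()
    for point in coords:
        # the point itself plus its (order - 1) nearest neighbours
        _, idx = tree.query(point, k=order)
        combo = tuple(sorted(marks[j] for j in np.atleast_1d(idx)))
        counts[combo] += 1
    freqs = np.array(list(counts.values()), dtype=float)
    probs = freqs / freqs.sum()
    return -np.sum(probs * np.log(probs))

rng = np.random.default_rng(1)
coords = rng.uniform(size=(500, 2))              # event locations
marks = rng.choice(["a", "b", "c"], size=500)    # categorical marks
print(spatial_entropy(coords, marks, order=3))
```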
Abstract:
The topic of my research is consumer brand equity (CBE). My thesis is that the success or otherwise of a brand is better viewed from the consumers’ perspective. I specifically focus on consumers as a unique group of stakeholders whose involvement with brands is crucial to the overall success of branding strategy. To this end, this research examines the constellation of ideas on brand equity that have hitherto been offered by various scholars. Through a systematic integration of the concepts and practices identified by these scholars (concepts and practices such as competitiveness, consumer searching, consumer behaviour, brand image, brand relevance, and consumer perceived value), this research identifies CBE as a construct that is shaped, directed and made valuable by the beliefs, attitudes and subjective preferences of consumers. This is done by examining the criteria on the basis of which consumers evaluate brands and make brand purchase decisions. Understanding the criteria by which consumers evaluate brands is crucial for several reasons. First, as the basis upon which consumers select brands changes with consumption norms and technology, understanding the consumer choice process will help in formulating branding strategy. Secondly, an understanding of these criteria will help in formulating a creative and innovative agenda for ‘new brand’ propositions. Thirdly, it will also influence firms’ ability to simulate and mould the plasticity of demand for existing brands. In examining these three issues, this thesis presents a comprehensive account of CBE: the first issue concerns the content of CBE; the second addresses how to develop a reliable and valid measuring instrument for CBE; and the third examines the structural and statistical relationships between the factors of CBE and its consequences for consumer perceived value (CPV). Using LISREL-SIMPLIS 8.30, the study finds direct and significant links between consumer brand equity and consumer value perception.
Abstract:
Purpose: Published data indicate that the polar lipid content of human meibomian gland secretions (MGS) could be anywhere between 0.5% and 13% of the total lipid. The tear film phospholipid composition has not been studied in great detail, and it has been assumed that the relative proportions of lipids in MGS would be maintained in the tear film. The purpose of this work was to determine the concentration of phospholipids in the human tear film. Methods: Liquid chromatography mass spectrometry (LCMS) and thin layer chromatography (TLC) were used to determine the concentration of phospholipid in the tear film. Additionally, an Amplex Red phosphatidylcholine-specific phospholipase C (PLC) assay kit was used to determine the activity of PLC in the tear film. Results: Phospholipids were not detected in any of the tested human tear samples, with the lower limit of detection being 1.3 µg/mL for TLC and 4 µg/mL for LCMS. TLC indicated that diacylglycerol (DAG) may be present in the tear film. PLC was present in the tear film with an activity of approximately 15 mU/mL, equivalent to the removal of head groups from phosphatidylcholine at a rate of approximately 15 µM/min. Conclusions: This work shows that phospholipid was not detected in any of the tested human tear samples (above the lower limits of detection described) and suggests the presence of DAG in the tear film. DAG is known to be present at low concentrations in MGS. These observations indicate that PLC may play a role in modulating the tear film phospholipid concentration.
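The stated equivalence between the measured activity and the head-group removal rate follows from the standard enzyme-unit definition (1 U = 1 µmol of substrate converted per minute), as this back-of-the-envelope conversion shows:

```latex
15\ \mathrm{mU/mL}
= 15\times10^{-3}\ \frac{\mu\mathrm{mol}}{\mathrm{min}\cdot\mathrm{mL}}
= 15\ \frac{\mu\mathrm{mol}}{\mathrm{min}\cdot\mathrm{L}}
= 15\ \mu\mathrm{M/min}.
```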
Abstract:
Background: Poor diet is thought to be a risk factor for many diseases, including age-related macular disease (ARMD), which is the leading cause of blind registration in those aged over 60 years in the developed world. The aims of this study were 1) to evaluate the dietary food intake of three subject groups: participants under the age of 50 years without ARMD (U50), participants over the age of 50 years without ARMD (O50), and participants with ARMD (ARMD), and 2) to obtain information on nutritional supplement usage. Methods: A prospective cross-sectional study conducted in a clinical practice setting. Seventy-four participants were divided into three groups: U50: 20 participants aged < 50 years (range 21 to 40; mean ± SD 37.7 ± 10.1 years); O50: 27 participants aged > 50 years (range 52 to 77; 62.7 ± 6.8 years); and ARMD: 27 participants aged > 50 years with ARMD (range 55 to 79; 66.0 ± 5.8 years). Participants were issued with a three-day food diary and were also asked to provide details of any daily nutritional supplements. The diaries were analysed using FoodBase 2000 software. Data were input by one investigator and statistically analysed using Microsoft Excel for Microsoft Windows XP, employing unpaired t-tests. Results: Group O50 consumed significantly more vitamin C (t = 3.049, p = 0.005) and significantly more fibre (t = 2.107, p = 0.041) than group U50. Group ARMD consumed significantly more protein (t = 3.487, p = 0.001) and zinc (t = 2.252, p = 0.029) than group O50. The ARMD group consumed the highest percentage of specific ocular health supplements, and the U50 group consumed the most multivitamins. Conclusions: We did not detect a deficiency of any specific nutrient in the diets of those with ARMD compared with age- and gender-matched controls. ARMD patients may be aware of research into the use of nutritional supplementation to prevent progression of their condition.
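For reference, the group comparisons reported above correspond to standard unpaired (independent-samples) t-tests, as in this minimal sketch; the intake values below are made up purely for illustration and are not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical daily vitamin C intakes (mg) for two groups (illustrative values only)
u50_vit_c = np.array([45, 60, 38, 72, 55, 48, 66, 40, 58, 51])
o50_vit_c = np.array([70, 85, 92, 64, 78, 88, 73, 95, 81, 69])

t_stat, p_val = stats.ttest_ind(o50_vit_c, u50_vit_c)   # unpaired t-test
print(f"t = {t_stat:.3f}, p = {p_val:.3f}")
```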
Abstract:
The accurate in silico identification of T-cell epitopes is a critical step in the development of peptide-based vaccines, reagents, and diagnostics, and has a direct impact on the success of subsequent experimental work. Epitopes arise as a consequence of complex proteolytic processing within the cell. Prior to being recognized by T cells, an epitope is presented on the cell surface as a complex with a major histocompatibility complex (MHC) protein. A prerequisite for T-cell recognition is therefore that an epitope is also a good MHC binder, so T-cell epitope prediction overlaps strongly with the prediction of MHC binding. In the present study, we compare discriminant analysis and multiple linear regression as algorithmic engines for the definition of quantitative matrices for binding affinity prediction. We apply these methods to peptides which bind the well-studied human MHC allele HLA-A*0201. A matrix combining the outputs of the two methods proved powerfully predictive under cross-validation. The new matrix was also tested on an external set of 160 binders to HLA-A*0201; it recognized 135 (84%) of them.
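A quantitative matrix of this kind can be thought of as one weight per peptide position and amino acid. A minimal way to fit such a matrix by multiple linear regression on one-hot-encoded 9-mers is sketched below; the peptides and affinities are random placeholders, not the HLA-A*0201 data set, and the discriminant-analysis arm is omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(peptide):
    """Encode a 9-mer as a flat 9 x 20 indicator vector (position x amino acid)."""
    x = np.zeros((9, len(AA)))
    for pos, aa in enumerate(peptide):
        x[pos, AA.index(aa)] = 1.0
    return x.ravel()

rng = np.random.default_rng(0)
peptides = ["".join(rng.choice(list(AA), size=9)) for _ in range(200)]  # placeholder peptides
affinities = rng.normal(size=200)                                       # placeholder -log(IC50) values

X = np.array([one_hot(p) for p in peptides])
model = LinearRegression()
print("CV R^2:", cross_val_score(model, X, affinities, cv=5).mean())

model.fit(X, affinities)
matrix = model.coef_.reshape(9, len(AA))   # the fitted quantitative matrix
```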
Abstract:
* The research is supported in part by the INTAS project 04-77-7173, http://www.intas.be
Abstract:
Background and Aims: Consumption of antioxidant nutrients can reduce the risk of progression of age-related macular degeneration (AMD), the leading cause of visual impairment in adults over the age of 50 years in the UK. Lutein and zeaxanthin (L&Z) are of particular interest because they are selectively absorbed by the central retina. The objectives of this study were to analyse the dietary intake of a group of AMD patients, assess their ability to prepare and cook healthy food, and make comparisons with people not affected by AMD. Methods: 158 participants with AMD were recruited via the UK charity The Macular Society, and 50 participants without AMD were recruited from optometric practice. A telephone interview was conducted by trained workers, during which participants completed a 24-hour food diary and answered questions about their cooking and shopping capabilities. Results: In the AMD group, the average L&Z intake was low for both males and females. Those able to cook a hot meal consumed significantly more L&Z than those who were not. Most participants were not consuming the recommended dietary allowance of fibre, calcium, or vitamins D and E, and calorific intake was also lower than the recommendations for their age group. The non-AMD group consumed more kilocalories and more nutrients than the AMD group, but their L&Z intake was similar to that of those with AMD. The main factor that influenced participants’ food choices was personal preference. Conclusion: Despite being an ‘informed’ population, many AMD participants were under-consuming nutrients considered to be useful for their condition. Participants without AMD were more likely to reach recommended daily allowance values for energy and a range of nutrients. It is therefore essential to design more effective dietary education and dissemination methods for people with, and at risk of, AMD.
Abstract:
Spamming has been a widespread problem for social networks. In recent years there has been increasing interest in anti-spamming analysis for microblogs such as Twitter. In this paper we present systematic research on spamming in Sina Weibo, currently a dominant microblogging service provider in China. Our research objectives are to understand the specific spamming behaviors in Sina Weibo and to find approaches to identify and block spammers based on spamming behavior classifiers. To begin the analysis of spamming behaviors, we devise several effective methods to collect a large set of spammer samples, including the use of proactive honeypots and crawlers, keyword-based searching, and buying spammer samples directly from online merchants. We process the database associated with these spammer samples and find three representative spamming behaviors: aggressive advertising, repeated duplicate reposting, and aggressive following. We extract various features and compare the behaviors of spammers and legitimate users with regard to these features, finding that spamming behaviors and normal behaviors have distinct characteristics. Based on these findings, we design an automatic online spammer identification system. Tests with real data demonstrate that the system can effectively detect the spamming behaviors and identify spammers in Sina Weibo.
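The classification step can be pictured with a small sketch: per-account behavioural features loosely inspired by the three behaviours above are fed to an off-the-shelf classifier. The feature definitions, the synthetic numbers and the choice of a random forest are all assumptions for illustration; the paper's actual feature set and classifier are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic per-account features: [ads per 100 posts, duplicate-repost fraction, follows per day]
rng = np.random.default_rng(0)
spammers   = np.column_stack([rng.normal(30, 8, 300), rng.beta(8, 2, 300), rng.normal(150, 40, 300)])
legitimate = np.column_stack([rng.normal(2, 1, 300),  rng.beta(1, 9, 300), rng.normal(5, 3, 300)])

X = np.vstack([spammers, legitimate])
y = np.array([1] * 300 + [0] * 300)        # 1 = spammer, 0 = legitimate user

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```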
Abstract:
Background and objective: Safe prescribing requires accurate and practical information about drugs. Our objective was to measure the utility of current sources of prescribing guidance when used to inform practical prescribing decisions, and to compare current sources of prescribing guidance in the UK with idealized prescribing guidance. Methods: We developed 25 clinical scenarios. Two independent assessors rated and ranked the performance of five common sources of prescribing guidance in the UK when used to answer the clinical scenarios. A third adjudicator facilitated review of any disparities. An idealized list of contents for prescribing guidance was developed and sent for comments to academics and users of prescribing guidance. Following consultation, an operational check was used to assess compliance with the idealized criteria. The main outcome measures were relative utility in answering the clinical scenarios and compliance with the idealized prescribing guidance. Results: Current sources of prescribing guidance used in the UK differ in their utility when measured using clinical scenarios. The British National Formulary (BNF) and EMIS LV were the best-performing sources in terms of both ranking (mean rank 1.24 and 2.20) and rating (100% and 72% of scenarios rated excellent or adequate). Current sources differed in the extent to which they fulfilled the criteria for ideal prescribing guidance, but the BNF, and to a lesser extent EMIS LV, closely matched the criteria. Discussion: We have demonstrated how clinical scenarios can be used to assess prescribing guidance resources. Producers of prescribing guidance documents should consider our idealized template. Prescribers require high-quality information to support their practice. Conclusion: Our test was helpful in distinguishing between prescribing resources. Producers of prescribing guidance should consider the utility of their products to end-users, particularly in the more complex areas where prescribers may need most support. Existing UK prescribing guidance resources differ in their ability to provide assistance to prescribers. © 2010 Blackwell Publishing Ltd.
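The headline statistics (mean rank per source and the percentage of scenarios rated excellent or adequate) are straightforward to compute once each scenario's ratings are tabulated; the sketch below uses invented ratings for five unnamed sources over 25 scenarios purely to show the bookkeeping, not the study data.

```python
import numpy as np

rng = np.random.default_rng(0)
sources = ["Source A", "Source B", "Source C", "Source D", "Source E"]

# Invented data: each row is one scenario; ranks are a permutation of 1..5,
# ratings are 2 = excellent, 1 = adequate, 0 = inadequate
ranks = np.array([rng.permutation(5) + 1 for _ in range(25)])
ratings = rng.integers(0, 3, size=(25, 5))

mean_rank = ranks.mean(axis=0)
pct_adequate_or_better = (ratings >= 1).mean(axis=0) * 100

for name, r, pct in zip(sources, mean_rank, pct_adequate_or_better):
    print(f"{name}: mean rank {r:.2f}, {pct:.0f}% excellent or adequate")
```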
Abstract:
The major purpose of this study was to ascertain how needs assessment findings and methodologies are accepted by public decision makers in the U.S. Virgin Islands. To accomplish this, five different needs assessments were executed: (1) a population survey; (2) a key informants survey; (3) a community forum; (4) rates-under-treatment (RUT); and (5) a social indicators analysis. The assessments measured unmet needs of older persons regarding transportation, in-home care, and socio-recreation services, and determined which of the five methodologies is most costly, time consuming, and valid. The results of a five-way comparative analysis were presented to public sector decision makers, who were surveyed to determine whether they are influenced more by needs assessment findings or by the methodology used, and to ascertain the factors that lead to their acceptance of needs assessment findings and methodologies. The survey results revealed that acceptance of findings and methodology is influenced by the congruency of the findings with decision makers' goals and objectives, the feasibility of the findings, and the credibility of the researcher. The study also found that decision makers are influenced equally by needs assessment findings and methodology; that they prefer population surveys, although these are the most expensive and time consuming of the methodologies; that different types of needs assessments produce different results; and that needs assessment is an essential program planning tool. Executive decision makers are found to be influenced more by management factors than by legal and political factors, while legislative decision makers are influenced more by legal factors. Decision makers overwhelmingly view their leadership style as democratic. A typology of the five needs assessments, highlighting their strengths and weaknesses, is offered as a planning guide for public decision makers.
Abstract:
The physics of self-organization and complexity is manifested on a variety of biological scales, from large ecosystems to the molecular level. Protein molecules exhibit characteristics of complex systems in terms of their structure, dynamics, and function. Proteins have the extraordinary ability to fold to a specific functional three-dimensional shape, starting from a random coil, in a biologically relevant time; how they accomplish this is one of the secrets of life. In this work, theoretical research into understanding this remarkable behavior is discussed. Thermodynamic and statistical mechanical tools are used to investigate protein folding dynamics and stability. Theoretical analyses of the results from computer simulations of the dynamics of a four-helix bundle show that excluded-volume entropic effects are very important in protein dynamics and crucial for protein stability. The dramatic effects of changing the size of sidechains imply that the strategic placement of amino acid residues with a particular size may be an important consideration in protein engineering. Another investigation deals with modeling protein structural transitions as a phase transition. Using finite-size scaling theory, the nature of the unfolding transition of a four-helix bundle protein was investigated and critical exponents for the transition were calculated for various hydrophobic strengths in the core. It is found that the order of the transition changes from first to higher order as the strength of the hydrophobic interaction in the core region is significantly increased. Finally, a detailed kinetic and thermodynamic analysis was carried out on a model two-helix bundle, quantifying the connection between the structural free-energy landscape and folding kinetics. I show how simple protein engineering, by changing the hydropathy of a small number of amino acids, can enhance protein folding by significantly changing the free-energy landscape so that kinetic traps are removed. The results have general applicability in protein engineering as well as in understanding the underlying physical mechanisms of protein folding.
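For context, a textbook finite-size scaling criterion (not necessarily the exact formulation used in this work) distinguishes the order of a transition by how the specific-heat peak grows with the linear system size L, where the correlation-length exponent is defined by the last relation:

```latex
C_V^{\max}(L) \;\propto\;
\begin{cases}
L^{d} & \text{first-order transition},\\[4pt]
L^{\alpha/\nu} & \text{continuous transition},
\end{cases}
\qquad \xi \propto |T - T_c|^{-\nu}.
```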
Abstract:
This dissertation establishes a novel data-driven method to identify language network activation patterns in pediatric epilepsy through the use of Principal Component Analysis (PCA) on functional magnetic resonance imaging (fMRI). A total of 122 subjects’ data sets from five different hospitals were included in the study through a web-based repository site designed at FIU. Research was conducted to evaluate different classification and clustering techniques in identifying hidden activation patterns and their associations with meaningful clinical variables. The results were assessed through agreement analysis with the conventional methods of the lateralization index (LI) and visual rating. What is unique in this approach is the new mechanism designed for projecting language network patterns in the PCA-based decisional space. Synthetic activation maps were randomly generated from real data sets to establish nonlinear decision functions (NDF), which are then used to classify any new fMRI activation map as typical or atypical. The best nonlinear classifier was obtained in a 4D space with a complexity (nonlinearity) degree of 7. Based on the significant association of language dominance and intensities with the top eigenvectors of the PCA decisional space, a new algorithm was deployed to delineate primary cluster members without intensity normalization. In this case, three distinct activation patterns (groups) were identified (average kappa with visual rating 0.65, with LI 0.76), characterized by: (1) the left inferior frontal gyrus (IFG) and left superior temporal gyrus (STG), considered typical for the language task; (2) the IFG, left mesial frontal lobe and right cerebellum, representing a variant left-dominant pattern with higher activation; and (3) the right homologues of the first pattern in Broca's and Wernicke's language areas. Interestingly, group 2 was found to reflect a language compensation mechanism different from reorganization; its high-intensity activation suggests a possible remote effect of a right-hemisphere focus on traditionally left-lateralized functions. In retrospect, this data-driven method provides new insights into mechanisms of brain compensation/reorganization and neural plasticity in pediatric epilepsy.
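A rough analogue of the PCA decisional space plus nonlinear decision function can be sketched as a pipeline: project vectorised activation maps onto a few principal components and fit a degree-7 polynomial classifier in that 4-dimensional space. Everything below (the synthetic maps, the labels, and the use of a kernel SVM in place of the dissertation's NDF construction from synthetic activation maps) is an assumption for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for vectorised fMRI language activation maps (one row per subject)
rng = np.random.default_rng(0)
maps = rng.normal(size=(122, 5000))        # 122 subjects x 5000 voxels (placeholder)
labels = rng.integers(0, 2, size=122)      # 1 = typical, 0 = atypical (placeholder)

# 4-D PCA "decisional space" followed by a nonlinear (degree-7 polynomial) classifier
clf = make_pipeline(PCA(n_components=4), SVC(kernel="poly", degree=7))
print("CV accuracy:", cross_val_score(clf, maps, labels, cv=5).mean())
```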