953 results for statistical narrow band model
Abstract:
Construction of multiple sequence alignments is a fundamental task in Bioinformatics. Multiple sequence alignments are used as a prerequisite in many Bioinformatics methods, and consequently the quality of such methods can depend critically on the quality of the alignment. However, automatic construction of a multiple sequence alignment for a set of remotely related sequences does not always produce biologically relevant alignments. Therefore, there is a need for an objective approach to evaluating the quality of automatically aligned sequences. The profile hidden Markov model is a powerful approach in comparative genomics. In the profile hidden Markov model, the symbol probabilities are estimated at each conserved alignment position. This can increase the dimension of the parameter space and cause an overfitting problem. Both research problems are related to conservation. We have developed statistical measures for quantifying the conservation of multiple sequence alignments. Two types of methods are considered: those identifying conserved residues in an alignment position, and those calculating positional conservation scores. The positional conservation score was exploited in a statistical prediction model for assessing the quality of multiple sequence alignments. The residue conservation score was used as part of the emission probability estimation method proposed for profile hidden Markov models. The predicted alignment quality scores correlated highly with the correct alignment quality scores, indicating that our method is reliable for assessing the quality of any multiple sequence alignment. The comparison of the emission probability estimation method with the maximum likelihood method showed that the number of estimated parameters in the model was dramatically decreased, while the same level of accuracy was maintained.
To conclude, we have shown that conservation can be successfully used in the statistical model for alignment quality assessment and in the estimation of emission probabilities in profile hidden Markov models.
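As a concrete illustration of a positional conservation score, one common choice is the normalized Shannon entropy of an alignment column. This is a standard stand-in for illustration only, not necessarily the exact measure developed in the thesis:

```python
# Minimal sketch: column conservation as 1 minus normalized Shannon entropy.
# Assumes a protein alphabet of 20 residues; gaps ("-") are ignored.
import math
from collections import Counter

def column_conservation(column, alphabet_size=20):
    """Return a conservation score in [0, 1]; 1 means a fully conserved column."""
    counts = Counter(c for c in column if c != "-")  # ignore gap characters
    n = sum(counts.values())
    if n == 0:
        return 0.0                                   # all-gap column
    entropy = -sum((k / n) * math.log2(k / n) for k in counts.values())
    max_entropy = math.log2(min(alphabet_size, n)) if n > 1 else 1.0
    return 1.0 - (entropy / max_entropy if max_entropy > 0 else 0.0)

print(column_conservation("AAAA"))  # fully conserved column
print(column_conservation("ACDE"))  # maximally diverse column for four sequences
```

Scores like this, computed per alignment position, are the kind of positional statistic that a quality-assessment model can aggregate over columns.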
Abstract:
The optimal design of a heat exchanger system is based on given model parameters together with given standard ranges for the design variables. The goals were to minimize the Life Cycle Cost (LCC) function, which represents the price of the saved energy, and to maximize the momentary heat recovery output with the given constraints satisfied, while taking into account the uncertainty in the models. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) for the design optimization of the system is presented and implemented in the Matlab environment. Markov chain Monte Carlo (MCMC) methods are also used to account for the uncertainty in the models. The results show that the price of saved energy can be optimized. A wet heat exchanger is found to be more efficient and beneficial than a dry heat exchanger, even though its construction is expensive (160 EUR/m2) compared to the construction of a dry heat exchanger (50 EUR/m2). It was found that a longer lifetime favours higher CAPEX and lower OPEX, and vice versa, and the effect of the uncertainty in the models was identified in a simplified case of minimizing the area of a dry heat exchanger.
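The selection principle at the core of NSGA-II can be sketched with a Pareto-dominance test over two minimization objectives (e.g. LCC and the negative of the heat recovery output). This Python fragment illustrates only the idea, not the full Matlab implementation used in the work; the design points are made-up numbers:

```python
# Minimal sketch of Pareto dominance and first-front extraction (NSGA-II core),
# for two objectives that are both minimized.

def dominates(a, b):
    """True if objective vector a dominates b under minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Extract the non-dominated (Pareto) front from a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical designs: (life cycle cost, -heat recovery output)
designs = [(100.0, -5.0), (120.0, -7.0), (110.0, -4.0), (130.0, -7.0)]
print(first_front(designs))
```

NSGA-II repeats this sorting over successive fronts and adds a crowding-distance tie-breaker; the dominance test above is the building block.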
Abstract:
Purpose The purpose of the present study was to evaluate the retinal toxicity of a single dose of intravitreal docosahexaenoic acid (DHA) in rabbit eyes over a short-term period. Methods Sixteen New Zealand albino rabbits were selected for this pre-clinical study. Six concentrations of DHA (Brudy Laboratories, Barcelona, Spain) were prepared: 10 mg/50 µl, 5 mg/50 µl, 2.5 mg/50 µl, 50 µg/50 µl, 25 µg/50 µl, and 5 µg/50 µl. Each concentration was injected intravitreally into the right eye of two rabbits. As a control, the vehicle solution was injected into one eye of four animals. Retinal safety was studied by slit-lamp examination and electroretinography. All the rabbits were euthanized one week after the intravitreal injection of DHA, and the eyeballs were processed for morphologic and morphometric histological examination by light microscopy. At the same time, aqueous and vitreous humor samples were taken to quantify the concentration of omega-3 acids by gas chromatography. Statistical analysis was performed with SPSS 21.0. Results Slit-lamp examination revealed an important inflammatory reaction in the anterior chamber of the rabbits injected with the higher concentrations of DHA (10 mg/50 µl, 5 mg/50 µl, 2.5 mg/50 µl). Lower concentrations showed no inflammation. Electroretinography and histological studies showed no significant differences between control and DHA-injected groups, except for the group injected with 50 µg/50 µl. Conclusions Our results indicate that administration of intravitreal DHA is safe in the albino rabbit model up to the maximum tolerated dose of 25 µg/50 µl. Further studies should be performed in order to evaluate the effect of intravitreal injection of DHA as a treatment, alone or in combination, for different retinal diseases.
Abstract:
The identifiability of the parameters of a heat exchanger model without phase change was studied in this Master's thesis using synthetic data. A fast, two-step Markov chain Monte Carlo (MCMC) method was tested on a couple of case studies and on a heat exchanger model. The two-step MCMC method worked well and decreased the computation time compared to the traditional MCMC method. The effect of the measurement accuracy of certain control variables on the identifiability of the parameters was also studied. The accuracy used did not seem to have a remarkable effect on identifiability. The use of the posterior distribution of the parameters across different heat exchanger geometries was studied. It would be computationally most efficient to use the same posterior distribution for different geometries in the optimisation of heat exchanger networks. According to the results, this was possible when the frontal surface areas were the same across geometries. In the other cases the same posterior distribution can still be used for optimisation, but it will give a wider predictive distribution as a result. For condensing surface heat exchangers, the numerical stability of the simulation model was studied, and as a result a stable algorithm was developed.
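The accept/reject mechanics underlying this kind of MCMC parameter study can be sketched with a basic random-walk Metropolis sampler. The quadratic log-posterior below is a toy stand-in for the heat exchanger model likelihood, and the thesis's two-step structure is not reproduced:

```python
# Minimal random-walk Metropolis sketch: propose x' ~ N(x, step) and accept
# with probability min(1, posterior ratio). Toy target, for illustration only.
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=1):
    rng = random.Random(seed)
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept if the proposal is at least as probable, else with ratio prob.
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy log-posterior: a unit-variance normal centred at 2.0.
chain = metropolis(lambda t: -0.5 * (t - 2.0) ** 2, x0=0.0, n_steps=5000)
mean = sum(chain[1000:]) / len(chain[1000:])  # posterior mean after burn-in
```

For an identifiability study, the spread of the resulting chain around each parameter (and the correlations between chains) is what indicates how well the data constrain the parameters.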
Abstract:
Fusarium Head Blight (FHB) is a disease of great concern in wheat (Triticum aestivum). Due to its relatively narrow susceptible phase and its environmental dependence, the pathosystem is suitable for modeling. In the present work, a mechanistic model for estimating an infection index of FHB was developed. The model is process-based, driven by rates, rules and coefficients for estimating the dynamics of flowering, airborne inoculum density and infection frequency. The latter is a function of temperature during an infection event (IE), which is defined based on a combination of daily records of precipitation and mean relative humidity. The daily infection index is the product of the daily proportion of susceptible tissue available, the infection frequency and the spore cloud density. The model was evaluated with an independent dataset of epidemics recorded in experimental plots (five years and three planting dates) at Passo Fundo, Brazil. Four models using different factors were tested, and the results showed that all were able to explain variation in disease incidence and severity. A model that uses a correction factor extending host susceptibility, together with daily spore cloud density to account for post-flowering infections, was the most accurate, explaining 93% of the variation in disease severity and 69% in disease incidence according to regression analysis.
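The stated definition of the daily infection index, the product of the proportion of susceptible tissue, the infection frequency and the spore cloud density on infection-event days, can be sketched as follows. The rain and humidity thresholds are illustrative placeholders, not the calibrated values of the Passo Fundo model:

```python
# Minimal sketch of the daily FHB infection index. The IE rule and the 80% RH
# threshold are assumptions for illustration; the model's actual rule combines
# daily precipitation and mean relative humidity records.

def is_infection_event(precip_mm, mean_rh_pct, rh_threshold=80.0):
    """An infection-event (IE) day: rainfall recorded under humid conditions."""
    return precip_mm > 0.0 and mean_rh_pct >= rh_threshold

def daily_infection_index(susceptible_frac, infection_freq, spore_density,
                          precip_mm, mean_rh_pct):
    """Product of susceptible tissue fraction, infection frequency and
    spore cloud density; zero on non-IE days."""
    if not is_infection_event(precip_mm, mean_rh_pct):
        return 0.0
    return susceptible_frac * infection_freq * spore_density

print(daily_infection_index(0.4, 0.5, 0.8, precip_mm=6.0, mean_rh_pct=90.0))
```

Summing this index over the susceptible window gives the kind of seasonal infection pressure that can then be regressed against observed incidence and severity.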
Abstract:
The Fed model is a widely used market valuation model. It is often used in market analysis of the S&P 500 index as a shorthand measure of the attractiveness of equity, and as a timing device for allocating funds between equity and bonds. The Fed model assumes a fixed relationship between the bond yield and the earnings yield, and this relationship is often taken for granted in market valuation. In this paper we test the Fed model from a historical perspective on the European markets; the markets of the United States are also included for comparison. The purpose of the tests is to determine whether the Fed model and its underlying assumptions hold in different markets. The various tests are made on time-series data ranging from 1973 to the end of 2008. The statistical methods used are regression analysis, cointegration analysis and Granger causality. The empirical results do not give strong support to the Fed model. The underlying relationships assumed by the Fed model are not statistically valid in most of the markets examined, and therefore the model is not generally valid for valuation purposes. The results vary between markets, which gives reason to question the general use of the Fed model in different market conditions and in different markets.
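The Fed model's core assumption can be checked with a simple regression of the earnings yield on the bond yield, where a slope near one (with a small intercept) is consistent with the assumed fixed relationship. The series below are made-up numbers for illustration, not the 1973-2008 data analysed in the paper:

```python
# Minimal OLS sketch of the Fed model regression: earnings yield on bond yield.
# Cointegration and Granger-causality tests, as used in the paper, would be
# further steps on top of this.

def ols_slope_intercept(x, y):
    """Ordinary least squares slope and intercept for a simple regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    beta = sxy / sxx
    return beta, my - beta * mx

bond_yield     = [4.0, 4.5, 5.0, 5.5, 6.0]   # hypothetical %, per period
earnings_yield = [4.2, 4.4, 5.1, 5.6, 5.9]   # hypothetical %, per period
beta, alpha = ols_slope_intercept(bond_yield, earnings_yield)
print(beta, alpha)  # a slope near 1 would be consistent with the Fed model
```

In practice the regression alone is not enough: with non-stationary yield series, a cointegration test is needed before the estimated relationship can be trusted, which is why the paper combines all three methods.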
Abstract:
The purpose of the research project The poetics of the talking book is to contribute to the knowledge about patterns of understanding in young adults' reception of fiction that they listened to as audio books. The problem explored was: how do different groups of listeners receive fictive text presented as a talking book with variations in the use of voice, engagement and sound effects? The problem formulation rendered four specific research questions: 1. What patterns can be identified in the listeners' answers regarding story structure and cognitive content, in a comparative perspective comprising the different reading styles in the taped versions of the text? 2. What patterns of understanding in interpretative reading can be identified in different listeners? 3. What thoughts do the listeners have about what the talking book should sound like? 4. What affordances for young adults with the functional disability of mild mental retardation can be made visible through guided literature conversations? The theoretical frame of reference was formed by text-reader-oriented literary theory, psychological schema theory, and research on voice quality and communication. The project was carried out in two steps. The first phase was to produce the audio books, in two variations of reading style, of three short stories, each with an existential theme. The second step comprised interviews with 32 young adults (a special group with a reading handicap in the form of mild mental retardation, and a reference group with no handicap). The interviews, formed as literary conversations, were carried out three times during one year. The phenomenological-hermeneutic approach focused on the life worlds of the participants as meaning-seeking beings. The analysis was carried out using method triangulation, mainly phenomenological meaning concentration.
The double hermeneutics in use when interpreting the interpretations of the participants revealed a capacity for aesthetic reading of fiction in the special group as well as in the reference group. The aesthetic qualities were found sufficient in all variations of reading by the professional readers of the audio book the participants listened to. The young adults could also describe how they wanted the audio book to sound: just as if you were reading yourself. A model describing the analytical steps and concepts in use was a further result, and can serve as an outline of a poetics for the talking book. An unexpected research result was how important the guided literary conversation turned out to be in realising the affordances given by the texts for exploring existential themes in the young adults' life worlds. Thus the research project can be positioned as a piece of emancipatory research, stressing the importance of including this group of young adults in society's conversation about culture and meaning.
Abstract:
A study of the spatial variability of soil resistance to penetration (RSP) data was conducted for the 0.0-0.1 m, 0.1-0.2 m and 0.2-0.3 m depth layers, using univariate statistical methods, i.e., traditional geostatistics, and forming thematic maps by ordinary kriging for each layer of the study. The RSP in the 0.2-0.3 m layer was then analyzed through a spatial linear model (SLM) that took the 0.0-0.1 m and 0.1-0.2 m layers as covariates, yielding an estimation model and a thematic map by universal kriging. The thematic maps of the RSP in the 0.2-0.3 m layer constructed by both methods were compared using accuracy measures obtained from the error and confusion matrices. There are similarities between the thematic maps. All maps showed that the RSP is higher in the northern region.
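The map-comparison step can be sketched as building a confusion (error) matrix between the two thematic maps and computing overall accuracy. The RSP class labels and pixel values below are illustrative only:

```python
# Minimal sketch: confusion matrix and overall accuracy between two thematic
# maps, given as flat lists of per-cell class labels.
from collections import Counter

def confusion_matrix(map_a, map_b, classes):
    """Rows: classes in map_a; columns: classes in map_b."""
    counts = Counter(zip(map_a, map_b))
    return [[counts[(r, c)] for c in classes] for r in classes]

def overall_accuracy(matrix):
    """Fraction of cells classified identically by both maps (diagonal share)."""
    total = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    return diag / total

classes = ["low", "medium", "high"]                       # hypothetical RSP classes
map_ok = ["low", "low", "medium", "high", "high", "medium"]  # ordinary kriging
map_uk = ["low", "medium", "medium", "high", "high", "high"] # universal kriging
cm = confusion_matrix(map_ok, map_uk, classes)
print(cm, overall_accuracy(cm))
```

Kappa and per-class producer's/user's accuracies are usually derived from the same matrix when two thematic maps are compared.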
Abstract:
Biotechnology has been recognized as a key strategic technology for industrial growth. The industry is heavily dependent on basic research. Finland continues to rank in the top 10 of Europe's most innovative countries in terms of tax policy, education system, infrastructure and the number of patents issued. Despite these excellent statistical results, the output of this innovativeness falls short of what could be expected. Research on the issues hindering output creation has already been done, and the identifiable weaknesses in Finland's National Innovation System are the non-existent growth of entrepreneurship and the missing internationalization. Finland is proven to have all the enablers of the innovation policy tools, but it lacks the incentives and rewards to push the enablers, such as knowledge and human capital, forward. Science Parks are the biggest operators among research institutes in the Finnish Science and Technology system. They exist with the purpose of speeding up the commercialization of biotechnology innovations, which usually involve technological uncertainty, technical inexperience, business inexperience and high technology cost. Managing innovation only internally is a rather historic approach; the current trend is towards an open innovation model with strong triple helix linkages. The evident problems in innovation management within the biotechnology industry are examined through a case study approach, including analysis of semi-structured interviews with biotechnology and business experts from the Turku School of Economics. The results from the interviews supported the theoretical implications as well as the conclusions derived from the pilot survey, which focused on the companies inside the Turku Science Park network. One major issue that Finland's National Innovation System is struggling with is that it is technology-driven, not business-pulled.
Another problem is the university evaluation scale, which focuses more on the number of graduates and short-term factors, when it should put more emphasis on long-term cooperation success, such as triple helix connections with interaction and knowledge distribution. The results of this thesis indicate that some structural changes are indeed required in Finland's National Innovation System and innovation policy in order to generate successful biotechnology companies and innovation output. There is a lack of joint output and scales of success, a lack of experienced people, a lack of language skills, a lack of business knowledge and a lack of growth companies.
Abstract:
Reliable predictions of the remaining lives of civil or mechanical structures subjected to fatigue damage are very difficult to make. In general, fatigue damage is extremely sensitive to random variations in material mechanical properties, environment and loading. These variations may induce large dispersions when the structural fatigue life has to be predicted. Wirsching (1970) mentions dispersions of the order of 30 to 70% of the mean calculated life. This paper introduces a model to estimate the fatigue damage dispersion based on known statistical distributions of the fatigue parameters (material properties and loading). The model is developed by expanding into a Taylor series the set of equations that describe fatigue damage for crack initiation.
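The first-order Taylor-series idea behind such a dispersion model can be sketched as delta-method variance propagation, Var(D) ≈ Σ (∂D/∂xᵢ)² σᵢ². The linear damage function below is a placeholder for the crack-initiation equations, chosen so that the first-order approximation is exact:

```python
# Minimal sketch of first-order (Taylor-series) variance propagation for a
# damage function D(x1..xn) with independent random inputs. The damage
# function here is an illustrative placeholder, not the paper's formulation.

def variance_first_order(f, means, sigmas, h=1e-6):
    """Delta-method variance of f at the mean point: sum of (dD/dxi * si)^2."""
    var = 0.0
    for i, (m, s) in enumerate(zip(means, sigmas)):
        x_hi = list(means); x_hi[i] = m + h
        x_lo = list(means); x_lo[i] = m - h
        grad = (f(x_hi) - f(x_lo)) / (2 * h)   # central finite difference
        var += (grad * s) ** 2
    return var

# Placeholder damage function: linear, so the approximation is exact here.
damage = lambda x: 2.0 * x[0] + 3.0 * x[1]
v = variance_first_order(damage, means=[1.0, 1.0], sigmas=[0.1, 0.2])
print(v)  # analytic value: (2*0.1)^2 + (3*0.2)^2 = 0.40
```

For a nonlinear damage law the same expansion gives only a first-order estimate of the dispersion, which is exactly the regime where the reported 30 to 70% spreads arise.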
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, over-reporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters up to the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to the design weights reduces the bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, for researchers using surveys for event history analysis, and for researchers developing methods to correct for non-sampling biases in event history data.
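The estimator that the IPCW correction plugs into can be sketched as a weighted Kaplan-Meier, in which each subject carries a weight such as the inverse of its estimated probability of remaining uncensored. Estimating those weights is the substantive IPCW step and is not shown here; the spell data below are made-up:

```python
# Minimal sketch of a weighted Kaplan-Meier estimator over distinct event
# times. events: 1 = event observed, 0 = censored. With unit weights this
# reduces to the ordinary Kaplan-Meier estimator.

def weighted_km(times, events, weights):
    """Return (time, survival) pairs at each observed event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = sum(weights)              # weighted risk set at the start
    surv, curve = 1.0, []
    for i in order:
        if events[i] == 1 and at_risk > 0:
            surv *= 1.0 - weights[i] / at_risk
            curve.append((times[i], surv))
        at_risk -= weights[i]           # subject leaves the risk set
    return curve

# Hypothetical spells; unit weights give the classical estimator.
curve = weighted_km(times=[2, 3, 5, 8], events=[1, 0, 1, 1], weights=[1, 1, 1, 1])
print(curve)
```

Replacing the unit weights by inverse censoring-survival probabilities is what corrects the estimator for dependent censoring, the property verified in the simulation study.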
Abstract:
In this research, the effectiveness of Naive Bayes and Gaussian Mixture Model classifiers for segmenting exudates in retinal images is studied, and the results are evaluated with metrics commonly used in medical imaging. A color variation analysis of retinal images is also carried out to find how effectively retinal images can be segmented using only the color information of the pixels.
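A pixel-colour classifier of the Naive Bayes kind studied here can be sketched with a small Gaussian naive Bayes model over RGB channels. The training colours below are made-up values for illustration, not real fundus data:

```python
# Minimal Gaussian naive Bayes sketch for pixel-colour classification:
# per-class channel means/variances plus class priors, maximum log-likelihood
# prediction. Pixels are (R, G, B) tuples in 0-255.
import math

def fit_gnb(X, y):
    """Fit per-class channel means, variances (floored) and priors."""
    model = {}
    for cls in set(y):
        pts = [x for x, lab in zip(X, y) if lab == cls]
        means = [sum(col) / len(pts) for col in zip(*pts)]
        var = [max(sum((v - m) ** 2 for v in col) / len(pts), 1e-3)
               for col, m in zip(zip(*pts), means)]
        model[cls] = (means, var, len(pts) / len(X))
    return model

def predict(model, x):
    def log_lik(cls):
        means, var, prior = model[cls]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, var):
            ll += -0.5 * (math.log(2 * math.pi * s2) + (v - m) ** 2 / s2)
        return ll
    return max(model, key=log_lik)

# Bright yellowish pixels as "exudate", dark reddish pixels as "background".
X = [(220, 210, 80), (230, 220, 90), (90, 30, 20), (100, 40, 30)]
y = ["exudate", "exudate", "background", "background"]
model = fit_gnb(X, y)
print(predict(model, (225, 215, 85)))
```

A Gaussian Mixture Model classifier differs mainly in fitting several Gaussian components per class instead of a single diagonal Gaussian, which captures multi-modal colour distributions better.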
Abstract:
This research concerns different statistical methods that help to increase the demand forecasting accuracy of company X's forecasting model. The current forecasting process was analyzed in detail, and as a result a graphical scheme of its logical algorithm was developed. Based on the analysis of the algorithm and of the forecasting errors, the potential directions for future improvements to the model's accuracy were gathered into a complete list. Three improvement directions were chosen for further practical research, and on their basis three test models were created and verified. The novelty of this work lies in the methodological approach of the original analysis of the model, which identified its critical points, as well as in the uniqueness of the developed test models. The results of the study formed the basis of a grant from the Government of St. Petersburg.
Abstract:
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance on FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transform of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
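The band-pass filtering step applied to the FB coefficients can be sketched as zeroing coefficients whose radial index lies outside the pass band before the inverse transform. The coefficient layout (a dict keyed by (radial, angular) index) is an assumed simplification, not the study's actual data structure:

```python
# Minimal sketch: band-pass filtering of Fourier-Bessel coefficients by index.
# axis=0 selects the radial index, axis=1 the angular index, so the same
# helper covers the radially and angularly filtered stimulus conditions.

def bandpass_fb(coeffs, low, high, axis=0):
    """Keep only coefficients whose index along `axis` lies in [low, high]."""
    return {idx: (c if low <= idx[axis] <= high else 0.0)
            for idx, c in coeffs.items()}

# Hypothetical 5x3 coefficient grid, all coefficients set to 1.0.
coeffs = {(n, m): 1.0 for n in range(5) for m in range(3)}
filtered = bandpass_fb(coeffs, low=1, high=2)       # radial band-pass
kept = sum(1 for c in filtered.values() if c != 0.0)
print(kept)
```

The filtered coefficient set would then be passed to the inverse FB transform (not shown) to produce the band-limited test stimulus.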
Abstract:
The present study evaluated the effects of exercise training (ET), performed by rats on a 10-week high-fructose diet, on metabolic, hemodynamic, and autonomic parameters, as well as on intraocular pressure (IOP). Male Wistar rats receiving fructose overload in drinking water (100 g/L) were concomitantly trained on a treadmill for 10 weeks (FT group) or kept sedentary (F group), and a control group (C) was kept in normal laboratory conditions. The metabolic evaluation comprised the Lee index, glycemia, and the insulin tolerance test (KITT). Arterial pressure (AP) was measured directly, and systolic AP variability analysis was performed to determine peripheral autonomic modulation. ET attenuated the impairments in metabolic parameters, AP, IOP, and ocular perfusion pressure (OPP) induced by fructose overload (FT vs. F). The increase in peripheral sympathetic modulation in F rats, demonstrated by systolic AP variance and the low frequency (LF) band (F: 37±2, 6.6±0.3 vs. C: 26±3, 3.6±0.5 mmHg2), was prevented by ET (FT: 29±3, 3.4±0.7 mmHg2). Positive correlations were found between the LF band and right IOP (r=0.57, P=0.01) and left IOP (r=0.64, P=0.003). Negative correlations were noted between KITT values and right IOP (r=-0.55, P=0.01) and left IOP (r=-0.62, P=0.005). ET in rats effectively prevented the metabolic abnormalities and the AP and IOP increases promoted by a high-fructose diet. In addition, the ocular benefits triggered by exercise training were associated with peripheral autonomic improvement.