942 results for Linear Static Analysis


Relevance:

40.00%

Publisher:

Abstract:

Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, as well as knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study [5]. This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
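
As a minimal illustration of these cautions (not taken from the article), the Python sketch below fits a multiple regression on synthetic data, reports R squared, and applies the rough 5-10 subjects-per-variable rule; the variable counts and data are invented for illustration only.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_subjects, n_predictors = 120, 6              # 120 / 6 = 20 subjects per predictor
    X = rng.normal(size=(n_subjects, n_predictors))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=2.0, size=n_subjects)

    model = LinearRegression().fit(X, y)
    r_squared = model.score(X, y)                  # proportion of variance explained

    print(f"R^2 = {r_squared:.2f}")
    if r_squared < 0.50:
        print("R^2 below 50%: treat the analysis as exploratory rather than confirmatory.")
    if n_subjects / n_predictors < 5:
        print("Fewer than 5 subjects per predictor: the sample is likely too small.")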

Relevance:

40.00%

Publisher:

Abstract:

Antigenic peptide is presented to a T-cell receptor (TCR) through the formation of a stable complex with a major histocompatibility complex (MHC) molecule. Various predictive algorithms have been developed to estimate a peptide's capacity to form a stable complex with a given MHC class II allele, a capability integral to vaccine design strategies. These have previously incorporated such computational techniques as quantitative matrices and neural networks. A novel predictive technique is described, which uses molecular modeling of predetermined crystal structures to estimate the stability of an MHC class II-peptide complex. The structures are remodeled, energy minimized, and annealed before the energetic interaction is calculated.
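
The described workflow (remodel a crystal-structure template, energy minimize, anneal, then score the MHC-peptide interaction) can be outlined as a pipeline; the Python sketch below is purely structural, every function body is a labelled placeholder rather than real molecular-modelling code, and the template identifier and peptide are illustrative assumptions.

    def remodel(template, peptide):
        # Placeholder: thread the candidate peptide into the class II binding groove
        # of a predetermined crystal-structure template.
        return {"template": template, "peptide": peptide}

    def energy_minimize(complex_model):
        # Placeholder: relax clashes introduced by remodelling.
        return complex_model

    def anneal(complex_model):
        # Placeholder: simulated annealing before scoring.
        return complex_model

    def interaction_energy(complex_model):
        # Placeholder standing in for the calculated MHC-peptide interaction energy.
        return -float(len(complex_model["peptide"]))

    def predict_stability(peptide, template="1DLH"):   # illustrative class II template ID
        model = remodel(template, peptide)
        model = energy_minimize(model)
        model = anneal(model)
        return interaction_energy(model)

    print(predict_stability("PKYVKQNTLKLAT"))           # illustrative 13-mer peptide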

Relevance:

40.00%

Publisher:

Abstract:

Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means to gain insights into the complicated processes making up a petroleum system. Typically, linear visualisation methods such as principal components analysis, linked plots, or brushing are used. These methods cannot directly be employed when dealing with missing data, and they struggle to capture global non-linear structures in the data, although they can do so locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures such as the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation capabilities for missing data. Additionally, an extensive benchmark study of the missing data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
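
One simple way to realise the arbitrary-projection initialisation described above (a sketch under stated assumptions, not the thesis implementation) is to take the Isomap coordinates of each data point as latent positions, evaluate an RBF basis on them, and solve a ridge-regularised least-squares problem for the GTM weight matrix; the grid size, basis width, and regularisation constant below are arbitrary illustrative choices.

    import numpy as np
    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import Isomap

    X, _ = make_swiss_roll(n_samples=500, random_state=0)   # (500, 3) data space
    Z = Isomap(n_components=2).fit_transform(X)             # arbitrary 2-D projection
    Z = (Z - Z.mean(0)) / Z.std(0)                          # rescale to the latent square

    def rbf_basis(latent, centres, width):
        d2 = ((latent[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    # Grid of RBF centres covering the 2-D latent space.
    g = np.linspace(-1.5, 1.5, 5)
    centres = np.array([[a, b] for a in g for b in g])      # 25 basis functions
    Phi = rbf_basis(Z, centres, width=0.5)                  # (500, 25)

    # Initialise the weight matrix W so the mapping reproduces the data from the
    # projected latent positions (ridge-regularised least squares).
    lam = 1e-3
    W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ X)
    X_hat = Phi @ W
    print("initial reconstruction RMSE:", np.sqrt(((X - X_hat) ** 2).mean()))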

Relevance:

40.00%

Publisher:

Abstract:

The accurate in silico identification of T-cell epitopes is a critical step in the development of peptide-based vaccines, reagents, and diagnostics. It has a direct impact on the success of subsequent experimental work. Epitopes arise as a consequence of complex proteolytic processing within the cell. Prior to being recognized by T cells, an epitope is presented on the cell surface as a complex with a major histocompatibility complex (MHC) protein. A prerequisite for T-cell recognition is therefore that an epitope is also a good MHC binder. Thus, T-cell epitope prediction overlaps strongly with the prediction of MHC binding. In the present study, we compare discriminant analysis and multiple linear regression as algorithmic engines for the definition of quantitative matrices for binding affinity prediction. We apply these methods to peptides that bind the well-studied human MHC allele HLA-A*0201. A matrix that combines the results of the two methods proved powerfully predictive under cross-validation. The new matrix was also tested on an external set of 160 binders to HLA-A*0201; it was able to recognize 135 (84%) of them.
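
As a hedged sketch of the quantitative-matrix idea (multiple linear regression only; the discriminant analysis step and the matrix combination are omitted, and the peptides and affinities below are random placeholders rather than the HLA-A*0201 training data), a position-specific scoring matrix can be read off from the regression coefficients of one-hot position/residue indicators:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    AA = "ACDEFGHIKLMNPQRSTVWY"
    LEN = 9

    def encode(peptide):
        """One-hot encode a 9-mer: 9 positions x 20 amino acids."""
        x = np.zeros(LEN * len(AA))
        for pos, aa in enumerate(peptide):
            x[pos * len(AA) + AA.index(aa)] = 1.0
        return x

    rng = np.random.default_rng(1)
    peptides = ["".join(rng.choice(list(AA), LEN)) for _ in range(300)]
    log_affinity = rng.normal(size=300)        # placeholder binding-affinity values

    X = np.array([encode(p) for p in peptides])
    model = LinearRegression().fit(X, log_affinity)

    # The fitted coefficients, reshaped, form the additive quantitative matrix:
    # one contribution per (position, amino acid) pair.
    qmatrix = model.coef_.reshape(LEN, len(AA))
    score = qmatrix[np.arange(LEN), [AA.index(a) for a in peptides[0]]].sum() + model.intercept_
    print(f"predicted score for {peptides[0]}: {score:.2f}")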

Relevance:

40.00%

Publisher:

Abstract:

* The research is supported in part by the INTAS 04-77-7173 project, http://www.intas.be

Relevance:

40.00%

Publisher:

Abstract:

INTRODUCTION: Upper airway measurement can be important for the diagnosis of breathing disorders. Acoustic reflection (AR) is an accepted tool for studying the airway. Our objective was to investigate the differences between cone-beam computed tomography (CBCT) and AR in calculating airway volumes and areas. METHODS: Subjects with CBCT images prescribed as part of their records were also asked to have AR performed. A total of 59 subjects (mean age, 15 ± 3.8 years) had their upper airway (5 areas) measured from CBCT images, acoustic rhinometry, and acoustic pharyngometry. Volumes and minimal cross-sectional areas were extracted with software and compared. RESULTS: Intraclass correlation on 20 randomly selected subjects, remeasured 2 weeks apart, showed high reliability (r > 0.77). Means of total nasal volume were significantly different between the 2 methods (P = 0.035), but anterior nasal volume and minimal cross-sectional area showed no differences (P = 0.532 and P = 0.066, respectively). Pharyngeal volume showed significant differences (P = 0.01) with high correlation (r = 0.755), whereas pharyngeal minimal cross-sectional area showed no differences (P = 0.109). The pharyngeal volume difference may not be clinically significant, since it is 758 mm³ against measurements with means of 11,000 ± 4000 mm³. CONCLUSIONS: CBCT is an accurate method for measuring anterior nasal volume, nasal minimal cross-sectional area, pharyngeal volume, and pharyngeal minimal cross-sectional area.
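
For illustration only, the sketch below shows one way such a paired comparison and correlation between the two methods could be computed; the arrays are synthetic placeholders, and the abstract does not state which statistical software or exact tests were used.

    import numpy as np
    from scipy.stats import ttest_rel, pearsonr

    rng = np.random.default_rng(2)
    cbct_volume = rng.normal(11000, 4000, size=59)                 # pharyngeal volume, mm^3
    ar_volume = cbct_volume - 758 + rng.normal(0, 1500, size=59)   # systematic offset plus noise

    t_stat, p_value = ttest_rel(cbct_volume, ar_volume)            # paired comparison of means
    r, _ = pearsonr(cbct_volume, ar_volume)                        # agreement between methods
    print(f"paired t-test p = {p_value:.3f}, Pearson r = {r:.2f}")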

Relevance:

40.00%

Publisher:

Abstract:

The first objective of this research was to develop closed-form and numerical probabilistic methods of analysis that can be applied to otherwise conventional analyses of unreinforced and geosynthetic reinforced slopes and walls. These probabilistic methods explicitly account for the effects of random variability of soil and reinforcement properties, spatial variability of the soil, and cross-correlation between soil input parameters on the probability of failure. The quantitative impact of simultaneously considering random and/or spatial variability in soil properties in combination with cross-correlation in soil properties is investigated for the first time in the research literature. Depending on the magnitude of these statistical descriptors, margins of safety based on conventional notions of safety may be very different from margins of safety expressed in terms of probability of failure (or reliability index). The thesis work also shows that intuitive notions of margin of safety using the conventional factor of safety and the probability of failure can be brought into alignment when cross-correlation between soil properties is considered in a rigorous manner. The second objective of this thesis work was to develop a general closed-form solution to compute the true probability of failure (or reliability index) of a simple linear limit state function with one load term and one resistance term, expressed first in general probabilistic terms and then migrated to an LRFD format for the purpose of LRFD calibration. The formulation considers contributions to the probability of failure due to model type, uncertainty in bias values, bias dependencies, uncertainty in estimates of nominal values for correlated and uncorrelated load and resistance terms, and the average margin of safety expressed as the operational factor of safety (OFS). Bias is defined as the ratio of measured to predicted value. Parametric analyses were carried out to show that ignoring possible correlations between random variables can lead to conservative (safe) values of resistance factor in some cases and to non-conservative (unsafe) values in others. Example LRFD calibrations were carried out using different load and resistance models for the pullout internal stability limit state of steel strip and geosynthetic reinforced soil walls, together with matching bias data reported in the literature.
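
For context, the sketch below computes the textbook reliability index and probability of failure for the linear limit state g = R - Q with uncorrelated lognormal load and resistance, checked by Monte Carlo; the means and coefficients of variation are illustrative, and this is not the specific closed-form solution or LRFD calibration developed in the thesis.

    import numpy as np
    from scipy.stats import norm

    mu_R, cov_R = 1.8, 0.25   # mean and COV of resistance (illustrative, bias-adjusted)
    mu_Q, cov_Q = 1.0, 0.20   # mean and COV of load

    # Closed-form reliability index for uncorrelated lognormal R and Q.
    beta = (np.log((mu_R / mu_Q) * np.sqrt((1 + cov_Q**2) / (1 + cov_R**2)))
            / np.sqrt(np.log((1 + cov_R**2) * (1 + cov_Q**2))))
    pf_closed = norm.cdf(-beta)

    # Monte Carlo check of the same probability of failure, P[R < Q].
    rng = np.random.default_rng(3)
    def lognormal(mu, cov, n):
        sigma_ln = np.sqrt(np.log(1 + cov**2))
        mu_ln = np.log(mu) - 0.5 * sigma_ln**2
        return rng.lognormal(mu_ln, sigma_ln, n)

    R = lognormal(mu_R, cov_R, 1_000_000)
    Q = lognormal(mu_Q, cov_Q, 1_000_000)
    pf_mc = np.mean(R < Q)
    print(f"beta = {beta:.2f}, Pf (closed form) = {pf_closed:.2e}, Pf (MC) = {pf_mc:.2e}")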

Relevance:

40.00%

Publisher:

Abstract:

This paper describes an analysis of facial description in static images and video streams. The still-image context is first analyzed in order to decide the optimal classifier configuration for each problem: gender recognition, race classification, and glasses and moustache presence. These results are later applied to significant samples that are automatically extracted in real time from video streams, achieving promising results in the facial description of 70 individuals in terms of gender, race, and the presence of glasses and a moustache.
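
A hedged sketch of how an optimal classifier configuration per problem might be selected (cross-validated grid search over an SVM, which is an assumption; the paper's actual descriptors and classifiers are not specified here, and the features and labels below are synthetic placeholders):

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    features = rng.normal(size=(400, 64))    # stand-in for normalized face descriptors
    gender = rng.integers(0, 2, size=400)    # stand-in labels: 0 = female, 1 = male

    # Pick the best-performing configuration for this problem by cross-validation.
    search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=5)
    search.fit(features, gender)
    print("best configuration:", search.best_params_, "cv accuracy:", round(search.best_score_, 2))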

Relevance:

40.00%

Publisher:

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case for software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared among a number of test cases failing for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause from the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back to the root cause. A debugging tool is created to enable developers to use the approach and to integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need only inspect a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
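
A minimal sketch of the core idea, assuming simple statement-level traces: fold a longest-common-subsequence computation over the failing tests' sequence covers to obtain a shared candidate faulty path. The trace contents are illustrative, and this is not the thesis's optimized algorithm or its IDE-integrated tool.

    from functools import lru_cache

    def lcs(a, b):
        """Classic dynamic-programming longest common subsequence of two traces."""
        @lru_cache(maxsize=None)
        def rec(i, j):
            if i == len(a) or j == len(b):
                return ()
            if a[i] == b[j]:
                return (a[i],) + rec(i + 1, j + 1)
            left, right = rec(i + 1, j), rec(i, j + 1)
            return left if len(left) >= len(right) else right
        return list(rec(0, 0))

    def common_subsequence(traces):
        """Fold LCS over all failing-test traces to get a shared candidate path."""
        candidate = traces[0]
        for trace in traces[1:]:
            candidate = lcs(candidate, trace)
        return candidate

    failing_traces = [                      # statement IDs covered by each failing test
        ["parse", "validate", "open", "read", "close", "report"],
        ["parse", "open", "read", "retry", "close", "report"],
        ["validate", "parse", "open", "read", "close", "log", "report"],
    ]
    print(common_subsequence(failing_traces))   # -> ['parse', 'open', 'read', 'close', 'report']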

Relevance:

40.00%

Publisher: