931 results for Performance prediction


Relevance:

30.00%

Publisher:

Abstract:

The objective of this work is to predict the temperature distribution of partially submersed umbilical cables under different operating and environmental conditions. The commercial code Fluent® was used to simulate the heat transfer and the air flow around part of a vertical umbilical cable near the air-water interface. A free-convective, three-dimensional turbulent flow in open-ended vertical annuli was solved. The influence of parameters such as the heat dissipation rate, wind velocity, air temperature and solar radiation was analyzed. The influence of the presence of a radiation shield consisting of a partially submersed cylindrical steel tube was also considered. The air flow and the buoyancy-driven convective heat transfer in the annular region between the steel tube and the umbilical cable were calculated using the standard k-ε turbulence model. The radiative heat transfer between the umbilical external surface and the radiation shield was calculated using the Discrete Ordinates model. The results indicate that a hot environment and intense solar radiation may affect the umbilical cable performance in its dry portion.
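
To make the underlying heat-balance idea concrete, here is a minimal, hedged sketch of a steady-state energy balance on the dry portion of the cable inside the cylindrical shield, treated as long concentric cylinders with textbook grey-surface radiation and a fixed convective coefficient. It is a didactic simplification, not the Fluent® CFD model of the abstract, and every numerical value is a hypothetical placeholder.

```python
# Simplified steady-state heat balance for the dry portion of an umbilical
# cable inside a cylindrical radiation shield (long concentric cylinders).
# NOT the abstract's Fluent model: it ignores the k-epsilon turbulence and
# Discrete Ordinates details and uses textbook relations with assumed values.
import numpy as np
from scipy.optimize import brentq

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

# Assumed geometry and properties (placeholders)
d_cable, d_shield = 0.15, 0.40      # outer diameters, m
eps_cable, eps_shield = 0.90, 0.60  # surface emissivities
q_joule = 400.0                     # heat dissipated per metre of cable, W/m
h_gap = 6.0                         # convective coefficient in the annulus, W m^-2 K^-1
T_shield = 320.0                    # assumed shield temperature, K (solar-heated)
T_air = 310.0                       # air temperature in the annulus, K

A1 = np.pi * d_cable                # cable surface area per metre, m^2/m
A2 = np.pi * d_shield               # shield surface area per metre, m^2/m

def net_heat_flow(T_s):
    """Heat leaving the cable surface at temperature T_s minus Joule heating."""
    # Radiation exchange between long concentric grey, diffuse cylinders
    q_rad = SIGMA * A1 * (T_s**4 - T_shield**4) / (
        1.0 / eps_cable + (A1 / A2) * (1.0 / eps_shield - 1.0))
    # Convection to the air in the annular gap
    q_conv = h_gap * A1 * (T_s - T_air)
    return q_rad + q_conv - q_joule

# Surface temperature that balances Joule heating against the losses
T_surface = brentq(net_heat_flow, 280.0, 500.0)
print(f"Estimated cable surface temperature: {T_surface - 273.15:.1f} °C")
```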

Relevance:

30.00%

Publisher:

Abstract:

This thesis analyses the main sources of aircraft noise and the state of the art from the regulatory, technological and procedural points of view. The state of the art of aircraft classification is also reviewed, and a new performance index is proposed as an alternative to the one defined by the certification methodology (AC36-ICAO). In order to reduce the acoustic impact of aircraft during landing, the INM program is used to analyse the benefits of 3° CDA (Continuous Descent Approach) procedures compared with conventional procedures, and then of CDA procedures at steeper angles, in terms of the reduction in length and area of the SEL85, SEL80 and SEL75 noise contours.
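
As a concrete reference for the SEL-based contours (SEL85, SEL80, SEL75), the following hedged sketch computes the Sound Exposure Level from a sampled A-weighted level time history and checks it against a contour threshold; the flyover time history is synthetic, not INM output.

```python
# Sound Exposure Level (SEL) from a sampled A-weighted level time history.
# The SEL85/SEL80/SEL75 contours of the abstract are the loci where SEL
# reaches 85, 80 and 75 dB(A); the flyover below is an invented example.
import numpy as np

def sound_exposure_level(la_db, dt_s):
    """SEL = 10*log10( (1/t0) * integral of 10^(LA(t)/10) dt ), with t0 = 1 s."""
    energy = np.sum(10.0 ** (np.asarray(la_db) / 10.0)) * dt_s
    return 10.0 * np.log10(energy)  # t0 = 1 s, so no extra division is needed

# Synthetic flyover: levels rise to ~88 dB(A) and decay, sampled every 0.5 s
t = np.arange(0.0, 60.0, 0.5)
la = 60.0 + 28.0 * np.exp(-((t - 30.0) / 8.0) ** 2)

sel = sound_exposure_level(la, dt_s=0.5)
print(f"SEL = {sel:.1f} dB(A); inside SEL85 contour: {sel >= 85.0}")
```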

Relevance:

30.00%

Publisher:

Abstract:

Clusters have increasingly become an essential part of policy discourses at all levels (EU, national, regional) dealing with regional development, competitiveness, innovation, entrepreneurship and SMEs. These impressive efforts to promote the cluster concept in the policy-making arena have been accompanied by much less academic and scientific research investigating the actual economic performance of firms in clusters and the design and execution of cluster policies, and by few attempts to go beyond singular case studies towards a more methodologically integrated and comparative approach to the study of clusters and their real-world impact. The theoretical background is far from consolidated, there is a variety of methodologies and approaches for studying and interpreting this phenomenon, and there is little comparability among studies of actual cluster performance. The conceptual framework of clustering suggests that clusters affect performance, but theory makes little prediction as to the ultimate distribution of the value being created by clusters. This thesis takes the case of Eastern European countries for two reasons. One is that clusters, as coopetitive environments, are a new phenomenon there, as the previous centrally planned system did not allow for such forms of firm organization. The other is that, as new EU member states, they have been subject to the increased popularization of the cluster policy approach by the European Commission, especially in the framework of the National Reform Programmes related to the Lisbon objectives. The originality of the work lies in the fact that, starting from an overview of theoretical contributions on clustering, it offers a comparative empirical study of clusters in transition countries; there have been very few attempts in the literature to examine cluster performance in a comparative cross-country perspective. It adds to this an analysis of cluster policies and their implementation, or lack thereof, as a way to analyse how the cluster concept has been introduced to transition economies. Our findings show that the implementation of cluster policies does vary across countries, with some countries having embraced it more than others. The specific modes of implementation, however, are very similar, based mostly on soft measures such as funding for cluster initiatives, usually directed towards the creation of cluster management structures or cluster facilitators. They are essentially founded on a common assumption that the added value of clusters lies in the creation of linkages among firms, human capital, skills and knowledge at the local level, most often perceived as the regional level. Oftentimes geographical proximity is not a necessary element in the application process, and cluster applications are very similar to network memberships. Cluster mapping is rarely a factor in the selection of cluster initiatives for funding, and the related question of critical mass and expected outcomes is not considered. In fact, monitoring and evaluation are not elements of the cluster policy cycle which have received much attention. Bulgaria and the Czech Republic are the countries which have implemented cluster policies most decisively, Hungary and Poland have made significant efforts, while Slovakia and Romania have used cluster initiatives only sporadically and unsystematically.
When examining whether firms located within regional clusters in fact perform better and are more efficient than similar firms outside clusters, we do find positive results across countries and across sectors. The only country with a negative impact of being located in a cluster is the Czech Republic.
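
A minimal sketch of how such a comparison could be set up: a plain OLS of firm performance on a cluster-membership dummy with country and sector controls. It illustrates the idea only, not the thesis' actual specification; the data frame, variable names and coefficients are synthetic placeholders.

```python
# Comparing performance of firms inside vs. outside clusters with OLS and
# country/sector controls. Synthetic data; in the study this would be
# firm-level data for the six Eastern European countries.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
firms = pd.DataFrame({
    "country": rng.choice(["BG", "CZ", "HU", "PL", "RO", "SK"], size=n),
    "sector": rng.choice(["food", "textile", "ICT", "automotive"], size=n),
    "in_cluster": rng.integers(0, 2, size=n),
    "log_employees": rng.normal(3.5, 1.0, size=n),
})
# Synthetic outcome with a small positive "cluster premium" built in
firms["log_productivity"] = (
    0.8 + 0.15 * firms["in_cluster"] + 0.1 * firms["log_employees"]
    + rng.normal(0, 0.5, size=n)
)

model = smf.ols(
    "log_productivity ~ in_cluster + log_employees + C(country) + C(sector)",
    data=firms,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# The in_cluster coefficient is the estimated conditional performance premium
print(model.params["in_cluster"], model.pvalues["in_cluster"])
```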

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study is to provide empirical evidence on how ownership structure and owner's identity affect performance in the banking industry, using a panel of Indonesian banks over the period 2000–2009. First, we analysed the impact of the presence of multiple blockholders on bank ownership structure and performance. Building on multiple agency and principal-principal theories, we investigated whether the presence of, and the dispersion of shares across, blockholders with different identities (i.e. central and regional government, families, foreign banks and financial institutions) affected bank performance in terms of profitability and efficiency. We found that the number of blockholders has a negative effect on banks' performance, while blockholder concentration has a positive effect. Moreover, we observed that the dispersion of ownership across different types of blockholders has a negative effect on banks' performance. We interpret these results as evidence that, when heterogeneous blockholders are present, the disadvantage arising from conflicts of interest between blockholders seems to outweigh the advantage of the additional monitoring provided by additional blockholders. Second, we conducted a joint analysis of the static, selection and dynamic effects of different types of ownership on banks' performance. We found that regional banks and foreign banks have higher profitability and efficiency than domestic private banks. In the short run, foreign acquisitions and domestic M&As reduce the level of overhead costs, while in the long run they increase the Net Interest Margin (NIM). Further, we analysed the NIM determinants to assess the impact of ownership on bank business orientation. Our findings lend support to our prediction that the NIM determinants differ according to the type of bank ownership. We also observed that banks that experienced changes in ownership, such as foreign-acquired banks, exhibit different interest margin determinants from domestic or foreign banks that did not experience ownership rearrangements.
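
As a hedged illustration of the kind of panel regression such an analysis rests on (not the study's exact model), the sketch below regresses bank profitability on the number of blockholders and an ownership-concentration index with bank and year fixed effects; the panel and variable names are synthetic assumptions.

```python
# Bank-year panel regression of ROA on blockholder structure with bank and
# year fixed effects and standard errors clustered by bank. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
banks = pd.DataFrame(
    [(b, y) for b in range(60) for y in range(2000, 2010)],
    columns=["bank_id", "year"],
)
banks["n_blockholders"] = rng.integers(1, 6, size=len(banks))
banks["blockholder_herfindahl"] = rng.uniform(0.2, 1.0, size=len(banks))
banks["log_assets"] = rng.normal(14.0, 1.5, size=len(banks))
banks["roa"] = (
    0.01 - 0.002 * banks["n_blockholders"]
    + 0.01 * banks["blockholder_herfindahl"]
    + rng.normal(0, 0.005, size=len(banks))
)

model = smf.ols(
    "roa ~ n_blockholders + blockholder_herfindahl + log_assets"
    " + C(bank_id) + C(year)",
    data=banks,
).fit(cov_type="cluster", cov_kwds={"groups": banks["bank_id"]})

# In this synthetic setup the two structure coefficients mirror the abstract's
# findings: more blockholders -> lower performance, more concentration -> higher.
print(model.params[["n_blockholders", "blockholder_herfindahl"]])
```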

Relevance:

30.00%

Publisher:

Abstract:

Five different methods were critically examined to characterize the pore structure of silica monoliths. The mesopore characterization was performed using: a) the classical BJH method applied to nitrogen sorption data, which overestimated the mesopore size distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, developed specifically for monolithic silicas, which, contrary to particulate supports, show two inflection points in the ISEC curve, enabling the calculation of pore connectivity, a measure of the mass transfer kinetics in the mesopore network; and c) mercury porosimetry using newly recommended mercury contact angle values.

The results of the mesopore characterization of monolithic silica columns by the three methods indicated that all methods were useful with respect to the volume-based pore size distribution, but only the ISEC method with the PPM and PNM models yielded the number-averaged pore size and distribution together with the pore connectivity values.

The flow-through pores were characterized by two different methods: a) mercury porosimetry, used not only to estimate the average flow-through pore size but also to assess entrapment; it was found that mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; and b) liquid permeability measurements, where the average flow-through pore sizes were obtained via existing equations and improved by additional methods developed from the Hagen-Poiseuille relations. The result was that it is not the flow-through pore size that governs the column back pressure, but rather the surface-area-to-volume ratio of the silica skeleton; the monolith with the lowest ratio will be the most permeable.

The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All of these methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken in choosing the theoretical model.

The measured pore structural parameters were then linked to the mass transfer properties of the monolithic silica columns. As indicated by the ISEC results, no mass transfer restrictions were observed in the mesopores owing to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from flow-through pores to mesopores in small-scaled silica monoliths with a narrow distribution.

The optimum ranges of the pore structural parameters for given target parameters in HPLC separations were then predicted. A low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-averaged mesopore size distribution is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume depends strongly on the nominal pore diameter of the number-averaged pore size distribution; the mesopore size therefore has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.

The study of the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeleton and the external porosity are decisive for the column efficiency, the latter being independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived. The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. The column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with increasing total porosity, although this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~0.5 µm; furthermore, when the intention is to maximize column efficiency, more homogeneous monoliths should be prepared.
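
For orientation, the sketch below shows the two textbook relations that underlie the flow-through pore estimates named in the abstract: the Washburn equation for mercury porosimetry and a bundle-of-capillaries Hagen-Poiseuille relation for the permeability-based estimate. These are simplifications, not the thesis' improved equations, and all numbers are hypothetical.

```python
# Washburn equation (mercury intrusion) and a Hagen-Poiseuille capillary-bundle
# estimate of the flow-through pore diameter from column back pressure.
import math

def washburn_diameter(p_pa, gamma=0.485, theta_deg=140.0):
    """Mercury porosimetry, Washburn equation: d = -4*gamma*cos(theta)/p."""
    return -4.0 * gamma * math.cos(math.radians(theta_deg)) / p_pa

def poiseuille_diameter(dp_pa, u_ms, length_m, eta=1.0e-3, eps_ext=0.40):
    """Flow-through pore diameter assuming a bundle of parallel cylindrical
    channels: u / eps_ext = d^2 * dp / (32 * eta * L), hence
    d = sqrt(32 * eta * L * u / (eps_ext * dp))."""
    return math.sqrt(32.0 * eta * length_m * u_ms / (eps_ext * dp_pa))

# Example: 1 MPa intrusion pressure; 10 cm column at 2 MPa back pressure,
# 1 mm/s superficial velocity, assumed external porosity 0.40
print(f"Washburn pore diameter:   {washburn_diameter(1.0e6) * 1e6:.2f} um")
print(f"Poiseuille pore diameter: {poiseuille_diameter(2.0e6, 1.0e-3, 0.10) * 1e6:.2f} um")
```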

Relevance:

30.00%

Publisher:

Abstract:

In many application domains, data can be naturally represented as graphs. When analytical solutions to a given problem are unfeasible, machine learning techniques can be a viable way to solve it. Classical machine learning techniques are defined for data represented in vectorial form; recently, some of them have been extended to deal directly with structured data. Among those techniques, kernel methods have shown promising results from both the computational-complexity and the predictive-performance points of view. Kernel methods make it possible to avoid an explicit mapping into a vectorial form by relying on kernel functions, which, informally, are functions that compute a similarity measure between two entities. However, the definition of good kernels for graphs is a challenging problem because of the difficulty of finding a good trade-off between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some source. There are three main contributions in this thesis. The first contribution is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analyzed two kernels from this family, achieving state-of-the-art results, from both the computational and the classification points of view, on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible; moreover, we defined a principled way to manage memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting, the secondary structure is thought to carry relevant information, but existing methods that consider the secondary structure have prohibitively high computational complexity. We propose to apply kernel methods to this domain, obtaining state-of-the-art results.
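
To make the kernel-method idea concrete, here is a minimal sketch of a toy graph kernel: a similarity function that counts matching node labels (roughly a zero-iteration Weisfeiler-Lehman kernel) and the Gram matrix a kernel machine would consume. It is only an illustration of the concept, not the DAG-based kernels defined in the thesis, and the two small graphs are invented.

```python
# A kernel is a similarity function between two structured objects, here two
# labelled graphs. This toy kernel counts matching node labels; edges are kept
# only to show the structured input and are unused by this particular kernel.
from collections import Counter

g1 = (["C", "C", "O", "H"], [(0, 1), (1, 2), (1, 3)])  # (node_labels, edges)
g2 = (["C", "O", "O"], [(0, 1), (0, 2)])

def label_histogram_kernel(ga, gb):
    """k(ga, gb) = <phi(ga), phi(gb)>, where phi counts node labels."""
    ha, hb = Counter(ga[0]), Counter(gb[0])
    return sum(ha[lbl] * hb[lbl] for lbl in ha.keys() & hb.keys())

# Gram (kernel) matrix over a small dataset: this is what an SVM would consume.
graphs = [g1, g2]
gram = [[label_histogram_kernel(a, b) for b in graphs] for a in graphs]
print(gram)
```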

Relevance:

30.00%

Publisher:

Abstract:

One of the most important challenges in chemistry and materials science is the connection between the composition of a compound and its chemical and physical properties. In solids, these are greatly influenced by the crystal structure.

The prediction of hitherto unknown crystal structures under external conditions such as pressure and temperature is therefore one of the most important goals of theoretical chemistry. The stable structure of a compound is the global minimum of the potential energy surface, the high-dimensional representation of the enthalpy of the investigated system with respect to its structural parameters. Because the complexity of the problem grows exponentially with the system size, it can only be solved via heuristic strategies.

Improvements to the artificial bee colony method, in which the local exploration of the potential energy surface is carried out by a large number of independent walkers, are developed and implemented. The result is an improved communication scheme between these walkers, which directs the search towards the most promising areas of the potential energy surface.

The minima hopping method uses short molecular dynamics simulations at elevated temperatures to drive the structure search from one local minimum of the potential energy surface to the next. A modification is developed and implemented in which the local information around each minimum is extracted and used to optimize the search direction. Our method uses this local information to increase the probability of finding new, lower local minima, leading to enhanced performance of the global optimization algorithm.

Hydrogen is a highly relevant system because of the possibility of finding a metallic phase and even a superconductor with a high critical temperature. An application of a structure prediction method to SiH12 finds stable crystal structures in this material; additionally, it becomes metallic at relatively low pressures.
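
The following toy sketch illustrates the general idea of heuristic global minimization on a rugged "potential energy surface", in the spirit of, but much simpler than, the bee-colony and minima-hopping schemes described in the abstract. It uses SciPy's basin-hopping routine on an invented two-dimensional model function, not a real enthalpy surface.

```python
# Heuristic global minimization on a rugged model surface: basin hopping
# alternates random perturbations with local relaxations, loosely analogous to
# hopping between local minima of a potential energy surface.
import numpy as np
from scipy.optimize import basinhopping

def energy(x):
    """Rugged 2-D model surface with many local minima (not a real enthalpy)."""
    return np.sum(x**2) + 2.0 * np.sum(np.sin(4.0 * x) ** 2)

x0 = np.array([2.3, -1.7])                       # arbitrary starting "structure"
result = basinhopping(energy, x0, niter=200, stepsize=0.7, seed=42)
print("lowest energy found:", result.fun, "at", result.x)
```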

Relevance:

30.00%

Publisher:

Abstract:

The original and modified Wells scores are widely used prediction rules for pre-test probability assessment of deep vein thrombosis (DVT). The objective of this study was to compare the predictive performance of both Wells scores in unselected patients with clinical suspicion of DVT.
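
A hedged sketch of one way two pre-test probability scores can be compared on the same patients, using the area under the ROC curve as a measure of discrimination. The tiny arrays are made-up placeholders, not data from the study, and a full comparison would also examine calibration and clinically relevant cut-offs.

```python
# Comparing the discrimination of two clinical scores on the same patients.
import numpy as np
from sklearn.metrics import roc_auc_score

dvt_confirmed = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])   # reference standard
wells_original = np.array([3, 1, 0, 2, 1, 3, 0, 1, 2, 0])  # points per patient
wells_modified = np.array([4, 1, 0, 3, 1, 3, 0, 2, 2, 0])

for name, score in [("original", wells_original), ("modified", wells_modified)]:
    print(f"Wells {name}: AUC = {roc_auc_score(dvt_confirmed, score):.2f}")
```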

Relevance:

30.00%

Publisher:

Abstract:

Using path analysis, the present investigation was carried out to clarify possible causal linkages among general scholastic aptitude, academic achievement in mathematics, self-concept of ability, and performance on a mathematics examination. Subjects were 122 eighth-grade students who completed a mathematics examination as well as a measure of self-concept of ability. Aptitude and achievement measures were obtained from school records. The analysis showed sex differences in the prediction of performance on the mathematics examination: for boys, performance could be predicted from scholastic aptitude and previous achievement in mathematics; for girls, performance could be predicted only from previous achievement in mathematics. These results indicate that the direction, strength, and magnitude of the relations among these variables differed for boys and girls, while mean levels of performance did not.
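
As a rough illustration of the group comparison, the sketch below fits separate regressions of examination performance on aptitude, prior achievement and self-concept for boys and girls. A real path analysis would fit a system of structural equations (e.g. with a SEM package such as semopy); plain per-group OLS on synthetic data is used here only to show the idea, and all variable names are assumptions.

```python
# Per-group regressions as a simplified stand-in for the path analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 122
students = pd.DataFrame({
    "sex": rng.choice(["boy", "girl"], size=n),
    "scholastic_aptitude": rng.normal(100, 15, size=n),
    "prior_math_achievement": rng.normal(70, 10, size=n),
    "self_concept_of_ability": rng.normal(3.0, 0.7, size=n),
})
students["exam_score"] = (
    0.3 * students["scholastic_aptitude"]
    + 0.5 * students["prior_math_achievement"]
    + rng.normal(0, 8, size=n)
)

for sex, group in students.groupby("sex"):
    fit = smf.ols(
        "exam_score ~ scholastic_aptitude + prior_math_achievement"
        " + self_concept_of_ability",
        data=group,
    ).fit()
    print(sex, fit.params.round(2).to_dict())
```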

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the influence of age, familiarity, and level of exposure on the metamemorial skill of prediction accuracy on a future test. Young (17 to 23 years old) and middle-aged adults (35 to 50 years old) were asked to predict their memory for text material. Participants made predictions on a familiar text and an unfamiliar text, at three different levels of exposure to each. The middle-aged adults were superior to the younger adults at predicting performance. This finding indicates that metamemory may increase from youth to middle age. Other findings include superior prediction accuracy for unfamiliar compared to familiar material, a result conflicting with previous findings, and an interaction between level of exposure and familiarity that appears to modify the main effects of those variables.

Relevance:

30.00%

Publisher:

Abstract:

Background The loose and stringent Asthma Predictive Indices (API), developed in Tucson, are popular rules to predict asthma in preschool children. To be clinically useful, they require validation in different settings. Objective To assess the predictive performance of the API in an independent population and compare it with simpler rules based only on preschool wheeze. Methods We studied 1954 children of the population-based Leicester Respiratory Cohort, followed up from age 1 to 10 years. The API and frequency of wheeze were assessed at age 3 years, and we determined their association with asthma at ages 7 and 10 years by using logistic regression. We computed test characteristics and measures of predictive performance to validate the API and compare it with simpler rules. Results The ability of the API to predict asthma in Leicester was comparable to that in Tucson: for the loose API, odds ratios for asthma at age 7 years were 5.2 in Leicester (5.5 in Tucson), and positive predictive values were 26% (26%). For the stringent API, these values were 8.2 (9.8) and 40% (48%). For the simpler rule 'early wheeze', the corresponding values were 5.4 and 21%; for 'early frequent wheeze', 6.7 and 36%. The discriminative ability of all prediction rules was moderate (c statistic ≤ 0.7) and overall predictive performance was low (scaled Brier score < 20%). Conclusion The predictive performance of the API in Leicester, although comparable to the original study, was modest and similar to prediction based only on preschool wheeze. This highlights the need for better prediction rules.
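
For readers unfamiliar with the measures named in the results, the hedged sketch below computes an odds ratio, positive predictive value, c statistic, and scaled Brier score for a binary prediction rule. The tiny arrays are placeholders, not Leicester cohort data.

```python
# Performance measures for a binary prediction rule evaluated against a
# binary outcome: odds ratio, PPV, c statistic, scaled Brier score.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

asthma = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])      # outcome at school age
api_pos = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 1])     # API positive at age 3

a = np.sum((api_pos == 1) & (asthma == 1))  # true positives
b = np.sum((api_pos == 1) & (asthma == 0))  # false positives
c = np.sum((api_pos == 0) & (asthma == 1))  # false negatives
d = np.sum((api_pos == 0) & (asthma == 0))  # true negatives

odds_ratio = (a * d) / (b * c)
ppv = a / (a + b)
c_statistic = roc_auc_score(asthma, api_pos)

# Scaled Brier score: 1 - Brier / Brier of a non-informative model that
# predicts the overall prevalence for everyone.
brier = brier_score_loss(asthma, api_pos)
brier_ref = brier_score_loss(asthma, np.full_like(asthma, asthma.mean(), dtype=float))
scaled_brier = 1.0 - brier / brier_ref

print(odds_ratio, ppv, c_statistic, scaled_brier)
```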

Relevance:

30.00%

Publisher:

Abstract:

The Outpatient Bleeding Risk Index (OBRI) and the Kuijer, RIETE and Kearon scores are clinical prognostic scores for bleeding in patients receiving oral anticoagulants for venous thromboembolism (VTE). We prospectively compared the performance of these scores in elderly patients with VTE.

Relevance:

30.00%

Publisher:

Abstract:

Background Guidelines for the prevention of coronary heart disease (CHD) recommend use of Framingham-based risk scores that were developed in white middle-aged populations. It remains unclear whether and how CHD risk prediction might be improved among older adults. We aimed to compare the prognostic performance of the Framingham risk score (FRS), directly and after recalibration, with refit functions derived from the present cohort, as well as to assess the utility of adding other routinely available risk parameters to the FRS. Methods Among 2193 black and white older adults (mean age, 73.5 years) without pre-existing cardiovascular disease from the Health ABC cohort, we examined adjudicated CHD events, defined as incident myocardial infarction, CHD death, and hospitalization for angina or coronary revascularization. Results During 8-year follow-up, 351 participants experienced CHD events. The FRS discriminated poorly between persons who experienced CHD events and those who did not (C-index: 0.577 in women; 0.583 in men) and underestimated absolute risk by 51% in women and 8% in men. Recalibration of the FRS improved absolute risk prediction, particularly for women. For both genders, refitting these functions substantially improved absolute risk prediction, with discrimination similar to the FRS. Results did not differ between whites and blacks. The addition of lifestyle variables, waist circumference, and creatinine did not improve risk prediction beyond the risk factors of the FRS. Conclusions The FRS underestimates CHD risk in older adults, particularly in women, although traditional risk factors remain the best predictors of CHD. Re-estimated risk functions using these factors improve the accurate estimation of absolute risk.
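
The sketch below shows the simplest flavour of recalibration: rescaling predicted risks so that the mean predicted risk matches the observed event rate in the new cohort, which leaves the ranking of individuals (and hence the C-index) unchanged. Published Framingham recalibrations typically adjust baseline survival and mean risk-factor levels rather than applying a single factor; this is only a didactic simplification with placeholder numbers.

```python
# Intercept-style recalibration: match mean predicted risk to observed rate.
import numpy as np

predicted_risk = np.array([0.04, 0.10, 0.22, 0.07, 0.15])  # FRS 8-year risks
observed_events = np.array([0, 0, 1, 0, 1])                # adjudicated CHD events

calibration_factor = observed_events.mean() / predicted_risk.mean()
recalibrated_risk = np.clip(predicted_risk * calibration_factor, 0.0, 1.0)

print("observed event rate:", observed_events.mean())
print("mean predicted risk before/after:",
      predicted_risk.mean(), recalibrated_risk.mean())
```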

Relevance:

30.00%

Publisher:

Abstract:

Prognostic assessment is important for the management of patients with a pulmonary embolism (PE). A number of clinical prediction rules (CPRs) have been proposed for stratifying PE mortality risk. The aim of this systematic review was to assess the performance of prognostic CPRs in identifying a low-risk PE.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Uncertainty exists about the performance of the Framingham risk score when applied in different populations. OBJECTIVE: We assessed the calibration of the Framingham risk score (ie, the relationship between predicted and observed coronary event rates) in US and non-US populations free of cardiovascular disease. METHODS: We reviewed studies that evaluated the performance of the Framingham risk score in predicting first coronary events in a validation cohort, as identified by Medline, EMBASE, BIOSIS, and Cochrane Library searches (through August 2005). Two reviewers independently assessed 1496 studies for eligibility, extracted data, and performed quality assessment using predefined forms. RESULTS: We included 25 validation cohorts of different population groups (n = 128,000) in our main analysis. Calibration varied over a wide range, from under- to overprediction of absolute risk by factors of 0.57 to 2.7. Risk prediction for 7 cohorts (n = 18,658) from the United States, Australia, and New Zealand was well calibrated (corresponding factors 0.87-1.08 for the 5 largest cohorts). The estimated population risks for first coronary events were strongly associated with observed risks (goodness of fit: R2 = 0.84) and in good agreement with them (coefficient for predicted risk: beta = 0.84; 95% CI 0.41-1.26). In 18 European cohorts (n = 109,499), the corresponding figures indicated a close association (R2 = 0.72) but substantial overprediction (beta = 0.58; 95% CI 0.39-0.77). The risk score was well calibrated on the intercept for both population clusters. CONCLUSION: The Framingham score is well calibrated to predict first coronary events in populations from the United States, Australia, and New Zealand. Overestimation of absolute risk in European cohorts requires recalibration procedures.
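
To illustrate the cohort-level calibration analysis reported above (a slope near 1 with a high R2 indicates good agreement between predicted and observed risks), here is a hedged sketch that regresses observed event rates on Framingham-predicted rates across validation cohorts; the cohort-level rates are invented placeholders, not the review's data.

```python
# Regressing observed first-coronary-event rates on predicted rates across
# validation cohorts, reporting the slope (beta), its 95% CI, and R^2.
import numpy as np
import statsmodels.api as sm

predicted_rate = np.array([0.05, 0.08, 0.11, 0.06, 0.09, 0.12, 0.07])
observed_rate = np.array([0.04, 0.07, 0.12, 0.05, 0.08, 0.10, 0.06])

X = sm.add_constant(predicted_rate)           # intercept + slope (beta)
fit = sm.OLS(observed_rate, X).fit()

beta, r2 = fit.params[1], fit.rsquared
lo, hi = fit.conf_int()[1]                    # 95% CI for the slope
print(f"beta = {beta:.2f} (95% CI {lo:.2f}-{hi:.2f}), R^2 = {r2:.2f}")
```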