899 results for Models performance
Abstract:
Many theories for the Madden-Julian oscillation (MJO) focus on diabatic processes, particularly the evolution of vertical heating and moistening. Poor MJO performance in weather and climate models is often blamed on biases in these processes and their interactions with the large-scale circulation. We introduce one of three components of a model-evaluation project, which aims to connect MJO fidelity in models to their representations of several physical processes, focusing on diabatic heating and moistening. This component consists of 20-day hindcasts, initialised daily during two MJO events in winter 2009-10. The 13 models exhibit a range of skill: several have accurate forecasts to 20 days' lead, while others perform similarly to statistical models (8-11 days). Models that maintain the observed MJO amplitude accurately predict propagation, but not vice versa. We find no link between hindcast fidelity and the precipitation-moisture relationship, in contrast to other recent studies. There is also no relationship between models' performance and the evolution of their diabatic-heating profiles with rain rate. A more robust association emerges between models' fidelity and net moistening: the highest-skill models show a clear transition from low-level moistening for light rainfall to mid-level moistening at moderate rainfall and upper-level moistening for heavy rainfall. The mid-level moistening, arising from both dynamics and physics, may be most important. Accurately representing many processes may be necessary, but not sufficient for capturing the MJO, which suggests that models fail to predict the MJO for a broad range of reasons and limits the possibility of finding a panacea.
Abstract:
Continuous casting is a casting process that produces steel slabs in a continuous manner, with steel being poured at the top of the caster and a steel strand emerging from the mould below. Molten steel is transferred from the AOD converter to the caster using a ladle. Although the ladle is designed to be strong and well insulated, complete insulation is never achieved: some of the heat is lost to the refractories by convection and conduction, and further losses occur by radiation. Since it is important to know the temperature of the melt during the process, an online model was previously developed to simulate the steel and ladle-wall temperatures during the ladle cycle. The model is ODE-based and was built using a grey-box modelling technique. Its performance was acceptable, but its results needed to be presented in a user-friendly way. The aim of this thesis work was to design a GUI that presents the steel and ladle-wall temperatures calculated by the model and allows the user to make adjustments to the model. The thesis also discusses a sensitivity analysis of the parameters involved and their effects on the different temperature estimates.
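To make the grey-box structure concrete, the sketch below shows a lumped-parameter ODE model of the kind described: melt temperature falls through convective/conductive losses to the refractory wall and radiative losses from the surface. All variable names and parameter values here are illustrative assumptions, not the thesis's actual model.

```python
# Minimal lumped-parameter sketch of a grey-box ladle model (assumed, illustrative).
import numpy as np
from scipy.integrate import solve_ivp

SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def ladle_odes(t, y, p):
    T_steel, T_wall = y  # melt and inner-wall temperatures [K]
    q_wall = p["h_wall"] * p["A_wall"] * (T_steel - T_wall)               # convection/conduction into refractory
    q_rad = p["eps"] * SIGMA * p["A_top"] * (T_steel**4 - p["T_amb"]**4)  # radiation from the melt surface
    dT_steel = -(q_wall + q_rad) / (p["m_steel"] * p["cp_steel"])
    dT_wall = (q_wall - p["k_loss"] * (T_wall - p["T_amb"])) / (p["m_wall"] * p["cp_wall"])
    return [dT_steel, dT_wall]

p = dict(h_wall=200.0, A_wall=40.0, eps=0.4, A_top=7.0, T_amb=300.0,
         m_steel=1.0e5, cp_steel=750.0, m_wall=3.0e4, cp_wall=1000.0, k_loss=5.0e3)
sol = solve_ivp(ladle_odes, (0.0, 3600.0), [1873.0, 1400.0], args=(p,), max_step=10.0)
print(f"melt temperature after 1 h: {sol.y[0, -1]:.1f} K")
```

In a grey-box setting, coefficients such as h_wall and eps would be fitted to plant measurements rather than taken from first principles; a GUI like the one described would expose exactly such parameters for adjustment.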
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Ceramic parts are increasingly replacing metal parts due to their excellent physical, chemical and mechanical properties; however, these same properties make ceramics difficult to manufacture by traditional machining methods. The developments carried out in this work are used to estimate tool wear during the grinding of advanced ceramics. The learning process was fed with data collected from a surface grinding machine with a tangential diamond wheel and alumina ceramic test specimens, in three cutting configurations, with depths of cut of 120 μm, 70 μm and 20 μm. The grinding-wheel speed was 35 m/s and the table speed 2.3 m/s. Four neural models were evaluated: Multilayer Perceptron, Radial Basis Function, Generalized Regression Neural Networks and the Adaptive Neuro-Fuzzy Inference System. The models' performance-evaluation routines were executed automatically, testing all possible combinations of inputs, number of neurons, number of layers, and spread. The computational results reveal that the neural models were highly successful in estimating tool wear, with errors below 4%.
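As a rough illustration of such an automated evaluation routine, the sketch below grid-searches input subsets and layer layouts for one of the four model families (a multilayer perceptron). The feature stand-ins, grid values and synthetic data are assumptions, not the paper's setup.

```python
# Hedged sketch: exhaustive search over input combinations and MLP layouts.
from itertools import combinations
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # stand-ins for grinding signals (e.g. acoustic emission, power)
y = 0.8 * X[:, 0] + 0.1 * X[:, 2] ** 2 + rng.normal(scale=0.05, size=200)  # stand-in wear target

best_config, best_score = None, -np.inf
for k in range(1, X.shape[1] + 1):                   # every input-subset size
    for cols in combinations(range(X.shape[1]), k):  # every combination of inputs
        for hidden in ((5,), (10,), (10, 5)):        # candidate layer layouts
            model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=0)
            score = cross_val_score(model, X[:, cols], y, cv=5, scoring="r2").mean()
            if score > best_score:
                best_config, best_score = (cols, hidden), score
print("best inputs/layers:", best_config, "cv R^2:", round(best_score, 3))
```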
Abstract:
Synopsis: Sport organisations are facing multiple challenges originating from an increasingly complex and dynamic environment in general, and from internal changes in particular. Our study seeks to reveal and analyse the causes of professionalization processes in international sport federations, the forms resulting from them, as well as the related consequences. Abstract: AIM OF ABSTRACT/PAPER - RESEARCH QUESTION Sport organisations are facing multiple challenges originating from an increasingly complex and dynamic environment in general, and from internal changes in particular. In this context, professionalization seems to have been adopted by sport organisations as an appropriate strategy to respond to pressures such as becoming more “business-like”. The ongoing study seeks to reveal and analyse the internal and external causes of professionalization processes in international sport federations, the forms resulting from them (e.g. organisational, managerial, economic), as well as the related consequences for objectives, values, governance methods, performance management and rationalisation. THEORETICAL BACKGROUND/LITERATURE REVIEW Studies on sport as a specific non-profit sector mainly focus on the prospect of the “professionalization of individuals” (Thibault, Slack & Hinings, 1991), often within sport clubs (Thiel, Meier & Cachay, 2006) and national sport federations (Seippel, 2002), or on organisational change (Girginov & Sandanski, 2008; Slack & Hinings, 1987, 1992; Slack, 1985, 2001), thus leaving broader analysis of governance, management and professionalization in sport organisations an unaccomplished task. In order to further current research on the above-mentioned topics, our intention is to analyse the causes, forms and consequences of professionalisation processes in international sport federations. The social theory of action (Coleman, 1986; Esser, 1993) was chosen as the appropriate theoretical framework, from which a multi-level framework for the analysis of sport organisations is derived (Nagel, 2007). In light of this multi-level framework, sport federations are conceptualised as corporative actors whose objectives are defined and implemented with regard to the interests of member organisations (Heinemann, 2004) and/or other pressure groups. In order to understand the social acting and social structures (Giddens, 1984) of sport federations, two levels are the focus of our analysis: the macro level, examining the environment at large (political, social and economic systems, etc.), and the meso level (Esser, 1999), examining the organisational structures, actions and decisions of the federation’s headquarters as well as of member organisations. METHODOLOGY, RESEARCH DESIGN AND DATA ANALYSIS The multi-level framework mentioned seeks to gather and analyse information on the causes, forms and consequences of professionalization processes in sport federations. It is applied in a twofold approach: first, an exploratory study based on nine semi-structured interviews with experts from umbrella sport organisations (IOC, WADA, ASOIF, AIOWF, etc.), together with the analysis of related documents, relevant reports (the IOC report 2000 on governance reform, Agenda 2020, etc.) and important moments of change in the Olympic Movement (Olympic revenue share, IOC evaluation criteria, etc.); and second, several case studies. Whereas the exploratory study focuses more on the causes of professionalization at the external, internal and headquarters levels as depicted in the literature, the case studies focus on forms and consequences.
Applying our conceptual framework, the analysis of forms is built around three dimensions: 1) individuals (persons and positions), 2) processes and structures (formalisation, specialisation), and 3) activities (strategic planning). With regard to consequences, we centre our attention on expectations of and relationships with stakeholders (e.g. cooperation with business partners); structure, culture and processes (e.g. governance models, performance); and expectations of and relationships with member organisations (e.g. centralisation vs. regionalisation). For the case studies, a mixed-method approach is applied to collect relevant data: questionnaires for the more quantitative data, interviews for the more qualitative data, as well as document and observational analysis. RESULTS, DISCUSSION AND IMPLICATIONS/CONCLUSIONS With regard to the causes of professionalization processes, we analyse the content of three different levels: 1. the external level, where the main pressure derives from financial resources (stakeholders, benefactors) and important turning points (scandals, media pressure, IOC requirements for Olympic sports); 2. the internal level, where pressure from member organisations turned out to be less decisive than assumed (little involvement of member organisations in decision-making); 3. the headquarters level, where specific economic models (World Cups, other international circuits, World Championships) and organisational structures (decision-making procedures, values, leadership) trigger or hinder a federation’s professionalization process. Based on our first analysis, an outline for an economic model is suggested, distinguishing four categories of IFs: “money-generating IFs”, rather based on commercialisation and strategic alliances; “classical Olympic IFs”, rather reactive and dependent on Olympic revenue; “classical non-Olympic IFs”, rather independent of the Olympic Movement; and “money-receiving IFs”, dependent on benefactors and having strong traditions and values. The results regarding forms and consequences will be outlined in the presentation. The first results from the two pilot studies will allow us to refine our conceptual framework for the subsequent case studies, thus extending our data collection and developing fundamental conclusions. References: Bayle, E., & Robinson, L. (2007). A framework for understanding the performance of national governing bodies of sport. European Sport Management Quarterly, 7, 249–268. Chantelat, P. (2001). La professionnalisation des organisations sportives: Nouveaux débats, nouveaux enjeux [Professionalisation of sport organisations]. Paris: L’Harmattan. Dowling, M., Edwards, J., & Washington, M. (2014). Understanding the concept of professionalization in sport management research. Sport Management Review. Advance online publication. doi:10.1016/j.smr.2014.02.003. Ferkins, L., & Shilbury, D. (2012). Good boards are strategic: What does that mean for sport governance? Journal of Sport Management, 26, 67–80. Thibault, L., Slack, T., & Hinings, B. (1991). Professionalism, structures and systems: The impact of professional staff on voluntary sport organizations. International Review for the Sociology of Sport, 26, 83–97.
Abstract:
This article investigates the performance of a model called Full-Scale Optimisation, which was presented recently and is used for financial investment advice. The investor’s preferences regarding expected risk and return are entered into the model, and a recommended portfolio is produced. This model is theoretically more accurate than the mainstream investment-advice model, Mean-Variance Optimisation, since it makes fewer assumptions. Compared with previous studies, our investigation of the model’s performance covers a broader range of investor preferences and more general investment types. Our investigation shows that Full-Scale Optimisation is more widely applicable than previously known.
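For orientation, Full-Scale Optimisation is usually described as evaluating an investor's utility function on every historical return scenario for each candidate portfolio and keeping the best one; the toy sketch below follows that description for two assets. The utility choice (power utility with gamma = 5), the weight grid, and the simulated returns are assumptions, not taken from the article.

```python
# Toy Full-Scale Optimisation sketch: maximise average utility over scenarios.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.multivariate_normal([0.06, 0.02], [[0.04, 0.002], [0.002, 0.001]], size=500)

def power_utility(wealth, gamma=5.0):
    # CRRA utility; assumed here purely for illustration
    return wealth ** (1.0 - gamma) / (1.0 - gamma)

weights = np.linspace(0.0, 1.0, 101)  # weight on asset 1; the rest in asset 2
avg_utility = [power_utility(1.0 + returns @ np.array([w, 1.0 - w])).mean() for w in weights]
w_star = weights[int(np.argmax(avg_utility))]
print(f"full-scale optimal weight on asset 1: {w_star:.2f}")
```

Because the search works directly on the empirical return distribution with an arbitrary utility function, no normality or quadratic-utility assumption is needed, which is exactly the advantage over Mean-Variance Optimisation claimed above.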
Abstract:
Practitioners assess the performance of entities in increasingly large and complicated datasets. If non-parametric models, such as Data Envelopment Analysis, were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the 'COOPER-framework', a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each phase describes the steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis, while the more experienced analyst benefits from a checklist that ensures important issues are not forgotten. In addition, the use of a standardized framework makes non-parametric assessments more reliable, more repeatable, more manageable, faster and less costly.
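For readers unfamiliar with the 'Operational models' phase, the sketch below solves the classic input-oriented CCR efficiency model of Data Envelopment Analysis as a linear program. The toy data and the use of scipy are illustrative assumptions, not part of the COOPER-framework itself.

```python
# Input-oriented CCR (DEA) efficiency scores via linear programming (toy data).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0, 5.0]])  # inputs: one row per input, one column per DMU
Y = np.array([[1.0, 2.0, 3.0, 2.0]])  # outputs: one row per output, one column per DMU
n = X.shape[1]

def ccr_efficiency(j0):
    # decision variables: theta, lambda_1..lambda_n
    c = np.r_[1.0, np.zeros(n)]                         # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])                  # sum(lambda*x) <= theta * x_j0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # sum(lambda*y) >= y_j0
    b = np.r_[np.zeros(X.shape[0]), -Y[:, j0]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun  # theta = 1 means the DMU lies on the efficient frontier

print([round(ccr_efficiency(j), 3) for j in range(n)])  # -> [0.5, 0.5, 1.0, 0.4]
```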
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-07
Abstract:
Evolution of otter (Lutra lutra L.) distribution in the Iberian Peninsula: models at different scales and their projection through space and time. Abstract: The Eurasian otter has been surveyed four times in the Iberian Peninsula (1990-2008). In 2003, a distribution model for the otter, based on presence/absence data from the survey published in 1998, was published. Models of this type have advantages that can make them a key element in conservation strategies for the otter, and for other species, but only if their reliability and their capacity to anticipate trends in species distributions are validated. This thesis therefore compared the model's predictions with the 2008 distribution data in order to identify potential areas of mismatch. The results show that, although the distribution model was based on 1998 data and did not explicitly consider biological processes, it captured the essence of the species-environment relationship, which translated into good predictive performance for the otter's distribution in Spain a decade after the model's construction.
Abstract:
Deep learning methods are extremely promising machine learning tools for analyzing neuroimaging data. However, their potential use in clinical settings is limited by the challenges of applying these methods to neuroimaging data. In this study, a type of data leakage caused by slice-level data splitting during training and validation of a 2D CNN is first surveyed, and a quantitative assessment of the resulting overestimation of the model's performance is presented. Second, an interpretable, leakage-free deep learning software package, written in Python with a wide range of options, was developed to conduct both classification and regression analyses. The software was applied to the study of mild cognitive impairment (MCI) in patients with small vessel disease (SVD) using multi-parametric MRI data, where the cognitive performance of 58 patients, measured by five neuropsychological tests, was predicted using a multi-input CNN model taking brain images and demographic data. Each of the cognitive test scores was predicted using different MRI-derived features. As MCI due to SVD has been hypothesized to result from white matter damage, the DTI-derived features MD and FA produced the best prediction of the TMT-A score, which is consistent with the existing literature. In a second study, an interpretable deep learning system was developed, aimed at 1) classifying Alzheimer's disease patients and healthy subjects, 2) examining the neural correlates of the disease that cause cognitive decline in AD patients using CNN visualization tools, and 3) highlighting the potential of interpretability techniques to detect a biased deep learning model. Structural magnetic resonance imaging (MRI) data from 200 subjects were used by the proposed CNN model, which was trained using a transfer-learning-based approach and produced a balanced accuracy of 71.6%. Brain regions in the frontal and parietal lobes showing cerebral cortex atrophy were highlighted by the visualization tools.
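The slice-level leakage described above is easy to reproduce and to avoid; the sketch below contrasts a leaky per-slice split with a subject-level split. The array names and shapes are hypothetical, and the study's actual pipeline is not shown here.

```python
# Subject-level vs slice-level data splitting (hypothetical names and shapes).
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

n_subjects, slices_per_subject = 20, 30
subject_ids = np.repeat(np.arange(n_subjects), slices_per_subject)  # one id per 2D slice
slices = np.random.default_rng(0).normal(size=(subject_ids.size, 64, 64))

# Leaky split: slices are shuffled individually, so the same subject ends up
# on both sides of the split and validation performance is overestimated.
leaky_train, leaky_val = train_test_split(np.arange(subject_ids.size),
                                          test_size=0.2, random_state=0)

# Leakage-free split: all slices of a given subject stay on one side.
train_idx, val_idx = next(GroupShuffleSplit(test_size=0.2, random_state=0)
                          .split(slices, groups=subject_ids))
assert not set(subject_ids[train_idx]) & set(subject_ids[val_idx])
```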
Abstract:
The movement of chemicals through the soil to the groundwater, or their discharge to surface waters, represents a degradation of these resources. In many cases, serious human and stock health implications are associated with this form of pollution. The chemicals of interest include nutrients, pesticides, salts, and industrial wastes. Recent studies have shown that current models and methods do not adequately describe the leaching of nutrients through soil, often underestimating the risk of groundwater contamination by surface-applied chemicals and overestimating the concentration of resident solutes. This inaccuracy results primarily from ignoring soil structure and nonequilibrium between soil constituents, water, and solutes. A multiple sample percolation system (MSPS), consisting of 25 individual collection wells, was constructed to study the effects of localized soil heterogeneities on the transport of nutrients (NO3-, Cl-, PO43-) in the vadose zone of an agricultural soil dominated by clay. Very significant variations in drainage patterns across a small spatial scale were observed (one-way ANOVA, p < 0.001), indicating considerable heterogeneity in water flow patterns and nutrient leaching. Using data collected from the multiple sample percolation experiments, this paper compares the performance of two mathematical models for predicting solute transport: the advection-dispersion model with a reaction term (ADR), and a two-region preferential flow model (TRM) suitable for modelling nonequilibrium transport. These results have implications for modelling solute transport and predicting nutrient loading on a larger scale.
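For reference, the two models compared are commonly written in the following 1-D forms; the notation below is assumed for illustration (C solute concentration, v pore-water velocity, D dispersion coefficient), not copied from the paper.

```latex
% ADR: advection-dispersion with a first-order reaction term \mu
\frac{\partial C}{\partial t}
  = D\,\frac{\partial^{2} C}{\partial x^{2}}
  - v\,\frac{\partial C}{\partial x}
  - \mu C

% TRM: two-region (mobile/immobile) nonequilibrium transport, with water
% contents \theta_m, \theta_{im}, Darcy flux q and mass-transfer rate \alpha
\theta_m \frac{\partial C_m}{\partial t}
  + \theta_{im} \frac{\partial C_{im}}{\partial t}
  = \theta_m D_m \frac{\partial^{2} C_m}{\partial x^{2}}
  - q\,\frac{\partial C_m}{\partial x},
\qquad
\theta_{im} \frac{\partial C_{im}}{\partial t} = \alpha\,(C_m - C_{im})
```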
Abstract:
We discuss the expectation propagation (EP) algorithm for approximate Bayesian inference using a factorizing posterior approximation. For neural network models, we use a central limit theorem argument to make EP tractable when the number of parameters is large. For two types of models, we show that EP can achieve optimal generalization performance when data are drawn from a simple distribution.
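As background, EP with a factorizing approximation iterates the standard cavity/tilted-distribution updates; the notation below is generic (not the paper's), with likelihood factors f_i and site approximations \tilde f_i.

```latex
% Posterior p(w) \propto \prod_i f_i(w), approximated by q(w) = \prod_i \tilde f_i(w).
q^{\setminus i}(w) \propto \frac{q(w)}{\tilde f_i(w)}   % cavity distribution
\qquad
\hat p_i(w) \propto q^{\setminus i}(w)\, f_i(w)         % tilted distribution
\qquad
q^{\mathrm{new}} = \arg\min_{q' \in \mathcal{Q}} \mathrm{KL}\big(\hat p_i \,\|\, q'\big)
```

For a Gaussian family Q, the KL minimization reduces to matching the mean and covariance of the tilted distribution; the central-limit-theorem argument mentioned above is what keeps those tilted moments tractable when the network has many parameters.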
Abstract:
This paper examines the performance of Portuguese equity funds investing in the domestic and European Union markets, using several unconditional and conditional multi-factor models. In terms of overall performance, we find that National funds are neutral performers, while European Union funds under-perform the market significantly. These results do not seem to be a consequence of management fees. Overall, our findings support the robustness of conditional multi-factor models. In fact, Portuguese equity funds seem to be relatively more exposed to small caps and more value-oriented. They also present strong evidence of time-varying betas and, in the case of the European Union funds, of time-varying alphas too. Finally, in terms of market timing, our tests suggest that the mutual fund managers in our sample do not exhibit any market-timing abilities. Nevertheless, we find some evidence of time-varying conditional market-timing abilities, but only at the individual fund level.
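For context, conditional multi-factor evaluations of this kind typically let alphas and betas vary linearly with lagged public-information variables; a common specification (notation assumed here, not taken from the paper) is:

```latex
% r_{p,t}: fund excess return; F_{k,t}: factor k excess return;
% z_{t-1}: demeaned lagged information variables (e.g. dividend yield, short rate)
r_{p,t} = \alpha_{0} + A^{\top} z_{t-1}
        + \sum_{k} \left( \beta_{k,0} + B_{k}^{\top} z_{t-1} \right) F_{k,t}
        + \varepsilon_{p,t}
```

Neutral performance then corresponds to \alpha_0 = 0 with A = 0, while significant B_k terms are the time-varying betas reported above.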
Abstract:
The problem of providing hybrid wired/wireless communications for factory automation systems is still an open issue, notwithstanding the fact that some solutions already exist. This paper describes the role of simulation tools in the validation and performance analysis of two wireless extensions for the PROFIBUS protocol. In one of them, the Intermediate Systems, which connect wired and wireless network segments, operate as repeaters; in the other, the Intermediate Systems operate as bridges. We also describe how the analytical analysis proposed for these kinds of networks can be used to set some network parameters and to guarantee the real-time behaviour of the system. Additionally, we compare the simulation results for the bridge-based solution with the analytical results.
Abstract:
In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea-outfall monitoring campaign, with the aim of distinguishing the effluent plume from the receiving waters and characterizing its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, with the distance to the diffuser, the west-east positioning, and the south-north positioning used as covariates. Sample variograms were fitted by Matérn models, using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the more appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results provide some guidelines for sewage monitoring when a geostatistical analysis of the data is intended: it is important to handle anomalous values properly and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
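As a loose analog of the approach described, the sketch below implements regression kriging: a linear trend on the covariates plus a Matérn-correlated residual fitted by Gaussian-process regression (a GP with a Matérn kernel plays the role of the variogram model). The coordinates, covariates and salinity values are simulated assumptions, not the campaign's data.

```python
# Regression-kriging sketch: linear drift on covariates + Matern GP residual.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 1000.0, size=(150, 2))  # east, north positions [m]
dist_to_diffuser = np.hypot(coords[:, 0] - 500.0, coords[:, 1] - 500.0)
covariates = np.column_stack([dist_to_diffuser, coords])  # drift terms
salinity = 35.0 + 0.002 * dist_to_diffuser + rng.normal(scale=0.1, size=150)

trend = LinearRegression().fit(covariates, salinity)  # drift on the covariates
residuals = salinity - trend.predict(covariates)

gp = GaussianProcessRegressor(kernel=Matern(length_scale=200.0, nu=1.5)
                              + WhiteKernel(noise_level=0.01)).fit(coords, residuals)
pred = trend.predict(covariates) + gp.predict(coords)  # kriged salinity values
print("RMSE at sampled locations:", float(np.sqrt(np.mean((pred - salinity) ** 2))))
```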