966 results for Allometric Scaling
Abstract:
Corporate decisions to scale agile software development methodologies in offshoring environments have been hindered by the perceived challenges of scaling agile, since agile methodologies are often regarded as suitable only for small projects and co-located teams. Although models such as the Agile Scaling Model (ASM) have been developed for scaling agile along different factors, companies' inability to identify and address the challenges leads to project failure rather than to the benefits of using agile methodologies. This failure can be avoided, when scaling agile in an IT offshoring environment, by determining the key challenges involved and then preparing strategies to address them. These key challenges can be determined by studying the issues related to offshoring and agile individually, and also by considering the positive impact of agile methodologies in offshoring environments. Possible strategies to tackle these key challenges are then developed according to the nature of each challenge, utilizing the benefits of different agile methodologies for each situation. Thus, in this thesis we propose a strategy based on a hybrid agile method, an increasingly common choice due to the adaptive nature of agile. The identification of the key challenges and the possible strategies for tackling them are supported by a survey conducted in the researched organization.
Abstract:
Developing software is a difficult and error-prone activity, and the complexity of modern computer applications is significant. Hence, an organised approach to software construction is crucial. Stepwise Feature Introduction – created by R.-J. Back – is a development paradigm in which software is constructed by adding functionality in small increments. The resulting code has an organised, layered structure and can be easily reused. Moreover, interaction with the users of the software and correctness concerns are essential elements of the development process, contributing to the high quality and functionality of the final product. The paradigm of Stepwise Feature Introduction has been successfully applied in an academic environment to a number of small-scale developments. This thesis examines the paradigm and its suitability for the construction of large and complex software systems by focusing on the development of two software systems of significant complexity. Throughout the thesis we propose a number of improvements and modifications that should be applied to the paradigm when developing or reengineering large and complex software systems. The discussion covers various aspects of software development that relate to Stepwise Feature Introduction; more specifically, we evaluate the paradigm against common practices of object-oriented programming and design and against agile development methodologies. We also outline a strategy for testing systems built with Stepwise Feature Introduction.
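To make the layered structure concrete, here is a minimal sketch (our own illustration, not taken from the thesis) of how Stepwise Feature Introduction can be mirrored in ordinary object-oriented code: each layer extends the previous one with a single new feature while preserving the behaviour already introduced. The class names and the chosen features are hypothetical.

```python
# Illustrative sketch of Stepwise Feature Introduction:
# each layer adds one feature on top of the previous layer.
# Class names and features are hypothetical examples.

class CounterL0:
    """Layer 0: a bare counter."""
    def __init__(self) -> None:
        self.value = 0

    def increment(self) -> None:
        self.value += 1


class CounterL1(CounterL0):
    """Layer 1: adds a reset feature, reusing layer 0 unchanged."""
    def reset(self) -> None:
        self.value = 0


class CounterL2(CounterL1):
    """Layer 2: adds bounds checking on top of layers 0 and 1."""
    def __init__(self, limit: int) -> None:
        super().__init__()
        self.limit = limit

    def increment(self) -> None:
        if self.value >= self.limit:
            raise OverflowError("counter limit reached")
        super().increment()


if __name__ == "__main__":
    c = CounterL2(limit=2)
    c.increment()
    c.increment()
    c.reset()            # feature from layer 1 still works
    assert c.value == 0
```

Each layer can be tested in isolation before the next is introduced, which matches the incremental, layered testing strategy the abstract alludes to.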
Abstract:
The purpose of this thesis is to study the scalability of small-break LOCA (SBLOCA) experiments. The study is performed on the experimental data as well as on the results of thermal-hydraulic computations performed with the TRACE code. The SBLOCA experiments were performed on the PACTEL facility situated at LUT. Temporal scaling of the results was done by relating the total coolant mass in the system to the initial break mass flow and using the quotient to scale the experiment time. The results showed many similarities in the behaviour of pressure and break mass flow between the experiments.
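The time scaling described above amounts to normalising time by a characteristic drain time. A minimal sketch in the notation below (the symbols are ours, not the thesis's):

```latex
% Characteristic drain time: total coolant mass over initial break mass flow
\tau = \frac{M_{\mathrm{coolant}}}{\dot{m}_{\mathrm{break},0}}
% Dimensionless (scaled) experiment time
t^{*} = \frac{t}{\tau} = \frac{t\,\dot{m}_{\mathrm{break},0}}{M_{\mathrm{coolant}}}
```

Plotting pressure and break mass flow against t* rather than t then allows experiments with different break sizes to be compared on a common axis.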
Abstract:
Deep learning algorithms form a new and powerful set of machine learning methods. The idea is to combine layers of latent factors into hierarchies. This often entails a higher computational cost and also increases the number of model parameters. Hence, applying these methods to larger-scale problems requires reducing their cost as well as improving their regularization and their optimization. This thesis addresses the question from these three perspectives. We first study the problem of reducing the cost of certain deep algorithms. We propose two methods for training restricted Boltzmann machines and denoising auto-encoders on sparse, high-dimensional distributions. This is important for applying these algorithms to natural language processing. Both methods (Dauphin et al., 2011; Dauphin and Bengio, 2013) use importance sampling to sample the objective of these models. We observe that this significantly reduces training time; the speed-up reaches two orders of magnitude on several benchmarks. Second, we introduce a powerful regularizer for deep methods. Experimental results show that a good regularizer is crucial for obtaining good performance with large networks (Hinton et al., 2012). In Rifai et al. (2011), we propose a new regularizer that combines unsupervised learning and tangent propagation (Simard et al., 1992). This method exploits geometric principles and achieved state-of-the-art results at the time of publication. Finally, we consider the problem of optimizing high-dimensional non-convex surfaces such as those of neural networks. Traditionally, the abundance of local minima was considered the main difficulty in these problems. In Dauphin et al. (2014a) we argue, from results in statistical physics, random matrix theory, neural network theory, and from experimental results, that a deeper difficulty comes from the proliferation of saddle points. In that paper we also propose a new method for non-convex optimization.
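As a rough illustration of the importance-sampling idea (a sketch under our own assumptions, not the exact estimator of the cited papers), the per-dimension loss of an auto-encoder over a high-dimensional sparse target can be estimated from the few non-zero dimensions plus a small reweighted sample of the zero dimensions:

```python
import numpy as np

def sampled_reconstruction_loss(x, x_hat, n_zero_samples=64, rng=None):
    """Unbiased estimate of the summed per-dimension loss for a sparse
    target x: all non-zero dimensions are used exactly, and the zero
    dimensions are estimated from a uniform importance-weighted sample.
    Illustrative sketch only."""
    rng = rng or np.random.default_rng()
    per_dim = (x - x_hat) ** 2            # stand-in per-dimension loss

    nonzero = np.flatnonzero(x)
    zero = np.flatnonzero(x == 0)

    loss = per_dim[nonzero].sum()         # exact part: the few non-zeros
    if zero.size:
        k = min(n_zero_samples, zero.size)
        picked = rng.choice(zero, size=k, replace=False)
        loss += per_dim[picked].sum() * (zero.size / k)  # importance weight
    return loss

# Example: a 100k-dimensional target with only 10 active dimensions.
rng = np.random.default_rng(0)
x = np.zeros(100_000)
x[rng.choice(100_000, size=10, replace=False)] = 1.0
x_hat = rng.random(100_000) * 0.01
print(sampled_reconstruction_loss(x, x_hat, rng=rng))
```

The loss and its gradient only have to be computed for the sampled dimensions rather than all of them, which is the source of the kind of speed-up the abstract reports.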
Abstract:
We establish numerically the validity of the Huberman-Rudnick scaling relation for Lyapunov exponents during the period-doubling route to chaos in one-dimensional maps. We extend our studies to the context of a combination map, where the scaling index is found to be different.
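For reference, the Huberman-Rudnick relation states that just above the accumulation point of the period-doubling cascade the Lyapunov exponent grows as a power law, with exponent ln 2 / ln δ ≈ 0.4498, δ being the Feigenbaum constant. A minimal numerical sketch for the logistic map (our own illustration; the parameter values are the standard ones, not taken from the paper):

```python
import numpy as np

def lyapunov_logistic(r, n_transient=1000, n_iter=10000, x0=0.5):
    """Lyapunov exponent of x -> r*x*(1-x) estimated as the orbit
    average of ln|f'(x)|, with f'(x) = r*(1 - 2x)."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += np.log(abs(r * (1 - 2 * x)))
    return total / n_iter

R_INF = 3.5699456                         # period-doubling accumulation point
T_HR = np.log(2) / np.log(4.6692016)      # Huberman-Rudnick exponent ~ 0.4498

for r in (3.58, 3.59, 3.60):
    eps = r - R_INF
    print(f"r={r}: lambda={lyapunov_logistic(r):.4f}, eps^t={eps**T_HR:.4f}")
```

Comparing the computed exponents against eps**T_HR over a range of r above R_INF is the kind of numerical check the abstract describes (the scaling holds as an envelope, with periodic windows superimposed).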
Abstract:
A numerical study is presented of the three-dimensional Gaussian random-field Ising model at T=0 driven by an external field. Standard synchronous relaxation dynamics is employed to obtain the magnetization-versus-field hysteresis loops. The focus is on the analysis of the number and size distribution of the magnetization avalanches. They are classified as nonspanning, one-dimensional-spanning, two-dimensional-spanning, or three-dimensional-spanning, depending on whether or not they span the whole lattice in the different space directions. Moreover, finite-size scaling analysis enables the identification of two different types of nonspanning avalanches (critical and noncritical) and two different types of three-dimensional-spanning avalanches (critical and subcritical), whose numbers increase with L as power laws with different exponents. We conclude by giving a scenario for avalanche behavior in the thermodynamic limit.
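Stated compactly (in notation of our choosing, not necessarily the paper's), the finite-size scaling observation is that each avalanche class has its own growth exponent:

```latex
% Number of avalanches of class \alpha in a lattice of linear size L
N_{\alpha}(L) \sim L^{\theta_{\alpha}},
% with a distinct exponent \theta_{\alpha} for each class
% (critical/noncritical nonspanning, critical/subcritical 3D-spanning).
```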
Abstract:
To study the behaviour of beam-to-column composite connections, more sophisticated finite element models are required, since the component model has some severe limitations. In this research a generic finite element model for composite beam-to-column joints with welded connections is developed using current state-of-the-art local modelling. By applying a mechanically consistent scaling method, it can provide the constitutive relationship for a plane rectangular macro element with beam-type boundaries. This macro element, which preserves local behaviour and allows the transfer of five independent states between local and global models, can then be implemented in high-accuracy frame analysis with the possibility of limit-state checks. So that the macro element for the scaling method can be used in a practical manner, a generic geometry program, proposed in this study as a new idea, is also developed for this finite element model. With generic programming, a set of global geometric variables can be input to generate a specific instance of the connection without much effort. The proposed finite element model generated by this generic programming is validated against test results from the University of Kaiserslautern. Finally, two illustrative examples of applying this macro element approach are presented. The first example demonstrates how to obtain the constitutive relationships of the macro element: under certain assumptions for a typical composite frame, the constitutive relationships can be represented by bilinear laws for the macro bending and shear states, which are then coupled by a two-dimensional surface law with yield and failure surfaces. The second example presents a scaling concept that combines sophisticated local models with a frame analysis using the macro element approach, as a practical application of this numerical model.
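As a toy illustration of the bilinear laws mentioned in the first example (our own sketch; the stiffness and yield values are hypothetical placeholders, not the thesis's):

```python
def bilinear_law(phi, k_elastic=1.0e5, k_hardening=5.0e3, m_yield=250.0):
    """Bilinear moment-rotation law: an elastic branch up to the yield
    moment, then a reduced hardening stiffness. Units and parameter
    values are hypothetical placeholders."""
    phi_yield = m_yield / k_elastic
    sign = 1.0 if phi >= 0 else -1.0
    if abs(phi) <= phi_yield:
        return k_elastic * phi
    return sign * (m_yield + k_hardening * (abs(phi) - phi_yield))

# Example: moments on the elastic and hardening branches.
print(bilinear_law(0.001))   # elastic branch: 100.0
print(bilinear_law(0.01))    # past yield: 250 + 5e3*(0.01-0.0025) = 287.5
```

In the macro element, one such law for bending and one for shear would then be coupled through the two-dimensional yield/failure surface mentioned in the abstract.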
Abstract:
This is the end of the scaling analysis we saw in class on Friday. In class, we managed to scale the mass conservation equation and the x-momentum equation, but we didn't finish scaling the z-momentum equation in order to arrive at the hydrostatic approximation.
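For completeness, here is a compact sketch of that missing step under standard assumptions (horizontal velocity scale U, horizontal length scale L, depth scale H, small aspect ratio H/L; the notation is ours, not necessarily the one used in class):

```latex
% z-momentum equation (inviscid form):  Dw/Dt = -(1/\rho)\,\partial p/\partial z - g
% Mass conservation fixes the vertical velocity scale W \sim U H / L,
% and with the advective time scale t \sim L/U the vertical acceleration scales as
\frac{Dw}{Dt} \sim \frac{U^{2} H}{L^{2}},
% which for H/L \ll 1 is negligible next to gravity. What remains of the
% z-momentum equation is the hydrostatic balance:
\frac{\partial p}{\partial z} = -\rho g
```

Balancing the vertical pressure gradient against gravity is exactly the hydrostatic approximation the note refers to.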
Abstract:
This project investigates the effectiveness and feasibility of scaling up an eco-bio-social approach to integrated, community-based dengue prevention, in comparison with existing insecticide-based and emerging biolarvicide-based programs, in an endemic setting in Machala, Ecuador.