963 results for numerical integration methods


Relevância: 30.00%

Resumo:

In this thesis, the magnetic field control of convection instabilities and of heat and mass transfer processes in magnetic fluids has been investigated by numerical simulations and theoretical considerations. Simulation models based on finite element and finite volume methods have been developed. In addition to the standard conservation equations, the magnetic field inside the simulation domain is calculated from Maxwell's equations, and the terms needed to account for the magnetic body force and magnetic dissipation have been added to the equations governing the fluid motion. Numerical simulations of magnetic fluid convection near the threshold supported experimental observations qualitatively. Near the onset of convection, the competing action of thermal and concentration density gradients leads to mostly spatiotemporally chaotic convection with oscillatory and travelling-wave regimes, previously observed in binary mixtures and nematic liquid crystals. In many applications of magnetic fluids, heat and mass transfer processes, including the effects of external magnetic fields, are of great importance. Beyond magnetic fluids, the concepts and simulation models used in this study may also be applied to studies of convective instabilities in ordinary fluids as well as in other binary mixtures and complex fluids.

Relevância: 30.00%

Resumo:

This thesis gives an overview of the use of level set methods in the field of image science. The related fast marching method is discussed for comparison, and the narrow band and particle level set methods are also introduced. The level set method is a numerical scheme for representing, deforming and recovering structures in arbitrary dimensions. It approximates and tracks moving interfaces, dynamic curves and surfaces. The level set method does not define how or why a boundary is advancing the way it is; it simply represents and tracks the boundary. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions, which gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power. In particular, the basic level set method carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the level set algorithm, such as the narrow band level set method, or with a programmable hardware implementation. A parallel approach can also be used in suitable applications. It is concluded that these methods can be used in quite a broad range of image applications, such as computer vision and graphics and scientific visualization, and also to solve problems in computational physics. Level set methods, and methods derived from and inspired by them, will remain at the front line of image processing in the future.
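As a rough illustration of the level-set idea summarized above (a hypothetical sketch, not code from the thesis): a circle in 2D is represented implicitly as the zero level set of a signed-distance function on a grid, and moving the interface outward at uniform speed reduces to a simple pointwise update of that function.

```python
import math

# Represent a circle of radius r (centred at the origin) implicitly as the
# zero level set of a signed-distance function phi on an n x n grid
# spanning [-extent, extent]^2.
def make_phi(n, extent, r):
    h = 2 * extent / (n - 1)
    return [[math.hypot(-extent + i * h, -extent + j * h) - r
             for j in range(n)] for i in range(n)]

# Move the interface outward with uniform normal speed F. For a
# signed-distance function |grad phi| = 1, so one Euler step of the level
# set equation phi_t + F|grad phi| = 0 reduces to phi <- phi - F*dt,
# which grows the circle's radius by F*dt.
def advect(phi, speed, dt):
    return [[v - speed * dt for v in row] for row in phi]

# Recover the interface location along the centre row by finding the sign
# change of phi and interpolating linearly between the two grid points.
def zero_radius(phi, n, extent):
    h = 2 * extent / (n - 1)
    row = phi[n // 2]
    for j in range(n // 2, n - 1):
        if row[j] <= 0 < row[j + 1]:
            t = -row[j] / (row[j + 1] - row[j])
            return (-extent + j * h) + t * h
    return None
```

Note how the boundary is never stored explicitly: it is recovered from the sign change of phi, which is exactly the N-dimensional-boundary-in-N+1-dimensions idea the abstract describes.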

Relevância: 30.00%

Resumo:

The main purpose of this thesis is to introduce a new lossless compression algorithm for multispectral images. The proposed algorithm is based on reducing the band ordering problem to the problem of finding a minimum spanning tree in a weighted directed graph, where the set of graph vertices corresponds to the multispectral image bands and the arc weights are computed using a newly invented adaptive linear prediction model. The adaptive prediction model is an extended unification of 2- and 4-neighbour pixel-context linear prediction schemes. The algorithm predicts each image band individually, using the optimal prediction scheme defined by the adaptive prediction model and the optimal predicting band suggested by the minimum spanning tree. Its efficiency has been compared with the best lossless compression algorithms for multispectral images; three recently introduced algorithms were considered. The numerical results produced by these algorithms allow us to conclude that the adaptive-prediction-based algorithm is the best for lossless compression of multispectral images. Real multispectral data captured from an airplane were used for the testing.
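The spanning-tree reduction mentioned above can be sketched as follows. This is a hypothetical illustration with invented, symmetric inter-band costs solved by Prim's algorithm; a true minimum spanning tree over a directed graph, as in the abstract, would instead need Edmonds' minimum-arborescence algorithm.

```python
# cost[i][j] stands for the residual cost of predicting band j from band i
# (invented numbers; symmetric costs assumed for simplicity).
INF = float("inf")

def prim_mst(cost):
    """Return (parent, total_weight) of a minimum spanning tree rooted at
    vertex 0; parent[v] is the band from which band v is predicted."""
    n = len(cost)
    in_tree = [False] * n
    best = [INF] * n   # cheapest known edge linking each vertex to the tree
    parent = [-1] * n
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v] and cost[u][v] < best[v]:
                best[v] = cost[u][v]
                parent[v] = u
    return parent, total

# Four bands; band 0 is coded directly, every other band is predicted
# from its parent in the tree.
cost = [[0, 2, 9, 4],
        [2, 0, 3, 8],
        [9, 3, 0, 1],
        [4, 8, 1, 0]]
parent, total = prim_mst(cost)
```

The resulting parent array is exactly a band ordering: each band names the single band it should be predicted from.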

Relevância: 30.00%

Resumo:

Combining economic calculation with life cycle assessment (LCA) has recently begun to attract interest across industries worldwide. Several LCA software packages include cost-accounting features, and individual projects have combined environmental and economic assessment methods. This project investigates the suitability of such combinations for the Finnish pulp and paper industry, as well as the addition of a cost-accounting feature to KCL's LCA software, KCL-ECO 3.0. All methods found during the study that combine LCA with economic calculation are presented in this work. Many of them use life cycle cost assessment (LCCA). In principle, the life cycle is defined differently in LCCA and in LCA, which creates challenges for combining the two methods. A suitable life cycle must be defined according to the goals of the calculation. The work presents a recommended method based on the specific characteristics of the Finnish pulp and paper industry. A basic requirement is compatibility with the life cycle conventionally used in paper LCA. The integration of the method into KCL-ECO 3.0 is discussed in detail.

Relevância: 30.00%

Resumo:

Background: Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. One of the most powerful classes of methods for integrating heterogeneous data types is kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, an appropriate kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables belonging to each dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify the samples with higher or lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge.
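The two basic steps named above can be sketched in a few lines. This is a simplified illustration under stated assumptions (linear kernels, equal weights, a plain weighted-sum combination); it is not the paper's own implementation and omits the variable-representation part.

```python
import numpy as np

def center_kernel(K):
    """Double-centre a kernel matrix, i.e. centre the data in feature space."""
    n = K.shape[0]
    one = np.ones((n, n)) / n
    return K - one @ K - K @ one + one @ K @ one

def kernel_pca_scores(kernels, weights, n_components=2):
    """Combine per-source kernels as a weighted sum, then project the
    samples onto the leading principal components of the combined
    feature space (kernel PCA)."""
    K = sum(w * Km for w, Km in zip(weights, kernels))
    Kc = center_kernel(K)
    vals, vecs = np.linalg.eigh(Kc)               # eigenvalues ascending
    idx = np.argsort(vals)[::-1][:n_components]   # keep the largest ones
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0.0))  # sample scores
```

Each data source contributes one kernel matrix over the same samples, so heterogeneous data (sequences, expression profiles, etc.) all enter the analysis through the same n-by-n interface.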

Relevância: 30.00%

Resumo:

BACKGROUND: The objectives of this study were to determine the proportions of psychiatric and substance use disorders among emergency departments' (EDs') frequent users compared to the mainstream ED population, to evaluate how effectively these disorders were diagnosed in both groups of patients by ED physicians, and to determine whether these disorders were predictive of frequent use of ED services. METHODS: This is a cross-sectional study with concurrent and retrospective data collection. Between November 2009 and June 2010, patients' mental health and substance use disorders were identified prospectively in face-to-face research interviews using a screening questionnaire (researcher screening). These data were compared to data obtained from a retrospective medical chart review performed in August 2011, searching for mental health and substance use disorders diagnosed by ED physicians and recorded in the patients' ED medical files (ED physician diagnosis). The sample consisted of 399 eligible adult patients (≥18 years old) admitted to the urban, general ED of a university hospital. Among them, 389 patients completed the researcher screening. Two hundred and twenty frequent users, defined by >4 ED visits in the previous twelve months, were included and compared to 169 patients with ≤4 ED visits in the same period (control group). RESULTS: Researcher screening showed that ED frequent users were more likely than members of the control group to have an anxiety disorder, a depressive disorder or post-traumatic stress disorder (PTSD), or to suffer from alcohol or illicit drug abuse/addiction. Reviewing the ED physician diagnoses, we found that the proportions of mental health and substance use disorders diagnosed by ED physicians were low both among ED frequent users and in the control group.
Using multiple logistic regression analyses to predict frequent ED use, we found that ED patients who screened positive for psychiatric disorders only, and those who screened positive for both psychiatric and substance use disorders, were more likely to be ED frequent users than ED patients with no disorder. CONCLUSIONS: This study found high proportions of screened mental health and/or substance use disorders in ED frequent users, but it showed low rates of detection of such disorders in day-to-day ED activities, which is a cause for concern. Active screening for these disorders in this population, followed by an intervention and/or a referral for treatment by a case-management team, may constitute a relevant intervention for integration into a general ED setting.

Relevância: 30.00%

Resumo:

This article aims to show how the use of visual methods in research helps to foster the active participation of people with severe mental illness (SMI). As an example, it draws on a qualitative case study incorporating three activities with a visual component (the "river of life" drawing, photographs, and a drawing projecting the future) to encourage five people with SMI to develop a narrated reflection on their experiences. The use of photographs and drawings in this study shows that these strategies are valid for gaining access, to the extent that the participants wished, to personal spheres of life within life trajectories shaped by mental illness.

Relevância: 30.00%

Resumo:

Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. To improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics.
Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple-output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
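As a minimal sketch of the regularized least-squares idea the thesis builds on: fit a ridge-regularized linear scorer and rank items by predicted score. This is an illustrative simplification (closed-form ridge regression on invented data), not the thesis's RankRLS algorithm itself.

```python
import numpy as np

def rls_fit(X, y, lam):
    """Regularized least squares: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def rank_items(X, w):
    """Return item indices ordered from highest to lowest predicted score."""
    return [int(i) for i in np.argsort(-(X @ w))]

# Three items with two features each; the targets encode a preference for
# item 2 over item 1 over item 0.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = rls_fit(X, y, lam=1e-6)
order = rank_items(X, w)
```

The regularization term lam*I keeps the solve well-conditioned; the thesis's ranking variants replace the plain squared error on y with a loss defined over pairwise score differences.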

Relevância: 30.00%

Resumo:

This paper presents a new numerical program able to model syntectonic sedimentation. The new model combines a discrete element model of the tectonic deformation of a sedimentary cover and a process-based model of sedimentation in a single framework. The integration of these two methods allows us to include the simulation of both sedimentation and deformation processes in a single, more effective model. The paper briefly describes the antecedents of the program (Simsafadim-Clastic and a discrete element model) in order to introduce the methodology used to merge both programs into the new code. To illustrate the operation and application of the program, the evolution of syntectonic geometries is analysed in an extensional environment and also in association with thrust fault propagation. Using the new code, much more complex and realistic depositional structures can be simulated, together with a more complex analysis of the evolution of deformation within the sedimentary cover, which is seen to be affected by the presence of the new syntectonic sediments.

Relevância: 30.00%

Resumo:

The general striving to bring down the number of municipal landfills and to increase the reuse and recycling of waste-derived materials across the EU supports the debates concerning the feasibility and rationality of waste management systems. A substantial decrease in the volume and mass of landfill-disposed waste flows can be achieved by directing suitable waste fractions to energy recovery. Global fossil energy supplies are becoming ever more valuable and expensive energy sources for mankind, and efforts have been made to save fossil fuels. Waste-derived fuels offer one potential partial solution to two different problems. First, waste that cannot feasibly be re-used or recycled is utilized in the energy conversion process in accordance with the EU's Waste Hierarchy. Second, fossil fuels can be saved for purposes other than energy, mainly as transport fuels. This thesis presents the principles of assessing the most sustainable system solution for an integrated municipal waste management and energy system. The assessment process includes:
· formation of a SISMan (Simple Integrated System Management) model of an integrated system, including mass, energy and financial flows; and
· formation of a MEFLO (Mass, Energy, Financial, Legislational, Other decision-support data) decision matrix according to the selected decision criteria, including essential and optional criteria.
The methods are described, and theoretical examples of their utilization are presented in the thesis. The assessment process involves the selection of different system alternatives (process alternatives for the treatment of different waste fractions) and comparison between the alternatives. The first of the two novel contributions of the presented methods is the perspective selected for the formation of the SISMan model. Normally, waste management and energy systems are operated separately, according to the targets and principles set for each system. In this thesis, the waste management and energy supply systems are considered as one larger integrated system with the primary target of serving the customers, i.e. citizens, as efficiently as possible in the spirit of sustainable development, including the following requirements:
· reasonable overall costs, including waste management costs and energy costs;
· minimum environmental burdens caused by the integrated waste management and energy system, taking the requirement above into account; and
· social acceptance of the selected waste treatment and energy production methods.
The integrated waste management and energy system is described by forming a SISMan model comprising the three different flows of the system: energy, mass and financial flows. By defining these three types of flows for an integrated system, the factor results needed in the decision-making process for selecting waste treatment processes for different waste fractions can be calculated. The model and its results form a transparent description of the integrated system under discussion. The MEFLO decision matrix is formed from the results of the SISMan model, combined with additional data, including e.g. environmental restrictions and regional aspects. System alternatives which do not meet the requirements set by legislation can be deleted from the comparisons before any closer numerical consideration. The second novel contribution of this thesis is the three-level ranking method for combining the factor results of the MEFLO decision matrix. As a result of the MEFLO decision matrix, a transparent ranking of the different system alternatives, including the selection of treatment processes for different waste fractions, is achieved. SISMan and MEFLO are methods meant to be utilized in municipal decision-making processes concerning waste management and energy supply as simple, transparent and easy-to-understand tools.
The methods can be utilized in the assessment of existing systems, and particularly in the planning processes of future regional integrated systems. The principles of SISMan and MEFLO can also be utilized in other environments where synergies from integrating two (or more) systems can be obtained. The SISMan flow model and the MEFLO decision matrix can be formed with or without any applicable commercial or free-of-charge tool/software. SISMan and MEFLO are not bound to any libraries or databases containing process information, such as the emission data libraries utilized in life cycle assessments.
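The decision-matrix workflow described above (delete legally infeasible alternatives first, then rank the rest numerically) can be sketched in a few lines. Everything here is invented for illustration: the alternative names, the criteria, the weights and the numbers; the actual MEFLO matrix uses a three-level ranking method rather than this simple weighted sum.

```python
# Rank system alternatives: hard legislative constraints filter first,
# then a weighted sum of normalized criterion values orders the rest
# (lower is better for both hypothetical criteria used here).
def rank_alternatives(alternatives, weights):
    feasible = [a for a in alternatives if a["legal"]]
    return sorted(feasible,
                  key=lambda a: sum(w * a[k] for k, w in weights.items()))

alternatives = [
    {"name": "incineration", "cost": 0.6, "emissions": 0.4, "legal": True},
    {"name": "landfill",     "cost": 0.3, "emissions": 0.9, "legal": False},
    {"name": "co-firing",    "cost": 0.4, "emissions": 0.5, "legal": True},
]
weights = {"cost": 0.5, "emissions": 0.5}
ranking = [a["name"] for a in rank_alternatives(alternatives, weights)]
```

Filtering before scoring matches the text: alternatives that fail legislation never enter the numerical comparison, so their criterion values need not even be computed.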

Relevância: 30.00%

Resumo:

Organizing is a general problem for global firms. Firms seek a balance between responsiveness at the local level and efficiency through worldwide integration. Here, supply management is the focal point where external commercial supply market relations are connected with the firm's internal functions, and effective supplier relationship management (SRM) is essential. Global supply integration processes create new challenges for supply management professionals, and new capabilities are required. Previous research has developed several models and tools for managers to manage and categorize different supplier relationship types, but the role of the firm's internal capability of managing supplier relationships in their global integration has been clearly neglected. Hence, the main objective of this dissertation is to clarify how the capability of SRM may influence the firm's global competitiveness. This objective is divided into four research questions, aiming to identify the elements of SRM capability, the internal factors of integration, the effect of SRM capability on strategy, and how SRM capability is linked with global integration. The dissertation has two parts. The first part presents the theoretical approaches and practical implications from previous research and draws a synthesis from them. The second part comprises four empirical research papers addressing the research questions. Both qualitative and quantitative methods are utilized. The main contribution of this dissertation is that it aggregates the theoretical and conceptual perspectives applied to SRM research. Furthermore, given the lack of valid scales for measuring capability, this study aimed to provide a foundation for an SRM capability scale by showing that the construct of SRM capability is formed of five separate elements. Moreover, SRM capability was found to be the enabler of efforts toward value chain integration.
Finally, it was found that the effect of capability on global competitiveness is twofold: it reduces conflicts between responsiveness and integration, and it creates efficiency. Thus, by identifying and developing the firm's capabilities it is possible to improve performance and, hence, global competitiveness.

Relevância: 30.00%

Resumo:

This article explores the possibilities offered by visual methods in the move towards inclusive research, reviewing some methodological implications of such research and reflecting on the potential of visual methods to meet these methodological requirements. A study into the impact of work on social inclusion and the social relationships of people suffering from severe mental illness (SMI) serves to illustrate the use of visual methods, such as photo elicitation and graphic elicitation, in the context of in-depth interviews, with the aim of improving the target group's participation in research, participation being understood as one of the basic elements of inclusive approaches. On the basis of this study, we reflect on the potential of visual methods to improve the inclusive approach to research and conclude that these methods are open and flexible in giving participants a voice, allowing people with SMI to express their needs, and therefore add value to this approach.

Relevância: 30.00%

Resumo:

Agile coaching of a project team is one way to support learning of agile methods. The objective of this thesis is to present an agile coaching plan and to follow how complying with the plan affects the project teams. In addition, how well the agile methods work in the projects is followed. Two projects are used to support the research. From the thesis point of view, the task in the first project is to coach the project team and two new coaches. The task in the second project is also to coach the project team, but this time with one of the new coaches acting as the coach. The projects use the agile methods Scrum and Extreme Programming. In the latter, test-driven development, continuous integration and pair programming are examined more closely. The results of the work are based on observations from the projects and the analysis derived from those observations. The results are divided into the effects of the coaching and the functionality of the agile methods in the projects. Because of the small sample, the results are indicative. The presented plan for coaching agile methods needs further development, but the results on the functionality of the agile methods are encouraging.

Relevância: 30.00%

Resumo:

Nowadays, software testing and quality assurance have great value in the software development process. Software testing is not a single concrete discipline; it is the process of validation and verification that starts from the idea of the future product and finishes at the end of the product's maintenance. The importance of software testing methods and tools that can be applied in different testing phases is highly stressed in industry. The initial objectives of this thesis were to provide a sufficient literature review of the different testing phases and, for each phase, to define a method that can be used effectively to improve software quality. The software testing phases chosen for study are: unit testing, integration testing, functional testing, system testing, acceptance testing and usability testing. The research showed that many software testing methods can be applied at the different phases, and in most cases the choice of method should depend on the software type and its specification. For each phase, the thesis identifies a problem characteristic of that phase, suggests a method that can help eliminate it, and describes that method in detail.
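As a tiny illustration of the first phase listed above, unit testing: a single unit under test together with its test case, written with Python's standard unittest module. The function and its tests are invented examples, not taken from the thesis.

```python
import unittest

def parse_version(text):
    """Parse a 'major.minor' version string into a tuple of ints,
    rejecting anything else with a ValueError."""
    major, sep, minor = text.partition(".")
    if not sep or not major.isdigit() or not minor.isdigit():
        raise ValueError(f"invalid version string: {text!r}")
    return int(major), int(minor)

class TestParseVersion(unittest.TestCase):
    # Happy path: a well-formed version string.
    def test_valid_input(self):
        self.assertEqual(parse_version("3.11"), (3, 11))

    # Error path: malformed input must raise, not return garbage.
    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            parse_version("not-a-version")
```

A unit test like this exercises one function in isolation; the later phases in the list (integration, system, acceptance testing) check progressively larger assemblies of such units.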

Relevância: 30.00%

Resumo:

Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike "traditional" biology, focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is "foreign" to "traditional" biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton.
The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods, applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as of complex systems in general. The full range of developed and applied modelling techniques, together with the model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.