17 results for pacs: engineering mathematics and mathematical techniques
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, the focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize the application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. The aim of using it in instruction is to address algorithms and their implementation in a way that allows the focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
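To make the invariant-based idea concrete, here is a minimal, hypothetical Python sketch (not taken from the thesis): the loop invariant is written down first and checked with assertions, so the correctness argument is explicit in the code rather than left to testing.

```python
# Hypothetical illustration of invariant-based reasoning in code:
# the invariant is stated before the loop and asserted on every iteration.

def sum_of_squares(n: int) -> int:
    """Return 0^2 + 1^2 + ... + (n-1)^2, with the loop invariant made explicit."""
    assert n >= 0  # precondition
    i, total = 0, 0
    # Invariant: total == sum of k^2 for k in range(i), and 0 <= i <= n
    while i < n:
        assert total == sum(k * k for k in range(i)) and 0 <= i <= n
        total += i * i
        i += 1
    assert total == sum(k * k for k in range(n))  # postcondition
    return total

if __name__ == "__main__":
    print(sum_of_squares(5))  # 30
```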
Abstract:
This paper analyzes the possibilities of integrating cost information and engineering design. Special emphasis is put on finding the potential of using the activity-based costing (ABC) method. Today, the problem of cost estimation in engineering design is that there are two separate extremes of knowledge. At one extreme, the engineers model the technical parameters behind costs in great detail but do not get appropriate cost information into their elegant models. At the other extreme, the accounting professionals are stuck with traditional cost accounting methods driven by the procedures and cycles of financial accounting. Therefore, in many cases, the cost information needs of various decision-making groups, for example design engineers, are not served satisfactorily. This paper studies whether the activity-based costing (ABC) method could offer a compromise between the two extremes. Recognizing activities and activity chains, as well as activity and cost drivers, could be especially beneficial for design engineers. Also, knowing the accurate and reliable product costs of existing products helps when doing variant design. However, ABC is not at its best if the cost system becomes too complicated. This is why a comprehensive ABC cost information system with detailed cost information for the use of design engineers should be examined critically. ABC is at its best when considering such issues as which activities drive costs, the cost of product complexity, allocating indirect costs to products, the relationships between processes and costs, and the cost of excess capacity.
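As a rough illustration of the ABC mechanics the paper discusses, the following Python sketch (all numbers invented) pools indirect costs per activity and allocates them to a product through activity drivers instead of a single volume-based overhead rate:

```python
# Minimal activity-based costing illustration with made-up numbers.

activity_cost_pools = {"machine setup": 20_000.0, "quality inspection": 8_000.0}
driver_volumes = {"machine setup": 100, "quality inspection": 400}   # setups, inspections
driver_rates = {a: activity_cost_pools[a] / driver_volumes[a] for a in activity_cost_pools}

# Driver consumption of one product variant during the period.
product_usage = {"machine setup": 12, "quality inspection": 30}

indirect_cost = sum(driver_rates[a] * product_usage[a] for a in product_usage)
print(f"Allocated indirect cost: {indirect_cost:.2f}")  # 12*200 + 30*20 = 3000.00
```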
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction of nitrogen oxides (NOx) with ammonia (NH3) is motivated by the fact that favorable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In normal operation the low exothermicity of the selective catalytic reduction (SCR) reaction (usually carried out in the range of 280-350°C) is not enough to sustain the chemical reaction by itself. Normal operation therefore usually requires a supply of supplementary heat, increasing the overall process operation cost. The main advantage obtainable through forced unsteady-state operation of exothermic reactions is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation exploits the thermal storage capacity of the catalytic bed: the bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour even when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR) - a forced unsteady-state reactor - has the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Besides its advantages, however, the RFR suffers from the 'wash-out' phenomenon: emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration that is not affected by these uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated. Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash-out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow an ignited state to be reached and maintained. Even so, a proper study of the complex behavior of the RN may give the information necessary to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interactions between heat and mass transport phenomena. Such interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used.
Paying attention to the above-mentioned aspects is important when high activity even at low feeding temperatures and low emissions of unconverted reactants are the main operational concerns. Also, the prediction of the reactor pseudo-steady-state or steady-state performance (regarding conversion, selectivity and thermal behavior) and the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on the overall process performance and a precise evaluation of the operating parameter range for which a sustained dynamic behavior is obtained. An a priori estimation of the system parameters results in a reduction of the computational effort. Usually the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices capable of maintaining auto-thermal behavior in the case of low-exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the presentation of the general problems related to the effect of noxious emissions on the environment, the analysis of suitable catalyst types for the process, the mathematical approach for modeling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information about forced unsteady-state reactor design, operation, important system parameters and their values, the mathematical description, the mathematical methods for solving systems of partial differential equations, and other specific aspects, in a fast and easy way, a case-based reasoning (CBR) approach was used. This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, dropping the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operation regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices mentioned above was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance.
The non-isothermal system approach was investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and the possibility of achieving sustained auto-thermal behavior for the low-exothermic SCR of NOx with ammonia with low-temperature gas feeding. Besides the influence of the thermal effect, the influence of the principal operating parameters, such as the switching time, the inlet flow rate and the initial catalyst temperature, was stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the process constraints to be fulfilled. The level of conversion achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as a much more suitable device for SCR of NOx with ammonia, both in usual operation and in the perspective of control strategy implementation. Simplified theoretical models were also proposed to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics to perspectives that have not been analyzed yet. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
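The flow-reversal mechanism described above can be illustrated with a deliberately crude toy model. The following Python sketch (not the thesis model; all values are invented) advects a one-dimensional bed temperature profile with a simple upwind scheme, adds a stand-in exothermic source where the bed is ignited, and reverses the flow direction periodically so that the stored heat wave stays trapped inside the bed instead of being washed out at one end:

```python
# Purely illustrative toy model of the reverse-flow reactor principle.

import numpy as np

n_cells, alpha = 100, 0.2        # bed discretization; advected fraction per step
T = np.full(n_cells, 280.0)      # initial bed temperature, deg C
T[40:60] = 350.0                 # pre-heated ("ignited") zone
T_feed, T_ign = 50.0, 300.0      # cold feed gas; crude ignition threshold
switch_time, direction = 250, +1

for step in range(3000):
    if step % switch_time == 0:
        direction = -direction               # flow reversal
    if direction > 0:                        # advect left -> right
        T[1:] += alpha * (T[:-1] - T[1:])
        T[0] += alpha * (T_feed - T[0])
    else:                                    # advect right -> left
        T[:-1] += alpha * (T[1:] - T[:-1])
        T[-1] += alpha * (T_feed - T[-1])
    T += 0.3 * (T > T_ign)                   # stand-in for reaction exothermicity
    T -= 0.01 * (T - 280.0).clip(min=0)      # stand-in for heat losses

print(f"Peak bed temperature after cycling: {T.max():.1f} deg C")
```

Without the periodic reversal the hot zone is simply convected out of the bed by the cold feed; with it, the heat wave oscillates around the bed centre, which is the qualitative behaviour the RFR exploits.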
Abstract:
Summary: The selenium content of Finnish oats in 1997-1999
Abstract:
The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction of spectral images have become an important area of research. The notion of image quality is often thought to comprise two distinct areas – image quality itself and image fidelity – both dealing with similar questions: image quality being the degree of excellence of the image, and image fidelity the measure of the match of the image under study to the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. There are very few works dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. The 3D-SSIM is an extension of the traditional gray-scale SSIM measure, developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three quality attributes: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable differences). Both the image fidelity measures and the image quality model have proven effective in the respective experiments.
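As an illustration of the kernel-based fidelity idea, the following Python sketch compares two pixel spectra with the Gaussian RBF and polynomial kernels named above; the parameter values are arbitrary and the exact formulation used in the thesis may differ:

```python
# Kernel similarity between a reference spectrum and its reproduction.

import numpy as np

def rbf_kernel(x: np.ndarray, y: np.ndarray, gamma: float = 0.5) -> float:
    """Gaussian RBF similarity: 1.0 for identical spectra, -> 0 as they diverge."""
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def polynomial_kernel(x: np.ndarray, y: np.ndarray, degree: int = 2, c: float = 1.0) -> float:
    return float((np.dot(x, y) + c) ** degree)

# Two 31-band reflectance spectra for the same pixel: reference vs. reproduction.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1.0, 31)
reproduction = reference + rng.normal(0.0, 0.02, 31)   # slightly distorted copy

print(f"RBF similarity:        {rbf_kernel(reference, reproduction):.4f}")
print(f"Polynomial similarity: {polynomial_kernel(reference, reproduction):.4f}")
```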
Abstract:
The objective of this Master's Thesis was to determine VOC emissions from veneer drying in softwood plywood manufacturing. Emissions from the plywood industry have become an important issue because of tightening regulations worldwide. This Thesis investigates the quality and quantity of the VOCs released in softwood veneer drying. One of the main objectives was to find suitable cleaning techniques for softwood VOC emissions. The introductory part presents veneer drying machines and the mechanical and chemical properties of wood; VOC control techniques and the applicable VOC limits are also introduced there. Plywood mills have previously paid little attention to VOC emissions; nowadays, however, mills worldwide must consider emission reduction. This Thesis includes the measurement of emissions from a softwood veneer dryer and the analysis and review of the measured results. Different air conditions inside the dryer were considered when planning the measurements, and the measured emissions were compared to the established regulatory limits. The outcome of this Thesis is a characterization of softwood veneer dryer emissions under different air conditions. Emission control techniques suitable for softwood veneer dryer emissions were also surveyed as a basis for further specific research.
Abstract:
The use of intensity-modulated radiotherapy (IMRT) has increased extensively in modern radiotherapy (RT) over the past two decades. Radiation dose distributions can be delivered with higher conformality with IMRT than with conventional 3D-conformal radiotherapy (3D-CRT). Higher conformality and target coverage increase the probability of tumour control and decrease normal tissue complications. The primary goal of this work is to improve and evaluate the accuracy, efficiency and delivery techniques of RT treatments using IMRT. This study evaluated the dosimetric limitations and possibilities of IMRT in small volumes (treatments of head-and-neck, prostate and lung cancer) and large volumes (primitive neuroectodermal tumours). The dose coverage of target volumes and the sparing of critical organs were increased with IMRT when compared to 3D-CRT. The developed split-field IMRT technique was found to be a safe and accurate method for craniospinal irradiation. By using IMRT for simultaneous integrated boosting of biologically defined target volumes in localized prostate cancer, high doses were achievable with only a small increase in treatment complexity. Biological plan optimization increased the probability of uncomplicated control on average by 28% when compared to standard IMRT delivery. Unfortunately, IMRT also carries some drawbacks. In IMRT the beam modulation is realized by splitting a large radiation field into small apertures. The smaller the beam apertures, the larger the rebuild-up and rebuild-down effects at tissue interfaces. The limitations of using IMRT with small apertures in the treatment of small lung tumours were investigated with dosimetric film measurements. The results confirmed that the peripheral doses of small lung tumours decreased as the effective field size was decreased. The studied calculation algorithms were not able to model the dose deficiency of the tumours accurately. The use of small sliding-window apertures of 2 mm and 4 mm decreased the tumour peripheral dose by 6% when compared to a 3D-CRT treatment plan. A direct aperture based optimization (DABO) technique was examined as a solution for decreasing treatment complexity. The DABO IMRT technique was able to achieve treatment plans equivalent to those of the conventional fluence-based IMRT optimization techniques in concave head-and-neck target volumes. With DABO the effective field sizes were increased and the number of MUs was reduced by a factor of two. The optimality of a treatment plan and the therapeutic ratio can be further enhanced by using dose painting based on regional radiosensitivities imaged with functional imaging methods.
Abstract:
The aim of this master's thesis is to study how an Agile method (Scrum) and open source software are utilized to produce software for a flagship product in a complex production environment. The empirical case and the artefacts used are taken from the Nokia MeeGo N9 product program and from the related software program, called Harmattan. The single research case is analysed using a qualitative method. Grounded Theory principles are utilized, first, to extract all the related concepts from the artefacts. Second, these concepts are analysed and finally categorized into a core category and six supporting categories. The result is formulated as a description of software practices workable in circumstances where the accountable software development teams and their context accept an open source software nature as part of the business vision and the whole organization supports the Agile methods.
Abstract:
Graphene is a material with extraordinary properties. Its mechanical and electrical properties are unparalleled, but the difficulties in its production are hindering its breakthrough in applications. Graphene is a two-dimensional material made entirely of carbon atoms, and it is only a single atom thick. In this work, the properties of graphene and graphene-based materials are described, together with their common preparation techniques and the related challenges. This Thesis concentrates on the top-down techniques, in which natural graphite is used as a precursor for graphene production. Graphite consists of graphene sheets stacked tightly together. In the top-down techniques, various physical or chemical routes are used to overcome the forces keeping the graphene sheets together, and many of them are described in the Thesis. The most common chemical method is the oxidation of graphite with strong oxidants, which creates water-soluble graphene oxide. The properties of graphene oxide differ significantly from those of pristine graphene and, therefore, graphene oxide is often reduced to form materials collectively known as reduced graphene oxide. In the experimental part, the main focus is on the chemical and electrochemical reduction of graphene oxide. A novel chemical route using vanadium is introduced and compared to other common chemical graphene oxide reduction methods. A strong emphasis is placed on the electrochemical reduction of graphene oxide in various solvents. Raman and infrared spectroscopy are both used in in situ spectroelectrochemistry to closely monitor the spectral changes during the reduction process. These in situ techniques allow precise control over the reduction process, and even small changes in the material can be detected. Graphene and few-layer graphene were also prepared using physical force to separate these materials from graphite. Special adsorbate molecules in aqueous solutions, together with sonic treatment, produce stable dispersions of graphene and few-layer graphene sheets in water. This mechanical exfoliation method damages the graphene sheets considerably less than the chemical methods, although it suffers from a lower yield.
Abstract:
This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, the use of collect-and-place type gantry machines is discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one PCB - one machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests verify that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications in which the developed and used solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough to be used in practical PCB assembly production and are readily applicable in the PCB manufacturing industry.
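As a flavour of the kind of constructive heuristic used for such subproblems, the following Python sketch (a generic nearest-neighbour greedy, not the thesis's actual algorithm) orders component placements so that the gantry head always moves to the closest remaining component:

```python
# Generic greedy sequencing sketch for a placement head.

import math

def greedy_placement_order(positions: list[tuple[float, float]]) -> list[int]:
    """Return an order of component indices using nearest-neighbour greed."""
    remaining = set(range(len(positions)))
    order, current = [], (0.0, 0.0)          # head starts at the machine origin
    while remaining:
        nxt = min(remaining, key=lambda i: math.dist(current, positions[i]))
        order.append(nxt)
        remaining.remove(nxt)
        current = positions[nxt]
    return order

components = [(30.0, 5.0), (2.0, 3.0), (28.0, 6.0), (5.0, 1.0)]
print(greedy_placement_order(components))   # [1, 3, 2, 0]
```

Such a constructive heuristic gives a feasible sequence quickly; in practice it would be combined with local search or, for small instances, replaced by an exact method such as dynamic programming.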
Abstract:
The aim of this Master's Thesis is to develop project logistics functions in large-scale engineering, procurement and construction (EPC) projects. The background of the research topic is composed of two separate subjects: the OPAL Program and a case study of an actual EPC project. The purpose is to examine the Project Logistics process in accordance with the OPAL Program as well as the logistics process in the focus EPC project. Both entities are researched using the case study research methodology. The logistics process of the focus EPC project is described and presented, and logistics-related findings and observations are introduced. Significant findings and observations concern logistics costs as well as shipment volume estimations in the early phase of the focus EPC project. A notable finding is also that transporting goods as fully assembled as possible caused expensive oversized cargo deliveries. From the findings and observations of the focus EPC project it can be derived that logistics has to be involved in the early sales phase in order to obtain more accurate logistics cost estimations for project deliveries. It is also noted that, in order to obtain savings in logistics costs, oversized deliveries must be avoided.
Abstract:
This thesis develops a method for identifying students struggling in their mathematical studies at an early stage. It helps in directing support to the students needing and benefiting from it the most. Thus, the frustration felt by weaker students may decrease, and hopefully also the number of dropouts among potential engineering students. The research concentrates on a combination of personality and intelligence aspects. Personality aspects gave information on conation and motivation for learning; this part was studied from the perspective of motivation and self-regulation. Intelligence aspects gave information on declarative and procedural knowledge: what had been taught and what was actually mastered. Students answered surveys on motivation and self-regulation in 2010 and 2011. Based on their answers, background information, results in the proficiency test, and grades in the first mathematics course, profiles describing the students were formed. In the following years, the profiles were updated with the new information obtained each year. The profiles used to identify struggling students combine personality (motivation, self-regulation, and self-efficacy) and intelligence (declarative and procedural knowledge) aspects at the beginning of their studies. Identifying students in need of extra support is a good start, but methods for providing that support must also be found. This thesis also studies how this support could be taken into account in course arrangements; the methods used include, for example, languaging, scaffolding, and continuous feedback. The analysis revealed that allocating resources based on the predicted progress does not increase costs or lower the results of better students. Instead, it helps weaker students obtain passing grades.
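A hypothetical sketch of how such a profile might flag a student for support is given below; the thresholds, feature names and the decision rule are invented for illustration, as the abstract does not specify them:

```python
# Invented example of combining personality and intelligence aspects
# into a profile that flags students for early support.

from dataclasses import dataclass

@dataclass
class StudentProfile:
    motivation: float        # survey score, 0..5
    self_regulation: float   # survey score, 0..5
    proficiency: float       # proficiency test, 0..100
    first_course_grade: int  # 0..5, where 0 = fail

def needs_support(p: StudentProfile) -> bool:
    """Flag a student if both knowledge and conation indicators are weak."""
    weak_knowledge = p.proficiency < 40 or p.first_course_grade <= 1
    weak_conation = p.motivation < 2.5 and p.self_regulation < 2.5
    return weak_knowledge and weak_conation

print(needs_support(StudentProfile(2.0, 2.2, 35.0, 1)))  # True
```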
Abstract:
Post-testicular sperm maturation occurs in the epididymis. The ion concentrations and proteins secreted into the epididymal lumen, together with testicular factors, are believed to be responsible for the maturation of spermatozoa. Disrupting the maturation of spermatozoa in the epididymis provides a promising strategy for generating a male contraceptive. However, little is known about the proteins involved. For drug development, it is also essential to have tools to study the function of these proteins in vitro. One approach for screening novel targets is to study the secretory products of the epididymis or the G protein-coupled receptors (GPCRs) that are involved in the maturation process of the spermatozoa. The modified Ca2+ imaging technique used to monitor release from PC12 pheochromocytoma cells can also be applied to monitor secretory products involved in the maturational processes of spermatozoa. PC12 pheochromocytoma cells were chosen for the evaluation of this technique as they release catecholamines from their cell body, thus behaving like endocrine secretory cells. The results of the study demonstrate that depolarisation of nerve growth factor-differentiated PC12 cells releases factors which activate nearby, randomly distributed HEL erythroleukemia cells. Thus, during the release process, the ligands reach concentrations high enough to activate receptors even in cells at some distance from the release site. This suggests that communication between randomly dispersed cells is possible even if the actual quantities of transmitter released are extremely small. The development of a novel method to analyse GPCR-dependent Ca2+ signalling in living slices of mouse caput epididymis provides an additional tool for screening for drug targets. With this technique it was possible to analyse functional GPCRs in the epithelial cells of the ductus epididymis. The results revealed that both P2X- and P2Y-type purinergic receptors are responsible for the rapid and transient Ca2+ signal detected in the epithelial cells of caput epididymides. Immunohistochemical and reverse transcriptase-polymerase chain reaction (RT-PCR) analyses showed the expression of at least the P2X1, P2X2, P2X4, P2X7, P2Y1 and P2Y2 receptors in the epididymis. Searching for epididymis-specific promoters for transgene delivery into the epididymis is of key importance for the development of specific models for drug development. We used EGFP as the reporter gene to identify proper promoters for delivering transgenes into the epithelial cells of the mouse epididymis in vivo. Our results revealed that the 5.0 kb murine Glutathione peroxidase 5 (GPX5) promoter can be used to target transgene expression into the epididymis, while the 3.8 kb Cysteine-rich secretory protein-1 (CRISP-1) promoter can be used to target transgene expression into the testis. Although the visualisation of EGFP in living cells in culture usually poses few problems, the detection of EGFP in tissue sections can be more difficult because soluble EGFP molecules can be lost if the cell membrane is damaged by freezing, sectioning, or permeabilisation. Furthermore, the fluorescence of EGFP is dependent on its conformation; therefore, fixation protocols that immobilise EGFP may also destroy its usefulness as a fluorescent reporter. We therefore developed novel tissue preparation and preservation techniques for EGFP.
In addition, fluorescence spectrophotometry with epididymal epithelial cells in suspension revealed the expression of functional purinergic, adrenergic, cholinergic and bradykinin receptors in these cell lines (mE-Cap27 and mE-Cap28). In conclusion, we developed new tools for studying the role of the epididymis in sperm maturation. We developed a new technique to analyse GPCR-dependent Ca2+ signalling in living slices of mouse caput epididymis. In addition, we improved the method of detecting reporter gene expression. Furthermore, we characterised two epididymis-specific gene promoters, analysed the expression of GPCRs in epididymal epithelial cells and developed a novel technique for the measurement of secretion from cells.
Abstract:
Statistical analyses of measurements that can be described by statistical models are essential in astronomy and in scientific inquiry in general. The sensitivity of such analyses and modelling approaches, and of the consequent predictions, is sometimes highly dependent on the exact techniques applied, and improvements therein can result in significantly better understanding of the observed system of interest. In particular, optimising the sensitivity of statistical techniques in detecting the faint signatures of low-mass planets orbiting nearby stars is, together with improvements in instrumentation, essential for estimating the properties of the population of such planets, and in the race to detect Earth analogs, i.e. planets that could support liquid water and, perhaps, life on their surfaces. We review the developments in Bayesian statistical techniques applicable to the detection of planets orbiting nearby stars and to astronomical data analysis problems in general. We also discuss these techniques and demonstrate their usefulness by using various examples and detailed descriptions of the respective mathematics involved. We demonstrate the practical aspects of Bayesian statistical techniques by describing several algorithms and numerical techniques, as well as theoretical constructions, for the estimation of model parameters and for hypothesis testing. We also apply these algorithms to Doppler measurements of nearby stars to show how they can be used in practice to obtain as much information from the noisy data as possible. Bayesian statistical techniques are powerful tools for analysing and interpreting noisy data and should be preferred in practice whenever computational limitations are not too restrictive.
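As a minimal example of the kind of Bayesian parameter estimation described, the following Python sketch fits a circular-orbit radial-velocity model v(t) = K sin(2πt/P + φ) to synthetic Doppler data with a simple Metropolis random walk; the priors, proposal step sizes and diagnostics are deliberately simplified, and a real analysis would be far more careful:

```python
# Toy Metropolis sampling of a circular-orbit radial-velocity model.

import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 60))                       # observation epochs, days
K_true, P_true, phi_true, sigma = 3.0, 12.0, 0.7, 1.0      # m/s, days, rad, m/s
v = K_true * np.sin(2 * np.pi * t / P_true + phi_true) + rng.normal(0, sigma, t.size)

def log_posterior(theta):
    K, P, phi = theta
    if not (0 < K < 50 and 1 < P < 100):                   # flat priors with bounds
        return -np.inf
    model = K * np.sin(2 * np.pi * t / P + phi)
    return -0.5 * np.sum((v - model) ** 2) / sigma**2      # Gaussian likelihood

theta = np.array([1.0, 11.5, 0.0])                         # starting guess
logp, samples = log_posterior(theta), []
for _ in range(20000):                                     # Metropolis random walk
    prop = theta + rng.normal(0, [0.1, 0.02, 0.1])
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:           # accept/reject step
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

K_est, P_est, _ = np.median(samples[5000:], axis=0)        # discard burn-in
print(f"K ~ {K_est:.2f} m/s, P ~ {P_est:.2f} d (true: 3.00, 12.00)")
```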