911 results for ENGINEERING ANALYSIS
Abstract:
Electricity distribution network operation (NO) models are being challenged, as they are expected to continue to undergo changes during the coming decades in the fairly developed and regulated Nordic electricity market. Network asset managers must adapt to competitive techno-economic business models for the operation of increasingly intelligent distribution networks. Factors driving the change towards new business models within network operation include increased investments in distributed automation (DA), regulatory frameworks for annual profit limits and quality through outage cost, increasing end-customer demands, climatic changes and the increasing use of data system tools such as the Distribution Management System (DMS). The doctoral thesis addresses the questions of a) whether conditions and qualifications exist for competitive markets within electricity distribution network operation and b) if so, which limitations and business mechanisms are required. The thesis aims to provide an analytical business framework, primarily for electric utilities, for evaluating and developing dedicated network operation models to meet future market dynamics within network operation. In the thesis, the generic build-up of a business model is addressed through the strategic business hierarchy levels of mission, vision and strategy, which define the strategic direction of the business, followed by the planning, management and process execution levels of enterprise strategy execution. Research questions within electricity distribution network operation are addressed at the specified hierarchy levels. The results of the research represent interdisciplinary findings in the areas of electrical engineering and production economics.
The main scientific contributions include the further development of extended transaction cost economics (TCE) for governance decisions within electricity networks and the validation of the usability of the methodology for the electricity distribution industry. Moreover, DMS benefit evaluations in the thesis, based on outage cost calculations, suggest theoretical maximum benefits of DMS applications equalling roughly 25% of the annual outage costs and 10% of the respective operative costs in the case electric utility. Hence, the annual measurable theoretical benefits from the use of DMS applications are considerable. The theoretical results of the thesis are broadly validated by surveys and questionnaires.
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Unlike “traditional” biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization and concurrency, among many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the research is set in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton.
The range of presented approaches spans from heuristic through numerical and statistical to analytical methods, applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to particular case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as of complex systems in general. The full range of developed and applied modelling techniques, together with the model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
Abstract:
Earlier management studies have found a relationship between managerial qualities and subordinate impacts, but the effect of managers' social competence on leader perceptions has not been solidly established. To fill the related research gap, the present work embarks on a quantitative empirical effort to identify predictors of successful leadership. In particular, this study investigates relationships between perceived leader behavior and three self-report instruments used to measure managerial capability: 1) the WOPI Work Personality Inventory, 2) Raven's general intelligence scale, and 3) the Emotive Communication Scale (ECS). This work complements previous research by resorting to both self-reports and other-reports: the results acquired from the managerial sample are compared to subordinate perceptions as measured through the ECS other-report and the WOPI360 multi-source appraisal. The quantitative research comprises a sample of 80 superiors and 354 subordinates operating in eight Finnish organizations. The strongest predictive value emerged from the ECS self- and other-reports and certain personality dimensions. In contrast, supervisors' logical intelligence did not correlate with leadership perceived as socially competent by subordinates. Sixteen of the superiors rated as most socially competent by their subordinates were selected for case analysis. Their qualitative narratives evidence the role of life history and post-traumatic growth in developing managerial skills. The results contribute to leadership theory in four ways. First, the ECS self-report devised for this research offers a reliable scale for predicting socially competent leader ability. Second, the work identifies dimensions of personality and emotive skills that can be considered predictors of managerial ability and benefited from in leader recruitment and career planning.
Third, the Emotive Communication Model delineated on the basis of the empirical data allows for the systematic design and planning of communication and leadership education. Fourth, this work furthers the understanding of personal growth strategies and the role of life history in leader development and training. Finally, this research advances educational leadership by conceptualizing and operationalizing effective managerial communications. The Emotive Communication Model directs pedagogic attention in engineering to assertion, emotional availability and inspiration skills. The proposed methodology addresses classroom management strategies drawing from problem-based learning, student empowerment, collaborative learning and so-called socially competent teachership founded on teacher immediacy and perceived caring, all strategies that move away from student compliance and teacher modelling. The ultimate educational objective embraces the development of individual engineers and organizational leaders who not only possess traditional analytical and technical expertise and substantive knowledge but are also creatively, practically and socially intelligent.
Abstract:
Nitrate is the main form of nitrogen associated with water contamination; the high mobility of this species in soil justifies the concern regarding nitrogen management in agricultural soils. Therefore, the objective of this research was to assess the effect of the companion cation on nitrate displacement by analyzing nitrate transport parameters through breakthrough curves (BTCs) and their fitting with a numerical model (STANMOD). The experiment was carried out in the Soil and Water Quality Laboratory of the Department of Biosystems Engineering, "Luiz de Queiroz" College of Agriculture in Piracicaba (SP), Brazil. It was performed using saturated soil columns under steady-state flow conditions, in which two different sources of inorganic nitrate, Ca(NO3)2 and NH4NO3, were applied at a concentration of 50 mg L-1 NO3-. Each column was filled with either a Red-Yellow Oxisol (S1) or an Alfisol (S2). The results indicate that the companion cation had no effect on nitrate displacement. However, nitrate transport was influenced by soil texture, particle aggregation, solution velocity in the soil and the presence of organic matter. Nitrate mobility was higher in the Alfisol (S2).
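Breakthrough curves of the kind analyzed above are commonly described by the one-dimensional convection-dispersion equation, for which STANMOD fits analytical solutions. As a minimal sketch, the classical Ogata-Banks step-input solution can be evaluated directly; the column length, pore-water velocity and dispersion coefficient below are hypothetical illustration values, not data from this experiment.

```python
from math import erfc, exp, sqrt

def btc(x, t, v, D):
    """Relative effluent concentration C/C0 from the 1-D
    convection-dispersion equation (Ogata-Banks solution) for a
    continuous step input into an initially solute-free column."""
    a = erfc((x - v * t) / (2.0 * sqrt(D * t)))
    b = exp(v * x / D) * erfc((x + v * t) / (2.0 * sqrt(D * t)))
    return 0.5 * (a + b)

# hypothetical column: 20 cm long, pore-water velocity 1 cm/h,
# dispersion coefficient 0.5 cm^2/h; sample the outlet over time
curve = [btc(20.0, t, 1.0, 0.5) for t in (5, 10, 20, 40, 80)]
```

Fitting such a curve to measured effluent concentrations yields the transport parameters (v, D) that the study compares across cation sources and soils.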
Abstract:
Transportation of fluids is one of the most common and energy-intensive processes in the industrial and HVAC sectors. Pumping systems are frequently subject to engineering malpractice when dimensioned, which can lead to poor operational efficiency. Moreover, pump monitoring requires dedicated measuring equipment, which implies costly investments. Inefficient pump operation and improper maintenance can increase energy costs substantially and even lead to pump failure. A centrifugal pump is commonly driven by an induction motor. Driving the induction motor with a frequency converter can diminish energy consumption in pump drives and provide better control of a process. In addition, induction machine signals can be estimated by modern frequency converters, dispensing with the use of sensors. If the estimates are accurate enough, a pump can be modelled and integrated into the frequency converter control scheme. This opens the possibility of joint motor and pump monitoring and diagnostics, thereby allowing the detection of reliability-reducing operating states that can lead to additional maintenance costs. The goal of this work is to study the accuracy of the rotational speed, torque and shaft power estimates calculated by a frequency converter. Laboratory tests were performed in order to observe estimate behaviour in both steady-state and transient operation. An induction machine driven by a vector-controlled frequency converter, coupled with another induction machine acting as the load, was used in the tests. The estimated quantities were obtained through the frequency converter's Trend Recorder software. A high-precision HBM T12 torque-speed transducer was used to measure the actual values of the aforementioned variables. The effect of the flux optimization energy-saving feature on the estimate quality was also studied. A processing function was developed in MATLAB for comparison of the obtained data.
The obtained results confirm the suitability of this particular converter to provide accurate enough estimates for pumping applications.
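Estimate quality of the kind assessed above can be quantified by comparing the converter's estimates against reference transducer readings, for example via root-mean-square error and maximum relative error. The sketch below uses entirely hypothetical speed samples, and the function names are illustrative, not part of the Trend Recorder software.

```python
from math import sqrt

def rmse(estimates, measured):
    """Root-mean-square error between drive estimates and reference
    transducer readings (same units, same sample instants)."""
    n = len(estimates)
    return sqrt(sum((e - m) ** 2 for e, m in zip(estimates, measured)) / n)

def max_relative_error(estimates, measured):
    """Worst-case relative deviation of the estimate from the reference."""
    return max(abs(e - m) / abs(m) for e, m in zip(estimates, measured))

# hypothetical steady-state shaft-speed samples in rpm
est = [1478.0, 1481.5, 1479.2, 1480.3]   # converter estimates
ref = [1480.0, 1480.5, 1480.1, 1479.8]   # transducer readings
speed_rmse = rmse(est, ref)
speed_rel = max_relative_error(est, ref)
```

The same two metrics apply unchanged to the torque and shaft power channels.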
Abstract:
The middle section module of the InnoTrack™ moving walk was re-engineered according to a value analysis process. A self-supporting steel structure for the moving walk was created as a result of this process. The designed structure was verified and validated by prototype tests and finite element method calculations. The self-supporting steel structure replaces the original design of the middle section module in InnoTrack™. The designed structure satisfies customers' needs better while using fewer resources. The redesigned middle section module thus provides higher value to the customer.
Abstract:
Identifying low-dimensional structures and the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
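The ridge-projection idea above can be illustrated in its simplest, zero-dimensional special case: projecting a point onto a local maximum of a one-dimensional Gaussian kernel density estimate by fixed-point (mean-shift) iteration. This is only a sketch of that special case; the thesis develops a trust region Newton method for general ridges, which is not reproduced here. The data set and bandwidth below are hypothetical.

```python
from math import exp

def mean_shift_mode(x0, data, h, iters=200, tol=1e-12):
    """Project a starting point onto a local maximum (a 0-dimensional
    ridge) of a 1-D Gaussian kernel density estimate by fixed-point
    mean-shift iteration with bandwidth h."""
    x = x0
    for _ in range(iters):
        w = [exp(-0.5 * ((x - d) / h) ** 2) for d in data]
        x_new = sum(wi * di for wi, di in zip(w, data)) / sum(w)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# two well-separated clusters; each start converges to its own mode
data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
left = mean_shift_mode(-0.5, data, h=0.5)
right = mean_shift_mode(5.5, data, h=0.5)
```

For ridges of dimension one and higher, the iteration must additionally be constrained to the subspace spanned by the minor eigenvectors of the density Hessian, which is where Newton-type methods such as the one in the thesis come in.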
Abstract:
The world's population is growing at a rapid rate, and one of the primary problems of a growing population is food supply. To ensure food supply and security, the biggest companies in the agricultural sector of the United States and all over the world have collaborated to produce genetically modified organisms, including crops, that tend to increase yields and are speculated to reduce pesticide use. It is a technology that is declared to have a multitude of benefits. During the same time period, another set of practices has risen to prominence under the name of agroecology. It spreads across many different sectors, such as politics, sociology, environment and health. Moreover, it involves primitive organic techniques that can be applied at the farm level to enhance the performance of an ecosystem, effectively decreasing the negative effects on the environment and on the health of individuals while producing good-quality foods. Since both approaches proclaim sustainable development, a natural question arises: which one is more favorable? In the course of this study, genetically modified organisms (GMOs) and agroecology are compared within the social, environmental and health spheres. The results, derived from a comparative analysis of the scientific literature, tend to show that GMOs pose a greater threat to the environment, the health of individuals and the general social balance in the United States compared to agroecological practices. Economic indicators were not included in the study, and more studies may be needed in the future to get a broader view of the subject.
Abstract:
The aim of this research was to develop a piping stress analysis guideline to be widely used in Neste Jacobs Oy's domestic and foreign projects. The company's former guideline for performing stress analysis was incomplete and lacked important features, shortcomings that this research set out to fix. The development of the guideline was based on literature research and on gathering existing knowledge from experts in piping engineering. The case study method was utilized by performing stress analysis on an existing project with the help of the new guideline. Piping components, piping engineering in the process industry and piping stress analysis were studied in the theory section of this research. The existing piping standards were also studied and compared with one another. By utilizing the theory found in the literature and the vast experience and know-how collected from the company's employees, a new guideline for stress analysis was developed, intended for wide use in various projects. The purpose of the guideline is to clarify certain issues, such as which piping has to be analyzed, how different material values are determined and how the results are reported. As a result, an extensive and comprehensive guideline for stress analysis was created. The new guideline defines formerly unclear points more precisely and sets clear parameters for performing calculations. The guideline is meant to be used by both new and experienced analysts, and with its aid the calculation process was unified throughout the company's organization. A case study was used to exhibit how the guideline is utilized in practice and how it benefits the calculation process.
Abstract:
Gravitational phase separation is a common unit operation found in most large-scale chemical processes. The need for phase separation can arise e.g. from product purification or the protection of downstream equipment. In gravitational phase separation, the phases separate without the application of an external force. This is achieved in vessels where the flow velocity is lowered substantially compared to pipe flow. If the velocity is low enough, the denser phase settles towards the bottom of the vessel while the lighter phase rises. To find optimal configurations for gravitational phase separator vessels, several different geometrical and internal design features were evaluated based on simulations using the OpenFOAM computational fluid dynamics (CFD) software. The studied features included inlet distributors, vessel dimensions, demister configurations and gas phase outlet configurations. Simulations were conducted as single-phase steady-state calculations. For comparison, additional simulations were performed as dynamic single- and two-phase calculations. The steady-state single-phase calculations provided indications of preferred configurations for most of the above-mentioned features. The results of the dynamic simulations supported the utilization of the computationally faster steady-state model as a practical engineering tool. However, the two-phase model provides more realistic results, especially for flows where a single phase does not determine the flow characteristics.
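The settling criterion described above (the denser phase sinks while the lighter phase rises, provided the vessel velocity is low enough) can be made quantitative with Stokes' law for creeping flow. This is a textbook estimate only, not part of the CFD study, and the droplet and fluid properties below are hypothetical.

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in creeping
    flow: v = g * d**2 * (rho_p - rho_f) / (18 * mu). Valid only at low
    particle Reynolds number; a negative result means the sphere rises."""
    return g * d ** 2 * (rho_p - rho_f) / (18.0 * mu)

# hypothetical 100-micron water droplet settling through a light oil
v = stokes_settling_velocity(d=100e-6, rho_p=998.0, rho_f=850.0, mu=5e-3)
```

Comparing such a settling velocity with the vertical velocity scale in the vessel gives a quick first check on whether a proposed separator geometry leaves the droplets enough residence time to settle.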
Abstract:
The goal of this thesis is to define and validate a software engineering approach for the development of a distributed system for the modeling of composite materials, based on an analysis of various existing software development methods. We reviewed the main features of: (1) software engineering methodologies; (2) distributed system characteristics and their effect on software development; and (3) composite materials modeling activities and the requirements for the software development. Using design science as the research methodology, the distributed system for creating models of composite materials was created and evaluated. The empirical experiments we conducted showed good agreement between the modeled and real processes. Throughout the study, we paid attention to the complexity and importance of distributed systems and to a deep understanding of modern software engineering methods and tools.
Abstract:
The recent rapid development of biotechnological approaches has enabled the production of large whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics and engineering to study and process biological data. The need is also increasing for tools that can be used by the biological researchers themselves, who may not have a strong statistical or computational background, which requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various data analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second examines cell lineage specification in mouse embryonic stem cells.
Abstract:
The production of biodiesel through transesterification has created a surplus of glycerol on the international market. In a few years, glycerol has become an inexpensive and abundant raw material, subject to numerous plausible valorisation strategies. Glycerol hydrochlorination stands out as an economically attractive route to bio-based epichlorohydrin, an important raw material for the manufacture of epoxy resins and plasticizers. Glycerol hydrochlorination using gaseous hydrogen chloride (HCl) was studied from a reaction engineering viewpoint. Firstly, a more general and rigorous kinetic model was derived based on a consistent reaction mechanism proposed in the literature. The model was validated with experimental data reported in the literature as well as with new data of our own. Semi-batch experiments were conducted in which the influence of the stirring speed, HCl partial pressure, catalyst concentration and temperature was thoroughly analysed and discussed. Acetic acid was used as a homogeneous catalyst for the experiments. For the first time, it was demonstrated that the liquid-phase volume undergoes a significant increase due to the accumulation of HCl in the liquid phase. Novel and relevant features concerning hydrochlorination kinetics, HCl solubility and mass transfer were investigated. An extended reaction mechanism was proposed and a new kinetic model was derived. The model was tested against the experimental data by means of regression analysis, in which kinetic and mass transfer parameters were successfully estimated. A dimensionless number, called the Catalyst Modulus, was proposed as a tool for corroborating the kinetic model. Reactive flash distillation experiments were conducted to check the commonly accepted hypothesis that the removal of water should enhance the glycerol hydrochlorination kinetics. The performance of the reactive flash distillation experiments was compared to the semi-batch data previously obtained.
An unforeseen effect was observed once water was allowed to be stripped out of the liquid phase, exposing a strong correlation between the HCl liquid uptake and the presence of water in the system. Water was revealed to play an important role also in HCl dissociation: as water was removed, the dissociation of HCl was diminished, which had a retarding effect on the reaction kinetics. In order to gain further insight into the influence of water on the hydrochlorination reaction, extra semi-batch experiments were conducted in which initial amounts of water and of the desired product were added. This study revealed the possibility of using the desired product as an ideal “solvent” for the glycerol hydrochlorination process. A co-current bubble column was used to investigate the glycerol hydrochlorination process under continuous operation. The influence of the liquid flow rate, gas flow rate, temperature and catalyst concentration on the glycerol conversion and product distribution was studied. The fluid dynamics of the system showed remarkable behaviour, which was carefully investigated and described. High-speed camera images and residence time distribution experiments were used to collect relevant information about the flow conditions inside the tube. A model based on the axial dispersion concept was proposed and compared with the experimental data. The kinetic and solubility parameters estimated from the semi-batch experiments were successfully used in the description of the mass transfer and fluid dynamics of the bubble column reactor. In light of the results of the present work, the glycerol hydrochlorination reaction mechanism has finally been clarified. It has been demonstrated that reactive distillation technology may impair the glycerol hydrochlorination reaction rate under certain conditions.
Furthermore, the continuous reactor technology showed a high selectivity towards monochlorohydrins, whilst the semi-batch technology was demonstrated to be more efficient for the production of dichlorohydrins. Based on the novel and revealing discoveries of the present work, many insightful suggestions are made towards improving the production of α,γ-dichlorohydrin on an industrial scale.
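As a rough illustration of the semi-batch kinetic modelling described above, the sketch below integrates a glycerol mass balance with explicit Euler stepping. The pseudo-first-order rate form, the constant dissolved-HCl level and all numerical values are hypothetical simplifications for illustration, not the kinetic model derived in the thesis.

```python
def semibatch_conversion(k, c_hcl, t_end, dt=1e-3):
    """Glycerol conversion in a semi-batch reactor under the simplifying
    assumption of a pseudo-first-order rate r = k * c_hcl * g with a
    constant dissolved-HCl concentration; explicit Euler integration of
    dg/dt = -r for normalized glycerol concentration g."""
    g = 1.0   # normalized glycerol concentration, g(0) = 1
    t = 0.0
    while t < t_end:
        g -= k * c_hcl * g * dt
        t += dt
    return 1.0 - g  # conversion after t_end

# hypothetical rate constant, HCl level and batch time
x = semibatch_conversion(k=0.05, c_hcl=2.0, t_end=10.0)
```

A full model of the kind developed in the thesis additionally couples such balances to gas-liquid mass transfer and to the HCl-dependent volume increase reported above.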
Abstract:
This thesis research was a qualitative case study of a single class of Interdisciplinary Studies: Introduction to Engineering taught in a secondary school. The study endeavoured to explore students' experiences in and perceptions of the course, and to investigate the viability of engineering as an interdisciplinary theme at the secondary school level. Data were collected in the form of student questionnaires, the researcher's observations and reflections, and artefacts representative of students' work. Data analysis was performed by coding textual data and classifying text segments into common themes. The themes that emerged from the data were aligned with facets of interdisciplinary study, including making connections, project-based learning, and student engagement and affective outcomes. The findings of the study showed that students were positive about their experiences in the course and enjoyed its project-driven nature. Content from mathematics, physics, and technological design was easily integrated under the umbrella of engineering. Students felt that the opportunities to develop problem-solving and teamwork skills were two of the most important aspects of the course and could be relevant not only to engineering but to other disciplines or their day-to-day lives after secondary school. The study concluded that engineering education in secondary school can be a worthwhile experience for a variety of students, not just those intending postsecondary study in engineering. This has implications for the inclusion of engineering in the secondary school curriculum and can inform the practice of curriculum planners at the school, school board, and provincial levels. Suggested directions for further research include classroom-based action research in the areas of technological education, engineering education in secondary school, and interdisciplinary education.
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.