965 results for Linear boundary value control problems
Abstract:
Based on a review of the servant leadership, well-being, and performance literatures, the first study develops a research model that examines how and under which conditions servant leadership is related to follower performance and well-being alike. Data was collected from 33 leaders and 86 of their followers working in six organizations. Multilevel moderated mediation analyses revealed that servant leadership was indeed related to eudaimonic well-being and leader-rated performance via followers’ positive psychological capital, but that the strength and direction of the examined relationships depended on organizational policies and practices promoting employee health, and in the case of follower performance on a developmental team climate, shedding light on the importance of the context in which servant leadership takes place. In addition, two more research questions resulted from a review of the training literature, namely how and under which conditions servant leadership can be trained, and whether follower performance and well-being follow from servant leadership enhanced by training. We subsequently designed a servant leadership training and conducted a longitudinal field experiment to examine our second research question. Analyses were based on data from 38 leaders randomly assigned to a training or control condition, and 91 of their followers in 36 teams. Hierarchical linear modeling results showed that the training, which addressed the knowledge of, attitudes towards, and ability to apply servant leadership, positively affected leader and follower perceptions of servant leadership, but in the latter case only when leaders strongly identified with their team. These findings provide causal evidence as to how and when servant leadership can be effectively developed. Finally, the research model of Study 1 was replicated in a third study based on 58 followers in 32 teams drawn from the same population used for Study 2, confirming that follower eudaimonic well-being and leader-rated performance follow from developing servant leadership via increases in psychological capital, and thus establishing the directionality of the examined relationships.
Abstract:
Until now, the possibility that the monetary valuation of natural capital can provide exact, quantitative information to decision-makers has been little exploited in practice. Using the example of noise-abatement measures, the authors review the potential of the economic valuation of natural capital goods. They describe the advantages of cost-benefit analysis extended with environmental goods, and then the generally neglected methods suitable for the economic valuation of the benefits provided by non-market goods, with particular attention to noise exposure. Strong emphasis is placed on widely applicable methods of benefit transfer. They present the experience gained in their own applied research, with special regard to how the transfer of benefit valuations can contribute to maximizing social benefit in decisions concerning natural capital goods. _____ The paper offers an overview of the economic valuation of transportation-induced noise and cost-benefit analysis of noise-control measures and actions. Although economic valuation can provide hard, monetized data for decision-makers, it is relatively underused in practice. The study focuses on benefit-transfer methodology, where values obtained in previous cases are used as the basis for current evaluation. A specific application of benefit transfer is presented by a recent pilot project in Hungary, whereby a tool was developed for LGOs, enabling them to make preliminary assessments of the benefits of potential noise-control measures and rank possible options. This can help to optimize the benefits to society using limited resources.
Abstract:
Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted more and more interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, voluminous data pose a new challenge for the automated extraction of geometrical information from LIDAR measurements, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to automatically extract different kinds of geometrical objects, such as terrain and buildings, from LIDAR data. These objects are essential to numerous applications such as flood modeling, landslide prediction, and hurricane animation. The framework consists of several intuitive algorithms. Firstly, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique. Raw footprints for segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove noise caused by the irregular spacing of LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then further adjusted. Since the adjusting operations for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed to adjust the 2D topology. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm to find the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrate that the proposed framework achieves very good performance.
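The progressive morphological filter described above is, at its core, a sequence of gray-scale openings with a growing window and a growing elevation-difference threshold. The minimal Python sketch below illustrates that idea on a gridded minimum-elevation raster; the gridding step, the parameter values (cell size, slope, thresholds) and the use of scipy's grey_opening are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal sketch of progressive morphological ground filtering (illustrative only).
# Assumes the point cloud has already been gridded into a minimum-elevation raster `dsm`;
# window sizes and elevation thresholds are hypothetical parameters.
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(dsm, cell_size=1.0,
                                     max_window=32, slope=0.3,
                                     dh0=0.5, dh_max=3.0):
    """Return a boolean mask of ground cells in the gridded surface `dsm`."""
    ground = np.ones(dsm.shape, dtype=bool)
    surface = dsm.copy()
    window = 3
    while window <= max_window:
        # Morphological opening removes objects smaller than the current window.
        opened = grey_opening(surface, size=window)
        # Elevation-difference threshold grows with the window size (capped at dh_max).
        dh = min(dh0 + slope * (window - 1) * cell_size, dh_max)
        # Cells rising more than dh above the opened surface are treated as non-ground
        # (vehicles, vegetation, buildings) and dropped from the ground mask.
        ground &= (surface - opened) <= dh
        surface = opened
        window = 2 * window + 1  # progressively increase the window
    return ground
```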
Abstract:
In the discussion - Ethics, Value Systems And The Professionalization Of Hoteliers by K. Michael Haywood, Associate Professor, School of Hotel and Food Administration, University of Guelph, Haywood initially presents: “Hoteliers and executives in other service industries should realize that the foundation of success in their businesses is based upon personal and corporate value systems and steady commitment to excellence. The author illustrates how ethical issues and manager morality are linked to, and shaped by the values of executives and the organization, and how improved professionalism can only be achieved through the adoption of a value system that rewards contributions rather than the mere attainment of results.” The bottom line of this discussion is: how does the hotel industry reconcile its behavior with public perception? “The time has come for hoteliers to examine their own standards of ethics, value systems, and professionalism,” Haywood says. And it is ethics that are at the center of this issue; Haywood holds that component in an estimable position. “Hoteliers must become value-driven,” advises Haywood. “They must be committed to excellence both in actualizing their best potentialities and in excelling in all they do. In other words, the professionalization of the hotelier can be achieved through a high degree of self-control, internalized values, codes of ethics, and related socialization processes,” he expands. “Serious ethical issues exist for hoteliers as well as for many business people and professionals in positions of responsibility,” Haywood alludes in defining some inter-industry problems. “The acceptance of kickbacks and gifts from suppliers, the hiding of income from taxation authorities, the lack of interest in installing and maintaining proper safety and security systems, and the raiding of competitors' staffs are common practices,” he offers, with the reasoning that if these problems can occur within ranks, then there is going to be a negative backlash in the public/client arena as well. Haywood divides the key principles of his thesis statement – ethics, value systems, and professionalism – into specific elements, and then continues to broaden the scope of each element. Promotion, product/service, and pricing are additional key components in Haywood’s discussion, and he addresses each with verve and vitality. Haywood references the four character types – craftsmen, jungle fighters, company men, and gamesmen – via a citation to Michael Maccoby, in the portion of the discussion dedicated to morality and success. Haywood closes with a series of questions derived from Lawrence Miller's American Spirit: Visions of a New Corporate Culture, each question designed to focus, shape, and organize management's attention to the values that Miller sets forth in his piece.
Abstract:
The main focus of this research is to design and develop a high-performance linear actuator based on a four-bar mechanism. The present work includes the detailed analysis (kinematics and dynamics), design, implementation, and experimental validation of the newly designed actuator. High performance is characterized by the acceleration of the actuator end effector. The principle of the newly designed actuator is to network the four-bar rhombus configuration (where some bars are extended to form an X shape) to attain high acceleration. Firstly, a detailed kinematic analysis of the actuator is presented and the kinematic performance is evaluated through MATLAB simulations. The dynamic equation of the actuator is derived using the Lagrangian formulation. A SIMULINK control model of the actuator is developed using the dynamic equation. In addition, a Bond Graph methodology is presented for the dynamic simulation. The Bond Graph model comprises individual component models of the actuator along with the control. The required torque was simulated using the Bond Graph model. Results indicate that high acceleration (around 20g) can be achieved with modest (3 N-m or less) torque input. A practical prototype of the actuator was designed using SOLIDWORKS and then fabricated to verify the proof of concept. The design goal was to achieve a peak acceleration of more than 10g at the middle point of the travel length, when the end effector travels the stroke length (around 1 m). The actuator is primarily designed to operate in a standalone condition and later to be used in a 3RPR parallel robot. A DC motor is used to drive the actuator. A quadrature encoder is attached to the DC motor to control the end effector. The associated control scheme of the actuator is analyzed and integrated with the physical prototype. From standalone experimentation of the actuator, around 17g acceleration was achieved by the end effector (the stroke length was 0.2 m to 0.78 m). Results indicate that the developed dynamic model is in good agreement with the experimental results. Finally, a Design of Experiments (DOE) based statistical approach is also introduced to identify the parametric combination that yields the greatest performance. Data are collected using the Bond Graph model. This approach is helpful in designing the actuator without much complexity.
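The abstract does not reproduce the derived dynamic equation; for orientation only, the Lagrangian formulation mentioned above reduces, for a single-degree-of-freedom mechanism driven by a motor torque, to the generic Euler-Lagrange form below (generic symbols, not the thesis's notation).

```latex
% Generic Euler-Lagrange equation for a one-DOF mechanism with crank angle q,
% Lagrangian L = T - V and input torque \tau (illustrative notation only).
\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot{q}}\right)
  - \frac{\partial L}{\partial q} = \tau ,
\qquad L(q,\dot{q}) = T(q,\dot{q}) - V(q)
```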
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs is different from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback-shift-register (LFSR), a multiple-input-signature-register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
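The LFSR/MISR pair mentioned above is a standard BIST building block: the LFSR generates pseudo-random test patterns and the MISR compacts the observed responses into a signature. The toy Python sketch below illustrates both; the register width, tap positions and the example data are hypothetical and unrelated to the dissertation's actual BIST design.

```python
# Toy LFSR pattern generator and MISR signature compactor (illustrative BIST building blocks).
# Register width and tap positions are hypothetical, not the dissertation's design.

def lfsr_patterns(seed, taps, width, count):
    """Yield `count` pseudo-random test patterns from a Fibonacci-style LFSR."""
    state = seed
    for _ in range(count):
        yield state
        # Feedback bit = XOR of the tapped bits.
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def misr_signature(responses, taps, width):
    """Compact a stream of response words into a single MISR signature."""
    sig = 0
    for r in responses:
        fb = 0
        for t in taps:
            fb ^= (sig >> t) & 1
        sig = (((sig << 1) | fb) ^ r) & ((1 << width) - 1)
    return sig

if __name__ == "__main__":
    patterns = list(lfsr_patterns(seed=0b1011, taps=(3, 2), width=4, count=8))
    # In a real BIST session the responses come back over the interconnects under test;
    # here the patterns themselves stand in for the responses.
    print(patterns, hex(misr_signature(patterns, taps=(3, 2), width=4)))
```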
In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced. The first solution minimizes the number of input test pins, and the second solution minimizes the number of output test pins. In addition, two subgroup configuration methods are further proposed to generate subgroups inside each test group.
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed and the shared boundary length between blocks is then calculated. Based on the position relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
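The requirement that neighboring blocks sharing power rails never receive the same stagger value is, in essence, a graph-coloring constraint. The sketch below shows a simple greedy assignment in that spirit; the block names, adjacency input and number of available stagger values are hypothetical, and this is not the mathematical model or heuristic developed in the dissertation.

```python
# Greedy illustration of stagger assignment: neighboring blocks (sharing power rails)
# must not get the same stagger value. All inputs are hypothetical.

def assign_staggers(blocks, neighbors, num_staggers):
    """blocks: iterable of block names; neighbors: dict block -> set of adjacent blocks."""
    assignment = {}
    # Visit blocks with many neighbors first, a common greedy ordering.
    for b in sorted(blocks, key=lambda x: -len(neighbors.get(x, ()))):
        used = {assignment[n] for n in neighbors.get(b, ()) if n in assignment}
        free = [s for s in range(num_staggers) if s not in used]
        if not free:
            raise ValueError(f"no legal stagger value left for block {b}")
        assignment[b] = free[0]
    return assignment

if __name__ == "__main__":
    neighbors = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
    print(assign_staggers(["A", "B", "C", "D"], neighbors, num_staggers=3))
```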
In summary, the dissertation targets important design and optimization problems related to the testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues and organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI’s clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that yield richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods specifically for radiotherapy assessment. Thus, the study is naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and some improvements regarding DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple PK model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm was built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and the simulated accelerated k-space acquisition was generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated from the undersampled data and from the fully-sampled data, respectively. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of the PK maps generated from the undersampled data in reference to the PK maps generated from the fully-sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images that were reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully-sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method is therefore suggested to be feasible and promising.
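As a rough illustration of the second undersampling strategy, the sketch below generates a Cartesian random sampling mask over phase-encode lines with a fully sampled central band and a mild variable-density weighting. The acceleration factor, central-band size and weighting are assumptions for illustration; the dissertation's spatiotemporal constraints between adjacent frames are not reproduced here.

```python
# Toy Cartesian random k-space sampling mask (phase-encode lines), illustrative only.
# Acceleration factor, central fully-sampled band and density weighting are assumptions.
import numpy as np

def cartesian_random_mask(n_lines, accel=4, center_frac=0.08, rng=None):
    rng = np.random.default_rng(rng)
    n_keep = n_lines // accel
    center = int(center_frac * n_lines)
    mask = np.zeros(n_lines, dtype=bool)
    # Always keep the low-frequency (central) phase-encode lines.
    lo = n_lines // 2 - center // 2
    mask[lo:lo + center] = True
    # Randomly fill the rest, weighting lines closer to the center more heavily.
    remaining = np.flatnonzero(~mask)
    dist = np.abs(remaining - n_lines / 2)
    w = 1.0 / (1.0 + dist)
    w /= w.sum()
    extra = max(n_keep - center, 0)
    chosen = rng.choice(remaining, size=min(extra, remaining.size), replace=False, p=w)
    mask[chosen] = True
    return mask

# Example: a 256-line acquisition undersampled by a factor of about four.
print(cartesian_random_mask(256, accel=4).sum(), "of 256 lines sampled")
```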
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for the PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove the potential noise effect in the data and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and different data noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), this new method was able to calculate the PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolutions, the calculation efficiency of this new method was superior to current methods by a factor on the order of 10^2. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method can be used for accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
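The abstract does not spell out the derivative-based formulation itself. For orientation, the standard Tofts model can likewise be rearranged into a form that is linear in the parameters and solved by least squares, as in the well-known Murase-style formulation sketched below; this generic illustration is not the thesis's method and omits the Kolmogorov-Zurbenko filtering step.

```python
# Linear least-squares fit of the standard Tofts model (Murase-style rearrangement):
#   Ct(t) = Ktrans * int_0^t Cp dtau  -  kep * int_0^t Ct dtau
# Generic illustration only; NOT the derivative-based method of the thesis.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def tofts_linear_fit(t, ct, cp):
    """Return (Ktrans, kep, ve) from tissue curve ct and plasma input cp sampled at t."""
    int_cp = cumulative_trapezoid(cp, t, initial=0.0)
    int_ct = cumulative_trapezoid(ct, t, initial=0.0)
    A = np.column_stack([int_cp, -int_ct])      # design matrix
    coef, *_ = np.linalg.lstsq(A, ct, rcond=None)
    ktrans, kep = coef
    return ktrans, kep, ktrans / kep
```

Because each voxel then reduces to a small linear least-squares problem, such linearized formulations scale naturally to high temporal resolution data, which is presumably one reason a linear matrix formulation is attractive in this setting.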
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodology developments along two approaches. The first approach is to develop model-free analysis methods for DCE-MRI functional heterogeneity evaluation. This approach is inspired by the rationale that radiotherapy-induced functional change could be heterogeneous across the treatment area. The first effort was spent on a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, the randomly assigned treatment/control groups received multiple-fraction treatments, with one pre-treatment and multiple post-treatment high spatiotemporal resolution DCE-MRI scans. In the post-treatment scan two weeks after the start, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when the Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than the accuracy from using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It is intended to solve the lack of temporal information and poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had an overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method measures the dynamics of tumor heterogeneity during contrast agent uptake in a quantitative fashion on DCE images. In the small-animal experiment mentioned before, the selected parameters from dynamic FSD analysis showed significant differences between the treatment and control groups as early as after 1 treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after 3 treatment fractions. When using dynamic FSD parameters, the treatment/control group classification after the 1st treatment fraction was improved compared with using conventional PK statistics. These results suggest the promising application of this novel method for capturing early therapeutic response.
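For readers unfamiliar with Rényi (generalized) dimensions, the toy sketch below estimates them for a 2D parameter map by box counting, treating the normalized map as a measure. The box sizes, the assumption of a square map and the random example are illustrative only and do not reproduce the analysis pipeline of the thesis.

```python
# Toy estimate of Renyi (generalized) dimensions of a 2-D parameter map by box counting.
# The map is treated as a normalized measure; a square map and these box sizes are assumed.
import numpy as np

def renyi_dimension(pmap, q, box_sizes=(2, 4, 8, 16)):
    pmap = np.asarray(pmap, dtype=float)
    pmap = pmap / pmap.sum()                      # normalize to a probability measure
    logs_eps, logs_sum = [], []
    n = pmap.shape[0]
    for s in box_sizes:
        m = n // s
        # Sum the measure inside each s-by-s box.
        boxes = pmap[:m * s, :m * s].reshape(m, s, m, s).sum(axis=(1, 3))
        p = boxes[boxes > 0]
        if q == 1:                                # information dimension (limit q -> 1)
            logs_sum.append(np.sum(p * np.log(p)))
        else:
            logs_sum.append(np.log(np.sum(p ** q)) / (q - 1))
        logs_eps.append(np.log(s / n))            # box size relative to the map
    slope, _ = np.polyfit(logs_eps, logs_sum, 1)  # D_q is the slope of the log-log fit
    return slope

# Example on a random 64x64 "parameter map" (a real map would come from the PK fit).
print(renyi_dimension(np.random.rand(64, 64), q=2))
```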
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version has been widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine the information from the SS model and from the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using PK parameter regional mean value comparison. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and was automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved to be superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from these two models. When evaluated in the biological subvolume, this biomarker was able to reflect significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
Increased complexity in large design and manufacturing organisations requires improvements at the operations management (OM)–applied service (AS) interface areas to improve project effectiveness. The aim of this paper is to explore the role of Lean in improving the longitudinal efficiency of the OM–AS interface within a large aerospace organisation, using Lean principles and boundary spanning theory. The methodology was an exploratory longitudinal case approach including exploratory interviews (n = 21), focus groups (n = 2), facilitated action-research workshops (n = 2) and two trials or experiments using longitudinal data involving both OM and AS personnel working at the interface. Lean principles and boundary spanning theory were drawn upon to guide and interpret the findings. It was found that misinterpretation, and forced implementation, of OM-based Lean terminology and practice in the OM–AS interface space led to delays and misplaced resources. Rather, both OM and AS staff were challenged to develop a cross-boundary understanding of Lean-based boundary (knowledge) objects in interpreting OM requests. The longitudinal findings from the experiments showed that the development of Lean performance measurement and Lean value stream constructs was more successful when these Lean constructs were treated as boundary (knowledge) objects requiring transformation over time to orchestrate improved effectiveness and to lead to consistent terminology and understanding across the OM–AS boundary spanning team.
Abstract:
Abstract not available
Abstract:
We consider a mechanical problem concerning a 2D axisymmetric body moving forward on the plane and making slow turns of fixed magnitude about its axis of symmetry. The body moves through a medium of non-interacting particles at rest, and collisions of particles with the body's boundary are perfectly elastic (billiard-like). The body has a blunt nose: a line segment orthogonal to the symmetry axis. It is required to make small cavities of a special shape on the nose so as to minimize its aerodynamic resistance. This problem of optimizing the shape of the cavities amounts to a special case of the optimal mass transfer problem on the circle, with the transportation cost being the squared Euclidean distance. We find the exact solution for this problem when the amplitude of rotation is smaller than a fixed critical value, and give a numerical solution otherwise. As a by-product, we obtain an explicit description of the solution for a class of optimal transfer problems on the circle.
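For concreteness, the optimal transfer problem referred to above can be written in the standard Monge-Kantorovich form on the unit circle with the squared Euclidean (chordal) cost; the notation below is generic and not taken from the paper.

```latex
% Monge-Kantorovich problem on the unit circle S^1 with squared Euclidean (chordal) cost;
% \mu and \nu are the given measures, \Pi(\mu,\nu) the set of transport plans.
\min_{\gamma \in \Pi(\mu,\nu)} \int_{S^1 \times S^1} |x - y|^2 \, \mathrm{d}\gamma(x,y),
\qquad |x - y|^2 = 2 - 2\cos(\theta_x - \theta_y)
```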
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Inverse heat conduction problems (IHCPs) appear in many important scientific and technological fields. Hence, the analysis, design, implementation and testing of inverse algorithms are also of great scientific and technological interest. The numerical simulation of 2-D and 3-D inverse (or even direct) problems involves a considerable amount of computation. Therefore, the investigation and exploitation of the parallel properties of such algorithms are equally becoming very important. Domain decomposition (DD) methods are widely used to solve large-scale engineering problems and to exploit the parallelism inherent in the solution of such problems.
Abstract:
Alkali tantalates and niobates, including K(Ta/Nb)O3, Li(Ta/Nb)O3 and Na(Ta/Nb)O3, are a very promising ferroic family of lead-free compounds with perovskite-like structures. Their versatile properties make them potentially interesting for current and future applications in microelectronics, photocatalysis, energy and biomedicine. Among them, potassium tantalate, KTaO3 (KTO), has been raising interest as an alternative to the well-known strontium titanate, SrTiO3 (STO). KTO is a perovskite oxide with a quantum paraelectric behaviour when electrically stimulated and a highly polarizable lattice, giving the opportunity to tailor its properties via external or internal stimuli. However, problems related to the fabrication of either bulk or 2D nanostructures mean that KTO is not yet a viable alternative to STO. Within this context, and to contribute scientifically to leveraging applications of tantalate-based compounds, the main goals of this thesis are: i) to produce and characterise thin films of alkali tantalates by chemical solution deposition on rigid Si-based substrates, at reduced temperatures compatible with Si technology, ii) to fill scientific knowledge gaps in these relevant functional materials related to their energetics, and iii) to exploit alternative applications for alkali tantalates, such as photocatalysis. Concerning the synthesis, attention was given to understanding the phase formation in potassium tantalate synthesized via distinct routes, in order to control the crystallization of the desired perovskite structure and to avoid low-temperature pyrochlore or K-deficient phases. The phase formation process in alkali tantalates is far from being as deeply analysed as in the case of Pb-containing perovskites; therefore the work initially focused on the process-phase relationship to identify the driving forces that regulate the synthesis. A comparison of the phase formation paths in the conventional solid-state reaction and the sol-gel method was conducted. The structural analyses revealed that the intermediate pyrochlore K2Ta2O6 structure is not formed at any stage of the reaction using the conventional solid-state reaction. On the other hand, in solution-based processes such as the alkoxide-based route, the crystallization of the perovskite occurs through the intermediate pyrochlore phase; at low temperatures pyrochlore is dominant and it is transformed to perovskite at >800 °C. The kinetic analysis carried out using the Johnson-Mehl-Avrami-Kolmogorov model and quantitative X-ray diffraction (XRD) demonstrated that in sol-gel derived powders the crystallization occurs in two stages: i) at the early stage of the reaction, dominated by primary nucleation, the mechanism is phase-boundary controlled, and ii) at the second stage the low value of the Avrami exponent, n ~ 0.3, does not follow any reported category, thus not permitting an easy identification of the mechanism. Then, in collaboration with the group of Prof. Alexandra Navrotsky at the University of California at Davis (USA), thermodynamic studies were conducted using high-temperature oxide melt solution calorimetry. The enthalpies of formation of three structures were calculated: pyrochlore, perovskite and the tetragonal tungsten bronze K6Ta10.8O30 (TTB).
The enthalpies of formation from the corresponding oxides, ∆Hfox, for KTaO3, KTa2.2O6 and K6Ta10.8O30 are -203.63 ± 2.84 kJ/mol, -358.02 ± 3.74 kJ/mol, and -1252.34 ± 10.10 kJ/mol, respectively, whereas those from the elements, ∆Hfel, for KTaO3, KTa2.2O6 and K6Ta10.8O30 are -1408.96 ± 3.73 kJ/mol, -2790.82 ± 6.06 kJ/mol, and -13393.04 ± 31.15 kJ/mol, respectively. The possible decomposition reactions of the K-deficient KTa2.2O6 pyrochlore to KTaO3 perovskite and Ta2O5 (reaction 1) or to the TTB K6Ta10.8O30 and Ta2O5 (reaction 2) were proposed, and the enthalpies were calculated to be 308.79 ± 4.41 kJ/mol and 895.79 ± 8.64 kJ/mol for reaction 1 and reaction 2, respectively. The reactions are strongly endothermic, indicating that these decompositions are energetically unfavourable, since it is unlikely that any entropy term could override such a large positive enthalpy. The energetic studies prove that pyrochlore is an energetically more stable phase than perovskite at low temperature. Thus, the local order of the amorphous precipitates drives the crystallization into the most favourable structure, which is the pyrochlore one with a similar local organization; the distance between nearest neighbours in the amorphous or short-range ordered phase is very close to that in pyrochlore. Taking into account the stoichiometric deviation in the KTO system, the selection of the most appropriate fabrication / deposition technique in thin-film technology is a key issue, especially concerning complex ferroelectric oxides. Chemical solution deposition has been widely reported as a processing method to grow KTO thin films, but the classical alkoxide route only crystallizes the perovskite phase at temperatures >800 °C, while the temperature endurance of platinized Si wafers is ~700 °C. Therefore, alternative diol-based routes, with distinct potassium carboxylate precursors, were developed aiming to stabilize the precursor solution, to avoid using toxic solvents and to decrease the crystallization temperature of the perovskite phase. Studies on powders revealed that in the case of KTOac (solution based on potassium acetate), a mixture of perovskite and pyrochlore phases is detected at temperatures as low as 450 °C, and gradual transformation into the monophasic perovskite structure occurs as the temperature increases up to 750 °C; however, the desired monophasic KTaO3 perovskite phase is not achieved. In the case of KTOacac (solution with potassium acetylacetonate), a broad peak is detected at temperatures <650 °C, characteristic of amorphous structures, while at higher temperatures diffraction lines from the pyrochlore and perovskite phases are visible and a monophasic perovskite KTaO3 is formed at >700 °C. Infrared analysis indicated that the differences are due to a strong deformation of the carbonate-based structures upon heating. A series of thin films of alkali tantalates were spin-coated onto Si-based substrates using the diol-based routes. Interestingly, monophasic perovskite KTaO3 films deposited using the KTOacac solution were obtained at temperatures as low as 650 °C; the films were annealed in a rapid thermal furnace in an oxygen atmosphere for 5 min with a heating rate of 30 °C/s. Other compositions of the tantalum-based system, such as LiTaO3 (LTO) and NaTaO3 (NTO), were also successfully deposited onto Si substrates at 650 °C. The ferroelectric character of LTO at room temperature was proved. Some of the dielectric properties of KTO could not be measured in a parallel-plate capacitor configuration due to either substrate-film or film-electrode interfaces.
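As a consistency check on the decomposition enthalpies quoted above, reaction 1 can be balanced and evaluated by Hess's law from the oxide-formation enthalpies; Ta2O5 contributes zero when the enthalpies are referenced to the oxides, and the stoichiometric coefficients below are inferred from the quoted numbers (written per two moles of pyrochlore) rather than taken from the thesis.

```latex
% Hess's-law check of reaction 1 (pyrochlore -> perovskite + Ta2O5), per 2 mol KTa2.2O6:
2\,\mathrm{KTa_{2.2}O_6} \longrightarrow 2\,\mathrm{KTaO_3} + 1.2\,\mathrm{Ta_2O_5}
\\
\Delta H_{1} = 2\,\Delta H^{ox}_{f}(\mathrm{KTaO_3}) - 2\,\Delta H^{ox}_{f}(\mathrm{KTa_{2.2}O_6})
             = 2(-203.63) - 2(-358.02) \approx 308.8\ \mathrm{kJ}
```

The same bookkeeping applied to 6 KTa2.2O6 -> K6Ta10.8O30 + 1.2 Ta2O5 reproduces the quoted 895.79 kJ/mol for reaction 2.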
Further studies thus have to be conducted to overcome this measurement issue. Application-oriented studies have also been conducted through two case studies: i) the photocatalytic activity of alkali tantalates and niobates for the decomposition of pollutants, and ii) the bioactivity of alkali tantalate ferroelectric films as functional coatings for bone regeneration. Much attention has recently been paid to developing new types of photocatalytic materials, and tantalum and niobium oxide based compositions have been demonstrated to be active photocatalysts for water splitting due to the high potential of their conduction bands. Thus, various powders of the alkali tantalate and niobate families were tested as catalysts for methylene blue degradation. The results showed promising activities for some of the tested compounds, with KNbO3 being the most active among them, reaching over 50 % degradation of the dye after 7 h under UVA exposure; further modifications of the powders could improve this performance. In the context of bone regeneration, it is important to have platforms that, with appropriate stimuli, can support the attachment of cells and direct their growth, proliferation and differentiation. In view of this, an alternative strategy for bone implants or repairs was exploited here, based on charge-mediated signals for bone regeneration. This strategy consists of coating metallic 316L-type stainless steel (316L-SST) substrates with charged ferroelectric LiTaO3 layers, functionalized via electrical charging or UV-light irradiation. It was demonstrated that the formation of surface calcium phosphates and protein adsorption is considerably enhanced for the functionalized ferroelectric coatings on 316L-SST. Our approach can be viewed as a set of guidelines for the development of electrically functionalized platforms that can stimulate tissue regeneration, promoting direct integration of the implant into the host tissue by bone ingrowth and hence ultimately contributing to reduced implant failure.
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), as a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the model state vector of dimension 30 171, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme might be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach for coupling the model and a DA scheme. An external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines which facilitate input and output. Apart from being simple to couple, the approach can be employed even if the two were written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead increase caused by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in lake Säkylän Pyhäjärvi. The lake has an area of 154 km2 with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of the organic matter was computationally eliminated to obtain the TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake. However, due to the sparsity of the TSM data in both time and space, the results could not be well matched.
The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem; with DA, this will help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and the ensemble-size limit on performance point to the emerging area of Reduced Order Modelling (ROM). To save computational resources, running the full-blown model is avoided in ROM. When ROM is applied with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
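For orientation, the sketch below shows a standard stochastic ensemble Kalman filter analysis step with perturbed observations, i.e. the generic way an ensemble supplies the error covariance; the VEnKF used in the thesis replaces this with a variational update and ensemble resampling, so the code is only an illustrative stand-in.

```python
# Generic stochastic EnKF analysis step (perturbed observations). This is NOT the VEnKF
# update used in the thesis; it only illustrates how an ensemble supplies the covariance.
import numpy as np

def enkf_analysis(X, y, H, R, rng=None):
    """X: (n, m) ensemble of m state vectors; y: (p,) observation; H: (p, n) operator;
    R: (p, p) observation-error covariance. Returns the updated ensemble."""
    rng = np.random.default_rng(rng)
    n, m = X.shape
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                   # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    # Ensemble estimates of the covariances needed for the Kalman gain.
    Pxy = A @ HA.T / (m - 1)
    Pyy = HA @ HA.T / (m - 1) + R
    K = Pxy @ np.linalg.solve(Pyy, np.eye(len(y)))
    # Perturb the observation for each member and update.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return X + K @ (Y - HX)
```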
Abstract:
In this paper we consider the a posteriori and a priori error analysis of discontinuous Galerkin interior penalty methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes. In particular, we discuss the question of error estimation for linear target functionals, such as the outflow flux and the local average of the solution. Based on our a posteriori error bound we design and implement the corresponding adaptive algorithm to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement. The theoretical results are illustrated by a series of numerical experiments.
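Goal-oriented a posteriori bounds for a linear target functional J, such as the outflow flux or a local average of the solution, are typically stated in the generic dual-weighted-residual structure below, with element indicators that weight the local residual of the discrete solution by an adjoint (dual) solution; this is a schematic of the structure only, not the specific bound derived in the paper.

```latex
% Schematic dual-weighted-residual structure of a goal-oriented a posteriori bound:
% the error in the linear functional J is bounded by element indicators built from
% the local residual of u_h weighted by the adjoint (dual) solution z.
|J(u) - J(u_h)| \le \sum_{\kappa \in \mathcal{T}_h} \eta_\kappa ,
\qquad \eta_\kappa = \eta_\kappa\bigl(u_h, z\bigr)
```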