923 results for series-parallel model
Abstract:
The work described in this thesis revolves around the 1,1,n,n-tetramethyl[n](2,11)teropyrenophanes, which are a series of [n]cyclophanes with a severely bent, board-shaped polynuclear aromatic hydrocarbon (PAH). The thesis is divided into seven Chapters. The first Chapter contains an overview of the seminal work on [n]cyclophanes of the first two members of the “capped rylene” series of PAHs: benzene and pyrene. Three different general strategies for the synthesis of [n]cyclophanes are discussed, and this leads into a discussion of some selected syntheses of [n]paracyclophanes and [n](2,7)pyrenophanes. The chemical, structural, spectroscopic and photophysical properties of these benzene- and pyrene-derived cyclophanes are discussed, with emphasis on the changes that occur with changes in the structure of the aromatic system. Chapter 1 concludes with a brief introduction to [n]cyclophanes of the fourth member of the capped rylene series of PAHs: teropyrene.

The focus of the work described in Chapter 2 is the synthesis of 1,1,n,n-tetramethyl[n](2,11)teropyrenophane (n = 6 and 7) using a double-McMurry strategy. While the synthesis of 1,1,7,7-tetramethyl[7](2,11)teropyrenophane was successful, the synthesis of the lower homologue 1,1,6,6-tetramethyl[6](2,11)teropyrenophane was not. The conformational behaviour of [n.2]pyrenophanes was also studied by 1H NMR spectroscopy, and this provided a conformation-based rationale for the failure of the synthesis of 1,1,6,6-tetramethyl[6](2,11)teropyrenophane.

Chapter 3 contains details of the synthesis of 1,1,n,n-tetramethyl[n](2,11)teropyrenophanes (n = 7–9) using a Wurtz / McMurry strategy, which proved to be more general than the double-McMurry strategy. The three teropyrenophanes were obtained in ca. 10 milligram quantities. Trends in the spectroscopic properties that accompany changes in the structure of the teropyrene system are discussed. A violation of Kasha’s rule was observed when the teropyrenophanes were irradiated at 260 nm.

The work described in the fourth Chapter concentrates on the development of gram-scale syntheses of 1,1,n,n-tetramethyl[n](2,11)teropyrenophanes (n = 7–10) using the Wurtz / McMurry strategy. Several major modifications to the original synthetic pathway had to be made to enable the first several steps to be performed comfortably on tens of grams of material. Solubility problems severely limited the amount of material that could be produced at a late stage of the synthetic pathways leading to the even-numbered members of the series (n = 8, 10). Ultimately, only 1,1,9,9-tetramethyl[9](2,11)teropyrenophane was synthesized on a multi-gram scale. In the final step of the synthesis, a valence isomerization / dehydrogenation (VID) reaction, the teropyrenophane was observed to become unstable under the conditions of its formation at n = 8. The synthesis of 1,1,10,10-tetramethyl[10](2,11)teropyrenophane was achieved for the first time, but only on a few hundred milligram scale.

In Chapter 5, the results of an investigation of the electrophilic aromatic bromination of the 1,1,n,n-tetramethyl[n](2,11)teropyrenophanes (n = 7–10) are presented. As 1,1,9,9-tetramethyl[9](2,11)teropyrenophane was the most abundant cyclophane, most of the work was performed on this compound. Reaction of this compound with varying amounts of bromine revealed that bromination occurs most rapidly at the symmetry-related 4, 9, 13 and 18 positions (teropyrene numbering) and that the 4,9,13,18-tetrabromide could be formed exclusively.
Subsequent bromination occurs selectively at the symmetry-related 6, 7, 15 and 16 positions (teropyrene numbering), but considerably more slowly. Only mixtures of penta-, hexa-, hepta- and octabromides could be formed. Bromination reactions of the higher and lower homologues (n = 7, 8 and 10) revealed that the reactivity of the teropyrene system increased with the degree of bend. Crystal structures of some tetra-, hexa-, hepta- and octa-brominated products were obtained.

The goal of the work described in Chapter 6 is to use 1,1,9,9-tetramethyl[9](2,11)teropyrenophane as a starting material for the synthesis of warped nanographenophanes. A bromination / Suzuki-Miyaura / cyclodehydrogenation sequence was unsuccessful, as was a C–H arylation / cyclodehydrogenation approach. Itami’s recently developed K-region-selective annulative π-extension (APEX) reaction proved to be successful, affording a giant [n]cyclophane with a C84 PAH. Attempted bay-region Diels-Alder reactions and some cursory host-guest chemistry of the teropyrenophanes are also discussed.

In Chapter 7, a synthetic approach toward a planar model compound, 2,11-di-t-butylteropyrene, is described. The synthesis could not be completed owing to solubility problems at the end of the synthetic pathway.
Abstract:
The main focus of this research is to design and develop a high-performance linear actuator based on a four-bar mechanism. The present work includes the detailed analysis (kinematics and dynamics), design, implementation and experimental validation of the newly designed actuator. High performance is characterized by the acceleration of the actuator end effector. The principle of the newly designed actuator is to network the four-bar rhombus configuration (in which some bars are extended to form an X shape) to attain high acceleration. Firstly, a detailed kinematic analysis of the actuator is presented and its kinematic performance is evaluated through MATLAB simulations. The dynamic equation of the actuator is obtained using the Lagrangian formulation. A SIMULINK control model of the actuator is developed using the dynamic equation. In addition, the Bond Graph methodology is presented for dynamic simulation. The Bond Graph model comprises individual component models of the actuator along with the control. The required torque was simulated using the Bond Graph model. Results indicate that high acceleration (around 20 g) can be achieved with a modest torque input (3 N·m or less). A practical prototype of the actuator was designed in SOLIDWORKS and then fabricated to verify the proof of concept. The design goal was to achieve a peak acceleration of more than 10 g at the midpoint of the travel when the end effector traverses the stroke length (around 1 m). The actuator is primarily designed to operate in standalone condition, with later use in a 3RPR parallel robot in view. A DC motor is used to drive the actuator, and a quadrature encoder attached to the motor is used to control the end effector. The associated control scheme of the actuator is analyzed and integrated with the physical prototype. In standalone experiments, the end effector achieved around 17 g acceleration (over a stroke from 0.2 m to 0.78 m), and the results are in good agreement with the predictions of the developed dynamic model. Finally, a Design of Experiments (DOE) based statistical approach is also introduced to identify the parametric combination that yields the greatest performance. Data are collected using the Bond Graph model. This approach is helpful in designing the actuator without much complexity.
Abstract:
Introduction: This case study documented the experiences of informal caregivers and service providers who participated in the first-time delivery of the First Link Learning Series from May–August 2013 in Newfoundland and Labrador. The aim of this study was to understand how informal caregivers of people with dementia experience this Internet-mediated health resource, and how Skype and YouTube can be used as tools for the Alzheimer Society of Newfoundland and Labrador to effectively deliver the First Link Learning Series. Methods: Sources of data included key informant interviews (n=3), pre-study and post-study interviews with informal dementia caregivers (n=2), institutional documentation, field notes, and YouTube analytics. Framework Analysis was used to make meaning of the qualitative data, and descriptive statistics were used to report on quantitative outcomes. Findings: Between 3% and 17% of registered First Link clients attended the learning series sessions; however, only two caregivers participated using Skype or YouTube. Framework Analysis revealed three shared themes: access, connection and privacy. Discussion: The themes helped to begin building theory about barriers and facilitators to Internet-mediated health resources for informal dementia caregivers. The experiences of service providers using the Internet to support clients served to begin building a case for the appropriateness of these media. A modified version of Dansky et al.’s (2006) theoretical framework for evaluating E-Health research, which situates the person/user in the model, helped guide the discussion and propose future directions for the study of Internet-based health resources for informal dementia caregivers.
Abstract:
Piotr Omenzetter and Simon Hoell's work within the Lloyd's Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.
Drying kinetic analysis of municipal solid waste using modified Page model and pattern search method
Abstract:
This work studied the drying kinetics of the organic fractions of municipal solid waste (MSW) samples with different initial moisture contents and presented a new method for determining drying kinetic parameters. A series of drying experiments at different temperatures was performed using a thermogravimetric technique. Based on the modified Page drying model and the general pattern search method, a new drying kinetic method was developed that uses multiple isothermal drying curves simultaneously. The new method fitted the experimental data more accurately than the traditional method. Drying kinetic behaviors under extrapolated conditions were also predicted and validated. The new method indicated that the drying activation energies for the samples with initial moisture contents of 31.1 and 17.2 % on a wet basis were 25.97 and 24.73 kJ mol⁻¹, respectively. These results are useful for drying process simulation and industrial dryer design. The new method can also be applied to determine the drying parameters of other materials with high reliability.
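For illustration only, the following is a minimal Python sketch of the kind of simultaneous multi-curve fit described above; it is not the authors' code, and the modified Page form MR = exp(-(kt)^n), the Arrhenius temperature dependence of k, the parameter values and the synthetic data are all assumptions, with scipy's Nelder-Mead direct search standing in for the general pattern search routine.

```python
# Minimal sketch (illustrative, not the authors' code): simultaneous fit of the
# modified Page model MR = exp(-(k*t)**n) to several isothermal drying curves,
# with an Arrhenius temperature dependence k(T) = A*exp(-Ea/(R*T)). The
# functional forms, parameter values and synthetic data are assumptions.
import numpy as np
from scipy.optimize import minimize

R = 8.314  # gas constant, J mol^-1 K^-1

def moisture_ratio(t, T, A, Ea, n):
    k = A * np.exp(-Ea / (R * T))      # Arrhenius rate constant
    return np.exp(-(k * t) ** n)       # modified Page drying model

def sse(params, curves):
    """Sum of squared errors over all isothermal curves fitted at once."""
    A, Ea, n = params
    return sum(np.sum((mr - moisture_ratio(t, T, A, Ea, n)) ** 2)
               for T, t, mr in curves)

# Hypothetical isothermal curves: (temperature [K], time [min], measured MR).
rng = np.random.default_rng(0)
times = np.linspace(0.0, 60.0, 13)
curves = [(T, times, moisture_ratio(times, T, 1.2e3, 26e3, 1.1)
           + 0.01 * rng.normal(size=times.size)) for T in (343.0, 373.0, 403.0)]

# A derivative-free direct search (Nelder-Mead) stands in here for the
# general pattern search method used in the paper.
res = minimize(sse, x0=[1.0e3, 25.0e3, 1.0], args=(curves,),
               method="Nelder-Mead", options={"maxiter": 5000})
A_fit, Ea_fit, n_fit = res.x
print(f"A = {A_fit:.3g} min^-1, Ea = {Ea_fit / 1000:.1f} kJ/mol, n = {n_fit:.2f}")
```

Fitting all isothermal curves with one shared parameter set, rather than curve by curve, is what ties the activation energy directly to the full data set.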
Abstract:
This book constitutes the refereed proceedings of the 14th International Conference on Parallel Problem Solving from Nature, PPSN 2016, held in Edinburgh, UK, in September 2016. A total of 93 revised full papers were carefully reviewed and selected from 224 submissions. The meeting began with four workshops, which offered an ideal opportunity to explore specific topics: intelligent transportation, landscape-aware heuristic search, natural computing in scheduling and timetabling, and advances in multi-modal optimization. PPSN XIV also included sixteen free tutorials giving attendees the opportunity to learn about new aspects: gray box optimization in theory; theory of evolutionary computation; graph-based and cartesian genetic programming; theory of parallel evolutionary algorithms; promoting diversity in evolutionary optimization: why and how; evolutionary multi-objective optimization; intelligent systems for smart cities; advances on multi-modal optimization; evolutionary computation in cryptography; evolutionary robotics - a practical guide to experiment with real hardware; evolutionary algorithms and hyper-heuristics; a bridge between optimization over manifolds and evolutionary computation; implementing evolutionary algorithms in the cloud; the attainment function approach to performance evaluation in EMO; runtime analysis of evolutionary algorithms: basic introduction; meta-model assisted (evolutionary) optimization. The papers are organized in topical sections on adaption, self-adaption and parameter tuning; differential evolution and swarm intelligence; dynamic, uncertain and constrained environments; genetic programming; multi-objective, many-objective and multi-level optimization; parallel algorithms and hardware issues; real-world applications and modeling; theory; diversity and landscape analysis.
Abstract:
Investigations of the optical response of subwavelength-structure arrays milled into thin metal films have revealed surprising phenomena, including reports of unexpectedly high transmission of light. Many studies have interpreted the optical coupling to the surface in terms of the resonant excitation of surface plasmon polaritons (SPPs), but other approaches involving composite diffracted evanescent waves (CDEW) have also been proposed. Here we present a series of measurements on very simple one-dimensional subwavelength structures to test the key properties of the surface waves, and compare them to the CDEW and SPP models. We find that the optical response of the silver metal surface proceeds in two steps: a diffractive perturbation in the immediate vicinity (2–3 µm) of the structure, followed by excitation of a persistent surface wave that propagates over tens of micrometres. The measured wavelength and phase of this persistent wave are significantly shifted from those expected for resonant excitation of a conventional SPP on a pure silver surface.
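For reference, the wavelength expected for a "conventional SPP", against which the persistent wave is compared, follows from the standard flat-interface dispersion relation; this is a textbook relation, not a result of the study, and the silver permittivity used below is an assumed illustrative value:

\[
k_{\mathrm{SPP}} = k_0 \sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}},
\qquad
\lambda_{\mathrm{SPP}} = \lambda_0 \sqrt{\frac{\varepsilon_m + \varepsilon_d}{\varepsilon_m \varepsilon_d}} .
\]

For an air interface (\(\varepsilon_d = 1\)) and \(\varepsilon_m \approx -33\) for silver near 850 nm, this gives \(\lambda_{\mathrm{SPP}} \approx 0.985\,\lambda_0\), only slightly shorter than the free-space wavelength; this is the baseline from which the reported shifts in wavelength and phase are measured.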
Abstract:
This paper proposes conceptual designs of multi-degree(s) of freedom (DOF) compliant parallel manipulators (CPMs) including 3-DOF translational CPMs and 6-DOF CPMs using a building block based pseudo-rigid-body-model (PRBM) approach. The proposed multi-DOF CPMs are composed of wire-beam based compliant mechanisms (WBBCMs) as distributed-compliance compliant building blocks (CBBs). Firstly, a comprehensive literature review of the design approaches for compliant mechanisms is conducted, and a building block based PRBM is then presented, which replaces the traditional kinematic sub-chain with an appropriate multi-DOF CBB. In order to obtain the decoupled 3-DOF translational CPMs (XYZ CPMs), two classes of kinematically decoupled 3-PPPR (P: prismatic joint, R: revolute joint) translational parallel mechanisms (TPMs) and 3-PPPRR TPMs are identified based on the type synthesis of rigid-body parallel mechanisms, and WBBCMs as the associated CBBs are further designed. Via replacing the traditional actuated P joint and the traditional passive PPR/PPRR sub-chain in each leg of the 3-DOF TPM with the counterpart CBBs (i.e. WBBCMs), a number of decoupled XYZ CPMs are obtained by appropriate arrangements. In order to obtain the decoupled 6-DOF CPMs, an orthogonally-arranged decoupled 6-PSS (S: spherical joint) parallel mechanism is first identified, and then two example 6-DOF CPMs are proposed using the building block based PRBM method. Among these designs, two types of monolithic XYZ CPM design with extended life are also presented.
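To make the pseudo-rigid-body idea concrete, the following minimal Python sketch computes the equivalent torsional spring stiffness of a single beam flexure using the standard PRBM constants from the compliant-mechanisms literature (characteristic radius factor ≈ 0.85, stiffness coefficient ≈ 2.65); the beam dimensions and material are hypothetical and the sketch is not taken from the paper.

```python
# Minimal sketch (illustrative): pseudo-rigid-body model (PRBM) of a single
# wire/beam flexure loaded by a transverse end force. Standard PRBM constants
# (gamma ~ 0.85, K_Theta ~ 2.65) are assumed; dimensions and material are
# hypothetical examples, not values from the paper.
gamma, K_theta = 0.85, 2.65

def prbm_torsional_stiffness(E, I, L):
    """Equivalent torsional spring constant K = gamma * K_Theta * E * I / L."""
    return gamma * K_theta * E * I / L

def rect_second_moment(b, h):
    """Second moment of area of a rectangular cross-section about the bending axis."""
    return b * h ** 3 / 12.0

E = 69e9               # aluminium Young's modulus, Pa (assumed material)
b, h = 10e-3, 0.5e-3   # beam width and thickness, m (assumed dimensions)
L = 40e-3              # beam length, m (assumed)

K = prbm_torsional_stiffness(E, rect_second_moment(b, h), L)
print(f"PRBM torsional spring stiffness: {K:.3f} N*m/rad")
```

In a building-block approach, stiffnesses of this kind are what the rigid-body model's joints inherit when a kinematic sub-chain is replaced by a compliant building block.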
Abstract:
This paper deals with a completely kinematostatically decoupled XY compliant parallel manipulator (CPM) composed of exactly-constrained compliant modules. A new 4-PP XY translational parallel mechanism (TPM) with a new topological structure is first proposed, in which each pair of P (P: prismatic) joints on the base in two non-adjacent legs is rigidly connected. A novel 4-PP XY CPM is then obtained by replacing each traditional P joint on the base in the 4-PP XY TPM with a compound basic parallelogram module (CBPM) and replacing each traditional P joint on the motion stage with a basic parallelogram module (BPM). An approximate analytical model is derived and compared with the FEA (finite element analysis) model and with experiments for a case study. The proposed novel XY CPM has a compact configuration with good dynamics, and is able to well constrain the parasitic rotation and the cross-axis coupling of the motion stage. The cross-axis motion of the input stage can be completely eliminated, and the lost motion between the input stage and the motion stage is significantly reduced.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
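As an illustration of the simpler end of this observer-model spectrum, the sketch below implements a non-prewhitening matched-filter detectability index and a naive CNR on synthetic signal-present/signal-absent ROIs; it is not the dissertation's implementation, and the lesion size, contrast and noise level are assumptions.

```python
# Minimal sketch (illustrative): a non-prewhitening (NPW) matched-filter
# observer and a naive CNR computed from ensembles of signal-present /
# signal-absent ROI images. The synthetic lesion and noise are assumptions.
import numpy as np

def npw_detectability(signal_present, signal_absent):
    """d' of an NPW matched filter; inputs are stacks of ROIs, shape (n, h, w)."""
    template = signal_present.mean(0) - signal_absent.mean(0)   # NPW template
    t_sp = np.tensordot(signal_present, template, axes=([1, 2], [0, 1]))
    t_sa = np.tensordot(signal_absent,  template, axes=([1, 2], [0, 1]))
    return (t_sp.mean() - t_sa.mean()) / np.sqrt(0.5 * (t_sp.var() + t_sa.var()))

def cnr(signal_present, signal_absent, lesion_mask):
    """Naive CNR: mean lesion-region contrast over background noise."""
    contrast = (signal_present.mean(axis=0)[lesion_mask].mean()
                - signal_absent.mean(axis=0)[lesion_mask].mean())
    return contrast / signal_absent.std()

# Hypothetical example: a faint disc-shaped lesion in white noise.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
mask = (xx - 32) ** 2 + (yy - 32) ** 2 < 8 ** 2
lesion = 5.0 * mask                                   # ~5 HU disc lesion
sa = rng.normal(0.0, 10.0, (200, 64, 64))             # signal-absent ROIs
sp = rng.normal(0.0, 10.0, (200, 64, 64)) + lesion    # signal-present ROIs

print("d' (NPW):", npw_detectability(sp, sa))
print("CNR:", cnr(sp, sa, mask))
```

Channelized Hotelling observers follow the same ensemble logic but project each ROI onto a small set of frequency-selective channels before forming the test statistic.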
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
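For orientation, the sketch below shows the standard ensemble estimate of a 2D noise power spectrum from repeated scans using rectangular ROIs; the dissertation's novel irregular-ROI method is not reproduced here, and the pixel size and synthetic repeat images are assumptions.

```python
# Minimal sketch (illustrative, standard rectangular-ROI approach): ensemble
# estimate of the 2D noise power spectrum (NPS) from repeated scans of the
# same phantom slice. Pixel size and synthetic repeats are assumptions.
import numpy as np

def nps_2d(repeats, pixel_size_mm):
    """NPS from a stack of repeated images, shape (n_repeats, h, w)."""
    noise = repeats - repeats.mean(axis=0)          # remove ensemble mean (structure)
    h, w = noise.shape[1:]
    dft = np.fft.fftshift(np.fft.fft2(noise, axes=(1, 2)), axes=(1, 2))
    # NPS = (dx*dy / (Nx*Ny)) * <|DFT(noise)|^2> over the ensemble
    return (pixel_size_mm ** 2 / (h * w)) * np.mean(np.abs(dft) ** 2, axis=0)

# Hypothetical repeated scans: 50 noise realizations of a 128x128 ROI.
rng = np.random.default_rng(1)
repeats = rng.normal(0.0, 12.0, (50, 128, 128))      # ~12 HU white noise
nps = nps_2d(repeats, pixel_size_mm=0.5)

# Sanity check: the NPS should integrate back to the pixel variance.
df = 1.0 / (0.5 * 128)                               # frequency bin width, mm^-1
print("pixel variance from NPS:", nps.sum() * df ** 2, "direct:", repeats.var())
```

Subtracting the ensemble mean rather than a fitted background is what makes repeated scans valuable: anatomical or phantom structure cancels exactly, leaving only quantum noise.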
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
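A minimal sketch of what such an analytical lesion model might look like is given below; the radially symmetric profile with a sigmoidal edge, the parameter values and the synthetic background are illustrative assumptions, not the dissertation's actual models.

```python
# Minimal sketch (illustrative): an analytical, radially symmetric lesion model
# with size, contrast and edge-sharpness parameters, voxelized onto a pixel
# grid and added to an image ROI to form a "hybrid" image. All values are
# assumptions made for demonstration.
import numpy as np

def lesion_profile(r, radius_mm, contrast_hu, edge_mm):
    """Lesion contrast vs. radial distance: flat core with a sigmoidal edge."""
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_mm))

def voxelize_lesion(shape, pixel_size_mm, center_px, radius_mm, contrast_hu, edge_mm):
    """Rasterize the analytical lesion onto a 2D pixel grid (HU values)."""
    yy, xx = np.indices(shape)
    r = pixel_size_mm * np.hypot(yy - center_px[0], xx - center_px[1])
    return lesion_profile(r, radius_mm, contrast_hu, edge_mm)

# Hypothetical patient ROI (uniform liver background + noise) and insertion.
rng = np.random.default_rng(2)
roi = 60.0 + rng.normal(0.0, 12.0, (128, 128))                # ~60 HU liver
lesion = voxelize_lesion(roi.shape, pixel_size_mm=0.7,
                         center_px=(64, 64), radius_mm=6.0,
                         contrast_hu=-15.0, edge_mm=1.0)       # subtle hypodense lesion
hybrid = roi + lesion                                          # "hybrid" image
print("lesion min/max HU:", lesion.min(), lesion.max())
print("hybrid ROI mean HU:", hybrid.mean())
```

Because the inserted lesion is defined analytically, its true size, contrast and location are known exactly, which is the key advantage the abstract attributes to hybrid images.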
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Combining the advantages of both parallel mechanisms and compliant mechanisms, a compliant parallel mechanism with two rotational DOFs (degrees of freedom) is designed to meet the requirements of a lightweight and compact pan-tilt platform. Firstly, two commonly used design methods, i.e. direct substitution and FACT (Freedom and Constraint Topology), are applied to design the configuration of the pan-tilt system, and the similarities and differences of the two design alternatives are compared. Then an inverse kinematic analysis of the candidate mechanism is carried out using the pseudo-rigid-body model (PRBM), and the Jacobian related to its differential kinematics is further derived to help the designer perform dynamic analysis of the 8R compliant mechanism. In addition, the maximum stress occurring within the mechanism’s workspace is examined by finite element analysis. Finally, a method to determine the joint damping of the flexure hinges is presented, which aims at exploring the effect of joint damping on actuator selection and real-time control. To the authors’ knowledge, almost no existing literature addresses this issue.
Abstract:
With the importance of renewable energy well established worldwide, and targets for such energy quantified in many cases, there exists considerable interest in the assessment of wind and wave devices. While the individual components of these devices are often relatively well understood and the aspects of energy generation well researched, there seems to be a gap in the understanding of these devices as a whole, especially in the field of their dynamic responses under operational conditions. The mathematical modelling and estimation of their dynamic responses are more evolved, but research directed towards the testing of these devices still requires significant attention. Model-free indicators of the dynamic responses of these devices are important since they reflect the as-deployed behaviour of the devices when the exposure conditions, along with the structural dimensions, are scaled reasonably correctly. This paper demonstrates how the Hurst exponent of the dynamic responses of a monopile exposed to different exposure conditions in an ocean wave basin can be used as a model-free indicator of various responses. The scaled model is exposed to Froude-scaled waves and tested under different exposure conditions. The analysis and interpretation are carried out in a model-free and output-only environment, with only some preliminary ideas regarding the input of the system. The analysis indicates how the Hurst exponent can be an interesting descriptor for comparing and contrasting various scenarios of dynamic response conditions.
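For readers unfamiliar with the descriptor, the following minimal Python sketch estimates a Hurst exponent by classical rescaled-range (R/S) analysis; it is not the paper's implementation, and the white-noise test signal and window sizes are assumptions.

```python
# Minimal sketch (illustrative): rescaled-range (R/S) estimate of the Hurst
# exponent of a measured response time series. The synthetic white-noise
# record stands in for a monopile response signal.
import numpy as np

def hurst_rs(x, window_sizes):
    """Hurst exponent from the slope of log(R/S) vs. log(window size)."""
    rs = []
    for w in window_sizes:
        vals = []
        for i in range(len(x) // w):
            seg = x[i * w:(i + 1) * w]
            dev = np.cumsum(seg - seg.mean())        # cumulative deviation
            r = dev.max() - dev.min()                # range
            s = seg.std(ddof=1)                      # standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(3)
signal = rng.normal(size=2 ** 16)      # white noise: Hurst exponent nominally 0.5
windows = np.array([256, 512, 1024, 2048, 4096])
print("Estimated Hurst exponent:", hurst_rs(signal, windows))
```

Simple R/S estimates carry a small positive bias at short window lengths, which is why the sketch restricts itself to reasonably long windows; values well above or below 0.5 indicate persistent or anti-persistent response behaviour, respectively.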
Abstract:
Multi-output Gaussian processes provide a convenient framework for multi-task problems. An illustrative and motivating example of a multi-task problem is multi-region electrophysiological time-series data, where experimentalists are interested in both power and phase coherence between channels. Recently, the spectral mixture (SM) kernel was proposed to model the spectral density of a single task in a Gaussian process framework. This work develops a novel covariance kernel for multiple outputs, called the cross-spectral mixture (CSM) kernel. This new, flexible kernel represents both the power and phase relationship between multiple observation channels. The expressive capabilities of the CSM kernel are demonstrated through implementation of 1) a Bayesian hidden Markov model, where the emission distribution is a multi-output Gaussian process with a CSM covariance kernel, and 2) a Gaussian process factor analysis model, where factor scores represent the utilization of cross-spectral neural circuits. Results are presented for measured multi-region electrophysiological data.
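As a schematic illustration of the idea (not the paper's exact CSM parameterization), the sketch below builds a spectral-mixture style cross-covariance in which each spectral component carries per-channel amplitudes and phases, so that both power and phase relationships between two channels can be expressed; all parameter values are assumptions.

```python
# Minimal sketch (schematic, not the paper's exact CSM kernel): a
# spectral-mixture style cross-covariance between two observation channels.
# Each component q has a spectral mean, bandwidth, and per-channel amplitude
# and phase; all numbers below are assumed for demonstration.
import numpy as np

def sm_cross_cov(tau, amps_c, amps_d, phases_c, phases_d, means, variances):
    """k_cd(tau) = sum_q a_cq a_dq exp(-2 pi^2 tau^2 v_q)
                   cos(2 pi mu_q tau + phi_cq - phi_dq)."""
    tau = np.asarray(tau)[..., None]                  # broadcast over components q
    envelope = np.exp(-2.0 * np.pi ** 2 * tau ** 2 * variances)
    oscill = np.cos(2.0 * np.pi * means * tau + (phases_c - phases_d))
    return np.sum(amps_c * amps_d * envelope * oscill, axis=-1)

# Hypothetical two-component kernel: 8 Hz and 40 Hz bands with a phase offset.
means = np.array([8.0, 40.0])            # spectral means (Hz)
variances = np.array([1.0, 4.0])         # spectral bandwidths
a_c = np.array([1.0, 0.5]); a_d = np.array([0.8, 0.7])
phi_c = np.array([0.0, 0.0]); phi_d = np.array([np.pi / 4, 0.0])

lags = np.linspace(-0.5, 0.5, 1001)      # seconds
k_cd = sm_cross_cov(lags, a_c, a_d, phi_c, phi_d, means, variances)
print("cross-covariance at zero lag:", k_cd[lags.size // 2])
```

Setting the two channels' phases equal and the channels identical recovers an ordinary single-output spectral mixture kernel, which is the sense in which the cross-spectral construction generalizes it.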
Abstract:
This master's thesis deals with ruin theory, and more specifically with actuarial surplus models in which dividends are paid. We study in detail a model called the gamma-omega model, which allows flexibility in the timing of dividend payments as well as a non-standard notion of ruin for the company. Several extensions of the literature are made, motivated by solvency-related considerations. The first consists in adapting results from a 2011 article to a new model modified by the addition of a solvency constraint. The second, more substantial, consists in proving the optimality of a barrier strategy for dividend payments in the gamma-omega model. The third concerns the adaptation of a 2003 theorem on the optimality of barriers under a solvency constraint, which had not been proven in the case of periodic dividends. We also give results analogous to the 2011 article in the case of a barrier under the solvency constraint. Finally, the last extension concerns two different approaches to adopt when the surplus falls below the ruin threshold. A forced liquidation of the surplus is put in place in the first case, alongside a liquidation at the first opportunity in the event of poor dividend prospects. A capital injection process is tried in the second case. We study the impact of these solutions on the amount of expected dividends. Numerical illustrations are provided for each section, where relevant.
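For illustration, the following Monte Carlo sketch estimates expected discounted dividends under a barrier strategy in a gamma-omega style model (Brownian surplus, dividend decisions at Poisson times with rate gamma, bankruptcy at rate omega while the surplus is negative); the dynamics, parameter values and the constant omega are assumptions made for demonstration and the sketch is not taken from the thesis.

```python
# Minimal Monte Carlo sketch (illustrative, assumed parameter values): expected
# discounted dividends under a barrier strategy in a gamma-omega style model.
# Dividends can only be paid at Poisson decision times (rate gamma), bringing
# the surplus down to the barrier b; while the surplus is negative, bankruptcy
# occurs at rate omega; dividends are discounted at rate delta.
import numpy as np

def expected_discounted_dividends(x0, b, mu=0.05, sigma=0.3, gamma=1.0,
                                  omega=0.5, delta=0.03, horizon=100.0,
                                  dt=0.01, n_paths=5000, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(horizon / dt)
    x = np.full(n_paths, x0, dtype=float)
    alive = np.ones(n_paths, dtype=bool)
    divs = np.zeros(n_paths)
    for k in range(1, n_steps + 1):
        t = k * dt
        # Brownian surplus dynamics over one time step (surviving paths only)
        x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=alive.sum())
        # bankruptcy only while the surplus is negative, at rate omega
        bankrupt = alive & (x < 0) & (rng.random(n_paths) < omega * dt)
        alive &= ~bankrupt
        # Poisson dividend-decision times at rate gamma; pay the excess above b
        decide = alive & (rng.random(n_paths) < gamma * dt) & (x > b)
        divs[decide] += np.exp(-delta * t) * (x[decide] - b)
        x[decide] = b
    return divs.mean()

# Hypothetical comparison of two barrier levels from the same initial surplus.
for barrier in (0.5, 1.0):
    v = expected_discounted_dividends(x0=1.0, b=barrier)
    print(f"barrier b = {barrier}: expected discounted dividends ~ {v:.3f}")
```

A sweep over barrier levels of this kind is the crude numerical counterpart of the optimality questions studied analytically in the thesis; a solvency constraint would simply restrict the admissible barrier levels.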