904 results for Model based control


Relevance: 90.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015

Relevance: 90.00%

Publisher:

Abstract:

This paper presents a surrogate-model-based optimization of a doubly-fed induction generator (DFIG) machine winding design for maximizing power yield. Based on site-specific wind profile data and the machine's previous operational performance, the DFIG's stator and rotor windings are optimized so that maximum efficiency coincides with the actual operating conditions, for rewinding purposes. Particle swarm optimization-based surrogate optimization techniques are used in conjunction with the finite element method to optimize the machine design using the limited information available on the site-specific wind profile and generator operating conditions. A response surface method in the surrogate model is developed to formulate the design objectives and constraints. In addition, the machine tests and efficiency calculations follow IEEE Standard 112-B. Numerical and experimental results validate the effectiveness of the proposed techniques.
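As an illustration of the kind of surrogate-assisted search described above, the sketch below fits a quadratic response surface to samples of a placeholder objective (standing in for the expensive FEM evaluation) and then minimises the surrogate with a basic particle swarm. The two design variables, the objective function, and all parameter values are illustrative assumptions, not the models used in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the expensive FEM evaluation (hypothetical objective:
# e.g. negative efficiency as a function of two winding design variables).
def fem_objective(x):
    return (x[0] - 0.6) ** 2 + 2.0 * (x[1] - 0.3) ** 2

# 1) Sample the design space and fit a quadratic response surface.
X = rng.uniform(0.0, 1.0, size=(30, 2))
y = np.array([fem_objective(x) for x in X])
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x):
    a = np.array([1.0, x[0], x[1], x[0] ** 2, x[1] ** 2, x[0] * x[1]])
    return a @ coef

# 2) Minimise the surrogate with a basic particle swarm.
n_particles, n_iter = 20, 50
pos = rng.uniform(0.0, 1.0, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([surrogate(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    val = np.array([surrogate(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("surrogate optimum (design variables):", gbest)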

Relevance: 90.00%

Publisher:

Abstract:

Starting from the database of Operophtera brumata L. collected between 1973 and 2000 by the Light Trap Network in Hungary, we introduce a simple theta-logistic population dynamical model based only on endogenous and exogenous factors. We create an indicator set from which we can choose the elements that improve the fitting results most effectively. We then extend the basic model with additive climatic factors. The parameter optimization is based on minimizing the root mean square error, and the best model is chosen according to the Akaike Information Criterion. Finally, we run the calibrated extended model with daily outputs of the regional climate model RegCM3.1, taking 1961-1990 as the reference period and 2021-2050 and 2071-2100 as future prediction periods. The results of the three time intervals are fitted with Beta distributions and compared statistically. The expected changes are discussed.
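A minimal sketch of the model structure described above: a theta-logistic update fitted by minimising the RMSE, with the AIC used to compare the basic and climate-extended variants. The synthetic abundance series, the single climate covariate, and the parameter values are illustrative assumptions, not the Hungarian light-trap data.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic yearly abundance series standing in for the light-trap counts,
# generated from a theta-logistic process with noise.
N = np.empty(28)
N[0] = 80.0
climate = rng.normal(0.0, 1.0, size=28)   # hypothetical additive climatic factor
for t in range(27):
    growth = 0.4 * (1.0 - (N[t] / 150.0) ** 1.2) + 0.1 * climate[t]
    N[t + 1] = N[t] * np.exp(growth + rng.normal(0, 0.05))

def simulate(params, use_climate):
    r, K, theta = params[0], abs(params[1]) + 1e-9, params[2]
    beta = params[3] if use_climate else 0.0
    pred = np.empty_like(N)
    pred[0] = N[0]
    for t in range(len(N) - 1):
        growth = r * (1.0 - (N[t] / K) ** theta) + beta * climate[t]
        pred[t + 1] = N[t] * np.exp(growth)
    return pred

def rmse(params, use_climate):
    return np.sqrt(np.mean((simulate(params, use_climate) - N) ** 2))

def fit(use_climate):
    x0 = [0.3, 140.0, 1.0, 0.0]
    res = minimize(rmse, x0, args=(use_climate,), method="Nelder-Mead")
    k = 4 if use_climate else 3            # number of free parameters
    n = len(N)
    # AIC computed from the RMSE under a Gaussian error assumption.
    aic = n * np.log(res.fun ** 2) + 2 * k
    return res.fun, aic

print("basic model    (RMSE, AIC):", fit(False))
print("extended model (RMSE, AIC):", fit(True))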

Relevance: 90.00%

Publisher:

Abstract:

An Automatic Vehicle Location (AVL) system is a computer-based vehicle tracking system that is capable of determining a vehicle's location in real time. As a major technology of the Advanced Public Transportation System (APTS), AVL systems have been widely deployed by transit agencies for purposes such as real-time operation monitoring, computer-aided dispatching, and arrival time prediction. AVL systems make a large amount of transit performance data available that are valuable for transit performance management and planning purposes. However, the difficulty of extracting useful information from the huge spatial-temporal database has hindered off-line applications of the AVL data. In this study, a data mining process, including data integration, cluster analysis, and multiple regression, is proposed. The AVL-generated data are first integrated into a Geographic Information System (GIS) platform. The model-based cluster method is employed to investigate the spatial and temporal patterns of transit travel speeds, which may be easily translated into travel time. The transit speed variations along the route segments are identified. Transit service periods such as morning peak, mid-day, afternoon peak, and evening periods are determined based on analyses of transit travel speed variations for different times of day. The seasonal patterns of transit performance are investigated using analysis of variance (ANOVA). Travel speed models based on the clustered time-of-day intervals are developed using important factors identified as having significant effects on speed for different time-of-day periods. It was found that transit performance varied across seasons and time-of-day periods. The geographic location of a transit route segment also plays a role in the variation of transit performance. The results of this research indicate that advanced data mining techniques have good potential for providing automated techniques to assist transit agencies in service planning, scheduling, and operations control.
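Model-based clustering of this kind is commonly implemented with Gaussian mixture models whose component count is chosen by an information criterion. The sketch below, using synthetic AVL-style speed records rather than the study's data, clusters time-of-day and travel speed with scikit-learn and reads the clusters as service periods; the data and parameter choices are assumptions for illustration.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic AVL-style records: time of day (hours) and link travel speed (mph).
hours = rng.uniform(5, 23, size=2000)
speed = (25 - 8 * np.exp(-0.5 * ((hours - 8) / 1.0) ** 2)
            - 7 * np.exp(-0.5 * ((hours - 17) / 1.2) ** 2)
            + rng.normal(0, 2, size=hours.size))
X = np.column_stack([hours, speed])

# Model-based clustering: fit Gaussian mixtures, pick the component count
# with the lowest BIC, then interpret clusters as time-of-day periods.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(2, 7)}
best_k = min(models, key=lambda k: models[k].bic(X))
labels = models[best_k].predict(X)

for c in range(best_k):
    sel = labels == c
    print(f"cluster {c}: hours {hours[sel].min():4.1f}-{hours[sel].max():4.1f}, "
          f"mean speed {speed[sel].mean():4.1f} mph")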

Relevance: 90.00%

Publisher:

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated capacity requirements of client virtual machines (VMs) while renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they are actually experiencing; on the other hand, administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue for a data center.
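To illustrate how a learned performance model can drive VM sizing, the sketch below trains a Support Vector Machine regressor mapping (CPU share, memory share) to response time on synthetic profiling data, then searches for the smallest allocation whose predicted response time meets a hypothetical SLA target. The features, target metric, and SLA threshold are assumptions, not the thesis's experimental setup.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Synthetic profiling data: (CPU share, memory share) -> response time (ms).
# In practice these would come from controlled benchmark runs of the VM.
alloc = rng.uniform(0.1, 1.0, size=(300, 2))
resp = 50 / alloc[:, 0] + 30 / alloc[:, 1] + rng.normal(0, 5, size=300)

model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=1.0))
model.fit(alloc, resp)

# VM sizing sketch: smallest allocation whose predicted response time
# meets a hypothetical SLA target of 150 ms.
grid = np.array([[c, m] for c in np.linspace(0.1, 1.0, 19)
                         for m in np.linspace(0.1, 1.0, 19)])
pred = model.predict(grid)
ok = grid[pred <= 150.0]
best = ok[np.argmin(ok.sum(axis=1))]
print(f"suggested sizing: cpu={best[0]:.2f}, mem={best[1]:.2f}")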

Relevance: 90.00%

Publisher:

Abstract:

The cause of childhood acute lymphoblastic leukemia (ALL) remains unknown, but male gender is a risk factor, and among ethnicities, Hispanics have the highest risk. In this dissertation, we explored correlations among genetic polymorphisms, birth characteristics, and the risk of childhood ALL in a multi-ethnic sample of 161 cases and 231 controls recruited contemporaneously (2007-2012) in Houston, TX. We first examined three lymphoma risk markers, since lymphoma and ALL both stem from lymphoid cells. Of these, rs2395185 showed a risk association in non-Hispanic White males (OR=2.8, P=0.02; Pinteraction=0.03 for gender), but not in Hispanics. We verified previously known risk associations to validate the case-control sample. Mutations of HFE (C282Y, H63D) were genotyped to test whether iron-regulatory gene (IRG) variants known to elevate iron levels increase childhood ALL risk. Being positive for either polymorphism yielded only a modestly elevated OR in males, which increased to 2.96 (P=0.01) in the presence of a particular transferrin receptor (TFRC) genotype for rs3817672 (Pinteraction=0.04). SNP rs3817672 itself showed an ethnicity-specific association (Pinteraction=0.02 for ethnicity). We then examined additional IRG SNPs (rs422982, rs855791, rs733655), which showed risk associations in males (ORs=1.52 to 2.60). A polygenic model based on the number of polymorphic alleles in five IRG SNPs revealed a linear increase in risk (OR=2.00 per incremental change; P=0.002). Having three or more alleles compared with none was associated with increased risk in males (OR=4.12; P=0.004). Significant risk associations with childhood ALL were found for birth length (OR=1.18 per inch, P=0.04), high birth weight (>4,000 g) (OR=1.93, P=0.01), and gestational age (OR=1.10 per week, P=0.04). We observed a negative correlation between HFE SNP rs9366637 and gestational age (P=0.005), again stronger in males (P=0.001) and interacting with TFRC (Pinteraction=0.05). Our results showed that (i) ALL risk markers do not show universal associations across ethnicities or between genders, (ii) IRG SNPs modify ALL risk, presumably through their effects on iron levels, and (iii) a negative correlation between an HFE SNP and gestational age exists, which implicates an iron-related mechanism. The results suggest that currently unregulated supplemental iron intake may have implications for childhood ALL development.
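A per-allele odds ratio of the kind reported for the polygenic model is typically estimated by logistic regression of case-control status on the allele count. The sketch below shows that calculation on synthetic genotype data; the sample size, allele frequencies, and effect size are illustrative assumptions, not the study's data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Synthetic case-control data: total polymorphic allele count across five
# hypothetical IRG SNPs (0-10) and disease status.
n = 400
alleles = rng.binomial(10, 0.3, size=n)
logit_p = -1.5 + 0.35 * alleles            # hypothetical true per-allele effect
status = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(alleles.astype(float))
fit = sm.Logit(status, X).fit(disp=False)

beta = fit.params[1]
ci = np.asarray(fit.conf_int())[1]
print(f"OR per additional allele: {np.exp(beta):.2f} "
      f"(95% CI {np.exp(ci[0]):.2f}-{np.exp(ci[1]):.2f}), p={fit.pvalues[1]:.3g}")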

Relevance: 90.00%

Publisher:

Abstract:

Humanity has reached a time of unprecedented technological development. Science has produced, and continues to produce, technologies that allow us to understand the universe and the laws that govern it ever more deeply, and to try to coexist without destroying the planet we live on. One of the main challenges of the 21st century is to develop new sources of clean, renewable energy able to sustain our growth and lifestyle, and it is the duty of every researcher to engage in and contribute to this effort. In this context, wind power presents itself as one of the great promises for the future of electricity generation. Despite being somewhat older than other renewable energy sources, wind power still offers a wide field for improvement. The development of new generator control techniques, together with research laboratories specializing in wind generation, is one of the key points for improving the performance, efficiency, and reliability of the system. Appropriate control of the back-to-back converter scheme allows wind turbines based on the doubly-fed induction generator to operate in variable-speed mode, whose benefits include maximum power extraction, reactive power injection, and mechanical stress reduction. The generator-side converter provides control of the active and reactive power injected into the grid, whereas the grid-side converter provides control of the DC-link voltage and bidirectional power flow. The conventional control structure uses PI controllers with feed-forward compensation of the cross-coupling dq terms. This control technique is sensitive to model uncertainties, and the compensation of the dynamic dq terms results in a competing control strategy. To overcome these problems, this thesis proposes a robust internal-model-based state-feedback control structure that eliminates the cross-coupling terms and thereby improves the generator drive as well as its dynamic behavior during sudden changes in wind speed. The conventional control approach is compared with the proposed control technique for DFIG wind turbine control under both steady and gusty wind conditions. Moreover, this thesis also proposes a wind turbine emulator, developed to recreate realistic conditions in the laboratory and to subject the generator to a range of wind speed conditions.
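The conventional scheme contrasted in this thesis, PI current control with feed-forward compensation of the cross-coupling dq terms, can be sketched on a simplified rotor-current model. The plant parameters, gains, and set-points below are illustrative assumptions, not those of the thesis test rig, and the sketch shows only the conventional decoupled PI loop, not the proposed internal-model state-feedback design.

import numpy as np

# Simplified rotor-current dynamics in the dq frame (illustrative parameters):
#   L_sigma * di_d/dt = v_d - R*i_d + w_slip*L_sigma*i_q
#   L_sigma * di_q/dt = v_q - R*i_q - w_slip*L_sigma*i_d
R, L_sigma, w_slip = 0.5, 0.05, 2 * np.pi * 5
dt, kp, ki = 1e-4, 10.0, 400.0

i = np.zeros(2)                 # [i_d, i_q]
integ = np.zeros(2)
ref = np.array([1.0, -0.5])     # current set-points (per unit, hypothetical)

for _ in range(5000):
    err = ref - i
    integ += err * dt
    v_pi = kp * err + ki * integ
    # Feed-forward decoupling of the cross-coupling dq terms.
    v = v_pi + np.array([-w_slip * L_sigma * i[1],
                          w_slip * L_sigma * i[0]])
    di = (v - R * i + w_slip * L_sigma * np.array([i[1], -i[0]])) / L_sigma
    i += di * dt

print("steady-state currents:", np.round(i, 3), "targets:", ref)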

Relevance: 90.00%

Publisher:

Abstract:

Purpose: Traditionally, it has been thought that no binocular combination occurs in amblyopia. However, there is a growing body of evidence that there are intact binocular mechanisms in amblyopia that are rendered inactive under normal viewing conditions due to imbalanced monocular inputs. Georgeson and Wallis (2014) recently introduced a novel method to investigate fusion, suppression and diplopia in the normal population. We have modified this method to assess binocular interactions in amblyopia. Methods: Ten amblyopic and ten control subjects viewed briefly presented (200 ms) pairs of dichoptically separated horizontal Gaussian-blurred edges. Subjects reported one central edge, one offset edge, or a double edge as the vertical disparity was manipulated. The experiment was conducted at a range of spatial scales (blur widths of 4, 8, 16, and 32 arc min) and contrasts. Our model, based on Georgeson and Wallis (2014), converted subjects' responses into probabilities of fusion, suppression, and diplopia. Results: When normal participants were presented with equal contrast to each eye, the probability of fusion gradually decreased with increasing disparity, as the probability of diplopia gradually increased. Normal participants experienced suppression in only a small proportion of the trials. The pattern was consistent across all edge blurs. Interestingly, the majority of amblyopes had a comparable pattern of fusion, i.e. decreasing probability with increasing disparity. However, with increasing disparity the amblyopes tended to suppress the amblyopic eye, experiencing diplopia in only a small proportion of trials, particularly at large blurs. Increasing the interocular contrast offset in favour of the amblyopic eye normalized the pattern of data in a way similar to normal participants. There were some interesting exceptions: strong suppressors for whom our contrast range was inadequate, and one case in which diplopia dominated. Conclusions: This task is suitable for assessing binocular interactions in amblyopic participants and provides a way to quantify the relationship between fusion, suppression and diplopia. In agreement with previous studies, our data indicate the presence of binocular mechanisms in amblyopia. A contrast offset favouring the amblyopic eye normalizes the measured binocular interactions in the amblyopic visual system.
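The first step of such an analysis, tabulating the three-alternative responses into probabilities of fusion, suppression and diplopia as a function of disparity, can be sketched as below. The trial data and the disparity dependence used to generate them are synthetic assumptions; the actual model of Georgeson and Wallis (2014) is not reproduced here.

import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Synthetic trial data: each row is one 200 ms presentation at a given
# vertical disparity (arc min) together with the reported percept.
disparities = np.repeat([0, 4, 8, 16, 32], 60)
percepts = []
for d in disparities:
    p_fuse = np.exp(-d / 10.0)               # hypothetical disparity dependence
    p_rest = 1 - p_fuse
    percepts.append(rng.choice(["fusion", "suppression", "diplopia"],
                               p=[p_fuse, 0.1 * p_rest, 0.9 * p_rest]))

df = pd.DataFrame({"disparity": disparities, "percept": percepts})

# Probability of each percept as a function of disparity.
prob = (df.groupby("disparity")["percept"]
          .value_counts(normalize=True)
          .unstack(fill_value=0.0))
print(prob.round(2))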

Relevance: 90.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
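One of the observer models named above, the non-prewhitening matched filter, yields a detectability index that can be computed in the frequency domain from the task function (the expected lesion signal) and the noise power spectrum. The sketch below uses an illustrative Gaussian task and an assumed ramp-like NPS; it is a minimal frequency-domain calculation, not the dissertation's implementation.

import numpy as np

# Illustrative 2D task function: Gaussian lesion profile (expected signal).
n, px = 128, 0.5                              # grid size, pixel size (mm)
x = (np.arange(n) - n / 2) * px
X, Y = np.meshgrid(x, x)
signal = 20.0 * np.exp(-(X**2 + Y**2) / (2 * 2.0**2))    # 20 HU, 2 mm sigma

# Illustrative noise power spectrum (ramp-like, falling at high frequency).
f = np.fft.fftfreq(n, d=px)
FX, FY = np.meshgrid(f, f)
fr = np.hypot(FX, FY)
nps = fr * np.exp(-(fr / 0.4) ** 2) * 50.0 + 1e-6         # HU^2 mm^2

# Non-prewhitening matched-filter detectability index (discrete form):
#   d'^2 = ( sum |S|^2 df )^2 / sum ( |S|^2 * NPS ) df
S2 = np.abs(np.fft.fft2(signal) * px**2) ** 2
df2 = (1.0 / (n * px)) ** 2                               # frequency-bin area
d_prime = (S2.sum() * df2) / np.sqrt((S2 * nps).sum() * df2)
print(f"NPW detectability index d' = {d_prime:.1f}")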

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms and texture should be considered when assessing image quality of iterative algorithms.
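A conventional ensemble NPS estimate from repeated noise ROIs can be sketched as below: each detrended ROI is Fourier transformed, the squared magnitudes are averaged over the ensemble, and the result is scaled by pixel area over ROI size. The ROIs here are square synthetic white-noise patches; the dissertation's handling of irregularly shaped ROIs is not reproduced.

import numpy as np

rng = np.random.default_rng(6)

def nps_2d(rois, px):
    """Ensemble NPS estimate from square, detrended noise ROIs."""
    n = rois.shape[-1]
    spectra = []
    for roi in rois:
        detrended = roi - roi.mean()                     # remove the mean (DC) term
        spectra.append(np.abs(np.fft.fft2(detrended)) ** 2)
    return (px * px / (n * n)) * np.mean(spectra, axis=0)

# Synthetic ensemble of 50 repeated-scan noise ROIs (white noise, sigma = 10 HU).
n, px = 64, 0.5
rois = rng.normal(0.0, 10.0, size=(50, n, n))
nps = nps_2d(rois, px)

# Sanity check: integrating the NPS over frequency recovers the pixel variance.
var_from_nps = nps.sum() * (1.0 / (n * px)) ** 2
print(f"variance from NPS: {var_from_nps:.1f}  (expected ~100)")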

To move beyond assessing only noise properties in textured phantoms and towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
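The idea of an analytical lesion model that is voxelized and inserted into a patient image can be illustrated with a simple radially symmetric lesion whose edge follows a sigmoid profile. The functional form, contrast, and background below are assumptions for illustration, not the models fitted to real lesions in the dissertation.

import numpy as np

def lesion_model(shape, center, radius, contrast, edge_width, voxel_mm):
    """Voxelize a radially symmetric lesion with a sigmoid edge profile.

    The form C(r) = contrast / (1 + exp((r - radius)/edge_width)) is an
    illustrative stand-in for the analytical models in the dissertation.
    """
    zz, yy, xx = np.indices(shape)
    r = voxel_mm * np.sqrt((zz - center[0]) ** 2 +
                           (yy - center[1]) ** 2 +
                           (xx - center[2]) ** 2)
    return contrast / (1.0 + np.exp((r - radius) / edge_width))

# Hypothetical "hybrid image": add a -15 HU, 8 mm radius lesion to a uniform
# liver-like background volume.
background = np.full((64, 64, 64), 60.0)              # HU
lesion = lesion_model(background.shape, center=(32, 32, 32),
                      radius=8.0, contrast=-15.0, edge_width=1.0, voxel_mm=0.7)
hybrid = background + lesion

print("min/max HU in hybrid volume:", round(float(hybrid.min()), 1),
      round(float(hybrid.max()), 1))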

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance: 90.00%

Publisher:

Abstract:

Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.

This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs have relatively slower speeds when compared to other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.

In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
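The core of P300 target-character estimation, accumulating evidence over repeated noisy measurements and biasing the decision with language information, can be sketched as a Bayesian update over classifier scores. The character set, the language-model prior, the Gaussian score model, and all parameter values are illustrative assumptions, not the methods developed in this work.

import numpy as np

rng = np.random.default_rng(7)

chars = list("ABCDEFGH")
target = "C"

# Hypothetical language-model prior over the next character.
prior = np.array([0.05, 0.10, 0.30, 0.10, 0.20, 0.05, 0.10, 0.10])
log_post = np.log(prior)

# Each flash of a character yields a noisy classifier score that is larger
# when the flashed character is the target (ERP present).
def flash_score(is_target):
    return rng.normal(1.0 if is_target else 0.0, 1.0)

# Evidence accumulates over repeated flash sequences, compensating for the
# low single-trial signal-to-noise ratio of the ERPs.
for _ in range(10):                       # 10 flash sequences
    for idx, c in enumerate(chars):
        s = flash_score(c == target)
        # Log-likelihood ratio of N(1,1) vs N(0,1) score models: s - 0.5.
        log_post[idx] += s - 0.5

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("estimated character:", chars[int(post.argmax())])
print("posterior:", dict(zip(chars, post.round(3))))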

Relevance: 90.00%

Publisher:

Abstract:

We present the first ecosystem-scale methane flux data from a northern Siberian tundra ecosystem covering the entire snow-free period from spring thaw until initial freeze-back. Eddy covariance measurements of methane emission were carried out from the beginning of June until the end of September in the southern central part of the Lena River Delta (72°22' N, 126°30' E). The study site is located in the zone of continuous permafrost and is characterized by Arctic continental climate with very low precipitation and a mean annual temperature of -14.7°C. We found relatively low fluxes of on average 18.7 mg/m**2/d, which we attribute to (1) extremely cold permafrost, (2) substrate limitation of the methanogenic archaea, and (3) a relatively high surface coverage of noninundated, moderately moist areas. Near-surface turbulence, as measured by the eddy covariance system at 4 m above the ground surface, was identified as the most important control on ecosystem-scale methane emission and explained about 60% of the variance in emissions, while soil temperature explained only 8%. In addition, atmospheric pressure was found to significantly improve an exponential model based on turbulence and soil temperature. Ebullition from waterlogged areas triggered by decreasing atmospheric pressure and near-surface turbulence is thought to be an important pathway that warrants more attention in future studies. The close coupling of methane fluxes and atmospheric parameters demonstrated here raises questions regarding the reliability of enclosure-based measurements, which inherently exclude these parameters.
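An exponential flux model driven by near-surface turbulence and soil temperature can be fitted with nonlinear least squares, as sketched below. The functional form, the synthetic half-hourly drivers, and the coefficients are illustrative assumptions, not the model or data of this study.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

# Synthetic half-hourly drivers: friction velocity u* (m/s), soil temperature (°C).
u_star = rng.uniform(0.05, 0.5, size=500)
t_soil = rng.uniform(0.0, 8.0, size=500)

# Hypothetical "true" exponential dependence used to generate example fluxes
# (mg CH4 m-2 d-1) with multiplicative noise.
flux = 5.0 * np.exp(2.5 * u_star + 0.12 * t_soil) * rng.lognormal(0.0, 0.1, 500)

def model(X, a, b, c):
    u, t = X
    return a * np.exp(b * u + c * t)

popt, _ = curve_fit(model, (u_star, t_soil), flux, p0=(1.0, 1.0, 0.1))
pred = model((u_star, t_soil), *popt)
r2 = 1 - np.sum((flux - pred) ** 2) / np.sum((flux - flux.mean()) ** 2)
print(f"fitted a={popt[0]:.2f}, b={popt[1]:.2f}, c={popt[2]:.3f}, R^2={r2:.2f}")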

Relevance: 90.00%

Publisher:

Abstract:

Climatic changes cause alterations in the circulation patterns of the world's oceans. The highly saline Mediterranean Outflow Water (MOW), formed within the Mediterranean Sea, crosses the Strait of Gibraltar westward and turns north-westward to follow the Iberian slope at 600-1500 m water depth. The circulation pattern and current speed of the MOW are strongly influenced by climatically induced variations and thus control sedimentation processes along the southern and western Iberian continental slope. The sedimentation characteristics of the investigated area are therefore suitable for reconstructing temporal hydrodynamic changes of the MOW. Detailed investigations of the silt-sized grain distribution, physical properties, and hydroacoustic data were performed to recalculate paleo-current velocities and to understand the sedimentation history of the Gulf of Cadiz and the Portuguese continental slope. A time model based on δ18O data and 14C dating of planktic foraminifera allowed the stratigraphic classification of the core material and thus the dating of the current-induced sediment layers recording the variations of paleo-current intensities. The evaluation and interpretation of the gathered data sets enabled us to reconstruct lateral and temporal sedimentation patterns of the MOW for the Holocene and the late Pleistocene, back to the Last Glacial Maximum (LGM).
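Relative bottom-current strength is commonly inferred from silt-size distributions via the mean size of the "sortable silt" (10-63 µm) fraction, where a coarser mean indicates faster flow. The sketch below computes that proxy for one sample; the grain-size bins and volume percentages are illustrative, and this is a generic proxy calculation, not necessarily the calibration used in this study.

import numpy as np

# Grain-size spectrum for one sample: bin midpoints (µm) and volume percent.
# Values are illustrative; real data would come from laser particle sizing.
bins_um = np.array([2, 4, 6, 8, 10, 15, 20, 30, 40, 50, 63, 80, 100])
vol_pct = np.array([5, 7, 8, 9, 10, 12, 13, 12, 9, 7, 4, 3, 1], dtype=float)

# Mean size of the 10-63 µm "sortable silt" fraction as a relative
# current-strength proxy.
sel = (bins_um >= 10) & (bins_um <= 63)
ss_mean = np.average(bins_um[sel], weights=vol_pct[sel])
print(f"sortable-silt mean size: {ss_mean:.1f} µm")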

Relevance: 90.00%

Publisher:

Abstract:

Software development guidelines are a set of rules which can help improve the quality of software. These rules are defined on the basis of experience gained by the software development community over time. This paper discusses a set of design guidelines for model-based development of complex real-time embedded software systems. To be precise, we propose nine design conventions, three design patterns and thirteen antipatterns for developing UML-RT models. These guidelines have been identified based on our analysis of around 100 UML-RT models from industry and academia. Most of the guidelines are explained with the help of examples, and standard templates from the current state of the art are used for documenting the design rules.

Relevance: 90.00%

Publisher:

Abstract:

Control of the collective response of plasma particles to intense laser light is intrinsic to relativistic optics, the development of compact laser-driven particle and radiation sources, as well as investigations of some laboratory astrophysics phenomena. We recently demonstrated that a relativistic plasma aperture produced in an ultra-thin foil at the focus of intense laser radiation can induce diffraction, enabling polarization-based control of the collective motion of plasma electrons. Here we show that under these conditions the electron dynamics are mapped into the beam of protons accelerated via strong charge-separation-induced electrostatic fields. It is demonstrated experimentally and numerically via 3D particle-in-cell simulations that the degree of ellipticity of the laser polarization strongly influences the spatial-intensity distribution of the beam of multi-MeV protons. The influence on both sheath-accelerated and radiation pressure-accelerated protons is investigated. This approach opens up a potential new route to control laser-driven ion sources.

Relevance: 90.00%

Publisher:

Abstract:

Understanding the magnetic properties of graphenic nanostructures is instrumental in future spintronics applications. These magnetic properties are known to depend crucially on the presence of defects. Here we review our recent theoretical studies using density functional calculations on two types of defects in carbon nanostructures: substitutional doping with transition metals, and sp$^3$-type defects created by covalent functionalization with organic and inorganic molecules. We focus on such defects because they can be used to create and control magnetism in graphene-based materials. Our main results are summarized as follows: i) Substitutional metal impurities are fully understood using a model based on the hybridization between the $d$ states of the metal atom and the defect levels associated with an unreconstructed D$_{3h}$ carbon vacancy. We identify three different regimes, associated with the occupation of distinct hybridization levels, which determine the magnetic properties obtained with this type of doping; ii) A spin moment of 1.0 $\mu_B$ is always induced by chemical functionalization when a molecule chemisorbs on a graphene layer via a single C-C (or other weakly polar) covalent bond. The magnetic coupling between adsorbates shows a key dependence on the sublattice adsorption site. This effect is similar to that of H adsorption but has a universal character; iii) The spin moment of substitutional metal impurities can be controlled using strain. In particular, we show that although Ni substitutionals are non-magnetic in flat and unstrained graphene, the magnetism of these defects can be activated by applying either uniaxial strain or curvature to the graphene layer. All these results provide key information about the formation and control of defect-induced magnetism in graphene and related materials.