220 results for Speci
Abstract:
In direct-drive Inertial Confinement Fusion (ICF), the typical laser-beam-to-laser-beam angle is around 30°. This fact makes the study of the irradiation symmetry a genuine 3D problem. In this paper we use the three-dimensional version of the MULTI hydrocode to assess the symmetry of such ICF implosions. More specifically, we study a shock-ignition proposal for the Laser-Mégajoule facility (LMJ) in which two of the equatorial beam cones are used to implode and pre-compress a spherical capsule (the "reference" capsule of the HiPER project) made of 0.59 mg of pure Deuterium-Tritium mixture. The symmetry of this scheme is analysed and optimized to obtain a design within the operating limits of LMJ. The studied configuration has been found to be essentially axially symmetric, so that the use of 2D hydrocodes would be appropriate for this specific situation.
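For context, a standard way of quantifying irradiation symmetry in this kind of study (a common metric, not necessarily the exact diagnostic used in the paper) is to expand the absorbed intensity $I(\theta,\varphi)$ on the capsule surface in spherical harmonics and report the relative rms nonuniformity:

$$ a_{\ell m} = \int I(\theta,\varphi)\, Y_{\ell m}^{*}(\theta,\varphi)\, d\Omega, \qquad \sigma_{\mathrm{rms}} = \frac{1}{a_{00}} \sqrt{\sum_{\ell \ge 1} \sum_{m=-\ell}^{\ell} |a_{\ell m}|^{2}}, $$

where the low-order modes are the ones that most affect the implosion symmetry.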
Abstract:
The implementation of boundary conditions is one of the points where the SPH methodology still requires further development. The aim of the present work is to provide an in-depth analysis of the most representative mirroring techniques used in SPH to enforce boundary conditions (BC) along solid profiles. We specifically refer to dummy particles, ghost particles, and Takeda et al. [1] boundary integrals. A Poiseuille flow has been used as an example to gradually evaluate the accuracy of the different implementations. Our goal is to test the behavior of the second-order differential operator with the proposed boundary extensions when the smoothing length h and other discretization parameters, such as dx/h, tend simultaneously to zero. First, using a smoothed continuous approximation of the unidirectional Poiseuille problem, the evolution of the velocity profile has been studied, focusing on the values of the velocity and the viscous shear at the boundaries, where the exact solution should be approached as h decreases. Second, to evaluate the impact of the discretization of the problem, an Eulerian SPH discrete version of the former problem has been implemented and similar results have been monitored. Finally, for the sake of completeness, a 2D Lagrangian SPH implementation of the problem has also been studied to compare the consequences of the particle movement.
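To make the test problem concrete, here is a minimal sketch, written for this listing rather than taken from the paper, of an Eulerian SPH computation of start-up plane Poiseuille flow in which the no-slip walls are enforced by mirrored ghost particles and the viscous term uses a Brookshaw-type second-derivative operator; all parameter values and names are assumptions.

```python
import numpy as np

# Minimal 1D Eulerian SPH sketch of start-up plane Poiseuille flow (illustrative
# only, not the paper's code): du/dt = nu * d2u/dy2 + F with no-slip walls at
# y = 0 and y = L enforced by anti-symmetric ghost particles.
L, nu, F = 1.0, 0.1, 1.0                 # gap width, kinematic viscosity, body force
n, hdx = 50, 1.3                         # number of fluid particles, ratio h/dx
dx = L / n
h = hdx * dx
sigma = 1.0 / (np.sqrt(np.pi) * h)       # 1D Gaussian kernel normalization
y = (np.arange(n) + 0.5) * dx            # fluid particle positions
u = np.zeros(n)
dt = 0.125 * dx**2 / nu

for step in range(20000):
    # Ghost particles: positions mirrored across both walls, velocities negated,
    # so that the interpolated velocity vanishes at the walls (no slip).
    yg = np.concatenate([-y[::-1], y, 2.0 * L - y[::-1]])
    ug = np.concatenate([-u[::-1], u, -u[::-1]])
    r = y[:, None] - yg[None, :]
    q = np.abs(r) / h
    # Brookshaw-type SPH Laplacian: 2 * sum_j V_j (u_i - u_j) * W'(r_ij)/|r_ij|
    kfac = np.where(q < 3.0, -2.0 * sigma * np.exp(-q**2) / h**2, 0.0)
    lap = 2.0 * np.sum((u[:, None] - ug[None, :]) * kfac * dx, axis=1)
    u += dt * (nu * lap + F)

u_exact = F / (2.0 * nu) * y * (L - y)   # steady analytical profile
print("max error vs. exact steady profile:", np.max(np.abs(u - u_exact)))
```

Refining dx while keeping dx/h fixed, or reducing both, is the kind of limit the study above examines for the velocity and the wall shear.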
Abstract:
Canonical test cases for sloshing wave impact problems are presented and discussed. In these cases the experimental setup has been simplified seeking the highest feasible repeatability; a rectangular tank subjected to harmonic roll motion has been the tested configuration. Both lateral and roof impacts have been studied, since both cases are relevant in sloshing assessment and show specific dynamics. An analysis of the impact pressure of the first four impact events is provided in all cases. It has been found that a Gaussian fitting of each individual peak is not feasible in all cases. The tests have been conducted with both water and oil in order to obtain high and moderate Reynolds number data; the latter may be useful as simpler test cases to assess the capabilities of CFD codes in simulating sloshing impacts. The repeatability of impact pressure values increases dramatically when using oil. In addition, a study of the two-dimensionality of the problem using a tank configuration that can be adjusted to four different thicknesses has been carried out. Though the kinematics of the free surface does not change significantly in some of the cases, the impact pressure values of the first impact events change substantially from the small to the large aspect ratios, which means that attention has to be paid to this issue when reference data are used for validation of 2D and 3D CFD codes.
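The Gaussian fitting of individual pressure peaks mentioned above can be illustrated with a short sketch on a synthetic signal (the signal and parameter values below are invented for illustration, not experimental data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative only: fit a Gaussian to a single impact-pressure peak.
def gauss(t, p_max, t0, width):
    return p_max * np.exp(-0.5 * ((t - t0) / width) ** 2)

t = np.linspace(0.0, 0.02, 400)                                   # time [s]
p = gauss(t, 3.0e4, 0.01, 6.0e-4) \
    + np.random.default_rng(0).normal(0.0, 500.0, t.size)         # synthetic pressure [Pa]
popt, _ = curve_fit(gauss, t, p, p0=[p.max(), t[np.argmax(p)], 1e-3])
print(popt)   # fitted peak value, peak time, and peak width
```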
Abstract:
Multiuser multiple-input multiple-output (MIMO) downlink (DL) transmission schemes experience both multiuser interference and inter-antenna interference. The singular value decomposition provides an appropriate means to process channel information and allows us to take the individual user's channel characteristics into account rather than treating all users' channels jointly as in zero-forcing (ZF) multiuser transmission techniques. However, uncorrelated MIMO channels have attracted a lot of attention and the topic has reached a state of maturity. By contrast, the performance analysis in the presence of antenna fading correlation, which decreases the channel capacity, requires substantial further research. The joint optimization of the number of activated MIMO layers and the number of bits per symbol, along with the appropriate allocation of the transmit power, shows that not all user-specific MIMO layers necessarily have to be activated in order to minimize the overall BER under the constraint of a given fixed data throughput.
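The role of the singular value decomposition can be illustrated with a short numerical sketch (a generic single-user example with assumed dimensions, not the scheme analyzed above): pre-processing with V and post-processing with U^H turn the MIMO channel into parallel layers whose gains are the singular values, so that weak layers can be left inactive.

```python
import numpy as np

# Hedged illustration: SVD-based layer decomposition of one user's 4x4
# frequency-flat MIMO channel (dimensions and names are assumptions).
rng = np.random.default_rng(0)
n_tx = n_rx = 4
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2.0)

U, s, Vh = np.linalg.svd(H)                      # H = U diag(s) Vh
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_tx)   # QPSK-like symbols
tx = Vh.conj().T @ x                             # transmit pre-processing with V
rx = U.conj().T @ (H @ tx)                       # receive post-processing with U^H
print(np.allclose(rx, s * x))                    # True: parallel layers with gains s
```

In a multiuser setting the per-layer gains s then drive the decision of how many layers to activate and how many bits and how much power to assign to each.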
Abstract:
In order to comply with the demand for increasing data rates, in particular in wireless technologies, systems with multiple transmit and receive antennas, also called MIMO (multiple-input multiple-output) systems, have become indispensable for future generations of wireless systems. Due to the strongly increasing demand for high-data-rate transmission systems, frequency non-selective MIMO links have reached a state of maturity and frequency-selective MIMO links are now the focus of interest. In this field, the combination of MIMO transmission and OFDM (orthogonal frequency division multiplexing) can be considered an essential part of fulfilling the requirements of future generations of wireless systems. However, while single-user scenarios have reached a state of maturity, multi-user scenarios require substantial further research; in this contribution, and in contrast to ZF (zero-forcing) multiuser transmission techniques, the individual user's channel characteristics are taken into consideration. The performed joint optimization of the number of activated MIMO layers and the number of transmitted bits per subcarrier, along with the appropriate allocation of the transmit power, shows that not all user-specific MIMO layers per subcarrier necessarily have to be activated in order to minimize the overall BER under the constraint of a given fixed data throughput.
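The kind of layer/bit/power allocation referred to above can be sketched, for example, with a greedy margin-adaptive bit-loading rule (a textbook-style illustration under an assumed power model, not the optimization actually carried out in the contribution):

```python
import numpy as np

# Hedged sketch: greedy bit loading over the layer gains of one subcarrier.
# Assumption: carrying b bits on a layer with gain g costs P = gamma * (2**b - 1) / g**2.
def greedy_bit_loading(gains, total_bits, gamma=1.0, max_bits=8):
    bits = np.zeros_like(gains, dtype=int)
    for _ in range(total_bits):
        dP = gamma * (2.0 ** bits) / gains**2      # incremental power of one more bit
        dP[bits >= max_bits] = np.inf
        bits[np.argmin(dP)] += 1                   # load the cheapest layer
    return bits

gains = np.array([2.1, 1.4, 0.9, 0.2])             # e.g. per-layer singular values
print(greedy_bit_loading(gains, total_bits=8))
# weak layers may end up with zero bits, i.e. they are not activated
```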
Abstract:
In spite of the increasing presence of Semantic Web facilities, only a limited number of the resources available on the Internet provide semantic access. Recent initiatives such as the emerging Linked Data Web are providing semantic access to available data by porting existing resources to the Semantic Web using different technologies, such as database-semantic mapping and scraping. Nevertheless, existing scraping solutions are ad hoc and are complemented with graphical interfaces for speeding up scraper development. This article proposes a generic framework for web scraping based on semantic technologies. This framework is structured in three levels: scraping services, the semantic scraping model, and syntactic scraping. The first level provides an interface to generic applications or intelligent agents for gathering information from the web at a high level. The second level defines a semantic RDF model of the scraping process, in order to provide a declarative approach to the scraping task. Finally, the third level provides an implementation of the RDF scraping model for specific technologies. The work has been validated in a scenario that illustrates its application to mashup technologies.
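A minimal sketch of what a declarative RDF description of a scraping task could look like is given below; the vocabulary (namespace, selector, mapsTo) is hypothetical and only meant to convey the idea of the semantic scraping model, not the framework's actual schema.

```python
from rdflib import Graph, Literal, Namespace, URIRef

# Hypothetical vocabulary for illustration only; the article's actual RDF
# scraping model may differ.
SC = Namespace("http://example.org/scraping#")
g = Graph()
g.bind("sc", SC)

fragment = URIRef("http://example.org/scraping#newsHeadline")
g.add((fragment, SC.selector, Literal("div.article > h1")))   # syntactic level
g.add((fragment, SC.mapsTo, SC.headline))                      # semantic mapping
print(g.serialize(format="turtle"))
```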
Abstract:
The research in this thesis is related to static cost and termination analysis. Cost analysis aims at estimating the amount of resources that a given program consumes during its execution, and termination analysis aims at proving that the execution of a given program will eventually terminate. These analyses are strongly related; indeed, cost analysis techniques heavily rely on techniques developed for termination analysis. Precision, scalability, and applicability are essential in static analysis in general. Precision is related to the quality of the inferred results, scalability to the size of programs that can be analyzed, and applicability to the class of programs that can be handled by the analysis (independently of precision and scalability issues). This thesis addresses these aspects in the context of cost and termination analysis, from both practical and theoretical perspectives. For cost analysis, we concentrate on the problem of solving cost relations (a form of recurrence relations) into closed-form upper and lower bounds, which is the heart of most modern cost analyzers, and also where most of the precision and applicability limitations can be found. We develop tools, and their underlying theoretical foundations, for solving cost relations that overcome the limitations of existing approaches, and demonstrate superiority in both precision and applicability. A unique feature of our techniques is the ability to smoothly handle both lower and upper bounds, by reversing the corresponding notions in the underlying theory. For termination analysis, we study the hardness of the problem of deciding termination for a specific form of simple loops that arise in the context of cost analysis. This study gives a better understanding of the (theoretical) limits of scalability and applicability for both termination and cost analysis.
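As a concrete generic example of the kind of object involved, not taken from the thesis: a loop whose body performs $i$ units of work on its $i$-th iteration gives rise to the cost relation $C(n) = C(n-1) + n$ with $C(0) = 0$, whose closed form immediately yields matching upper and lower bounds:

$$ C(n) = \sum_{i=1}^{n} i = \frac{n(n+1)}{2}, \qquad \frac{n^{2}}{2} \;\le\; C(n) \;\le\; n^{2} \quad (n \ge 1). $$

Cost relations extracted from real programs are typically non-deterministic and involve several variables, which is what makes obtaining such closed forms difficult.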
Abstract:
We experimentally demonstrate a sigmoidal variation of the composition profile across semiconductor heterointerfaces. The wide range of material systems (III-arsenides, III-antimonides, III-V quaternary compounds, III-nitrides) exhibiting such a profile suggests a universal behavior. We show that sigmoidal profiles emerge from a simple model of cooperative growth mediated by two-dimensional island formation, wherein cooperative effects are described by a specific functional dependence of the sticking coefficient on the surface coverage. Experimental results confirm that, except in the very early stages, island growth prevails over nucleation as the mechanism governing the interface development and ultimately determines the sigmoidal shape of the chemical profile in these two-dimensionally grown layers. In agreement with our experimental findings, the model also predicts a minimum value of the interfacial width, with the minimum attainable value depending on the chemical identity of the species.
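For reference, a sigmoidal composition profile across an interface located at $z_0$ is commonly parameterized by a logistic function of the growth direction $z$ (a generic form used here only to fix ideas; the paper's fitting function may differ):

$$ x(z) = \frac{x_{\max}}{1 + e^{-(z - z_{0})/w}}, $$

where the parameter $w$ measures the interfacial width whose minimum attainable value the model predicts.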
Abstract:
Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), determining their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a type of this technique with the appealing variable-selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA has a logarithmic scale with respect to the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that the ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small-to-medium dimensionality, using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
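The flavor of regularized model estimation within an EDA can be conveyed with a small sketch (a simplified illustration with assumed settings, using scikit-learn's graphical lasso; the thesis's own algorithms and experimental setup differ):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# One EDA generation with l1-regularized estimation of a Gaussian model
# (illustrative sketch; population size grows logarithmically with dimension).
def eda_generation(pop, fitness, rng, alpha=0.2, trunc=0.5):
    sel = pop[np.argsort(fitness)[: int(trunc * len(pop))]]   # truncation selection
    model = GraphicalLasso(alpha=alpha).fit(sel)              # sparse Gaussian estimate
    return rng.multivariate_normal(sel.mean(axis=0), model.covariance_, size=len(pop))

rng = np.random.default_rng(1)
d = 20
pop = rng.normal(size=(int(15 * np.log(d)), d))               # logarithmic population size
for _ in range(10):
    fitness = np.sum(pop**2, axis=1)                          # sphere function (minimize)
    pop = eda_generation(pop, fitness, rng)
print("best fitness:", np.min(np.sum(pop**2, axis=1)))
```

The regularization keeps the estimated covariance well conditioned even though the selected sample is barely larger than the number of variables.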
Abstract:
Through progress in medical imaging, image analysis and finite element (FE) meshing tools, it is now possible to extract patient-specific geometries from medical images of abdominal aortic aneurysms (AAAs), and thus to study clinically relevant problems via FE simulations. Such simulations allow additional insight into human physiology in both healthy and diseased states. Medical imaging is most often performed in vivo, and hence the reconstructed model geometry in the problem of interest will represent the in vivo state, e.g., the AAA at physiological blood pressure. However, classical continuum mechanics and FE methods assume that constitutive models and the corresponding simulations begin from an unloaded, stress-free reference condition.
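One generic way to reconcile this mismatch, mentioned here purely as background and not necessarily the approach taken in the work above, is a backward-displacement fixed-point iteration that searches for the unloaded geometry which, when pressurized, reproduces the imaged one; forward_solve below stands for a hypothetical FE solver.

```python
import numpy as np

# Hedged sketch of the backward-displacement fixed-point idea for recovering an
# unloaded reference geometry from an in vivo (pressurized) one. Not from the
# text above; `forward_solve(x_ref, p)` is a hypothetical FE solver returning
# the deformed nodal coordinates of geometry x_ref under pressure p.
def recover_unloaded(x_invivo, p, forward_solve, tol=1e-6, max_iter=50):
    x_ref = x_invivo.copy()                      # initial guess: in vivo geometry
    for _ in range(max_iter):
        x_def = forward_solve(x_ref, p)          # inflate the candidate reference
        update = x_invivo - x_def                # mismatch w.r.t. imaged geometry
        x_ref += update
        if np.linalg.norm(update) < tol * np.linalg.norm(x_invivo):
            break
    return x_ref
```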
Abstract:
The aim of automatic pathological voice detection systems is to serve as tools for medical specialists, enabling a more objective, less invasive and improved diagnosis of diseases. In this respect, the gold standard for such systems includes the use of an optimized representation of the spectral envelope, either based on cepstral coefficients from the mel-scaled Fourier spectral envelope (Mel-Frequency Cepstral Coefficients) or from an all-pole estimation (Linear Prediction Coding Cepstral Coefficients) for characterization, and Gaussian Mixture Models for posterior classification. However, the study of recently proposed GMM-based classifiers, as well as nuisance mitigation techniques such as those employed in speaker recognition, has not been widely considered in pathology detection tasks. The present work aims at testing whether or not the employment of such speaker recognition tools might contribute to improving performance in pathology detection systems, specifically in the automatic detection of Obstructive Sleep Apnea. The testing procedure employs an Obstructive Sleep Apnea database in conjunction with GMM-based classifiers, looking for a better performance. The results show that an improved performance might be obtained by using such an approach.
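A baseline of the kind described (MFCC front-end plus per-class GMM back-end) can be sketched as follows; file paths, sampling rate and model sizes are assumptions for illustration, not the configuration used in the work.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

# Illustrative MFCC + GMM baseline (settings are assumptions, not the paper's protocol).
def mfcc_features(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # frames x coefficients

def train_gmm(feature_list, n_components=16):
    return GaussianMixture(n_components=n_components, covariance_type="diag").fit(
        np.vstack(feature_list))

def classify(path, gmm_control, gmm_apnea):
    feats = mfcc_features(path)
    # average per-frame log-likelihood under each class model decides the label
    return "apnea" if gmm_apnea.score(feats) > gmm_control.score(feats) else "control"
```

Here gmm_apnea would be trained on features from apnea recordings and gmm_control on the remaining recordings.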
Abstract:
The singularities in Dromo are characterized in this paper, both from an analytical and a numerical perspective. When the angular momentum vanishes, Dromo may encounter a singularity in the evolution equations. The cancellation of the angular momentum occurs in very specific situations and may be caused by the action of strong perturbations. The gravitational attraction of a perturbing planet may lead to rapid changes in the angular momentum of the particle. In practice, this situation may be encountered during deep planetocentric flybys. The performance of Dromo is evaluated in different scenarios. First, Dromo is validated for integrating the orbits of Near Earth Asteroids. The resulting errors are of the order of the diameter of the asteroid. Second, a set of theoretical flybys is designed for analyzing the performance of the formulation in the vicinity of the singularity. New sets of Dromo variables are proposed in order to minimize the dependency of Dromo on the angular momentum. A slower time scale is introduced, leading to a more stable description of the flyby phase. Improvements in the overall performance of the algorithm are observed when integrating orbits close to the singularity.
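For background (standard material on the formulation rather than a result of this paper, and stated up to the formulation's conventions): Dromo propagates a set of orbital elements using an angular independent variable $\sigma$ defined through

$$ \frac{d\sigma}{dt} = \frac{h}{r^{2}}, $$

so that when the angular momentum $h$ tends to zero both this time transformation and the elements that scale with $1/h$ degenerate, which is the singularity characterized above.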
Abstract:
This study investigates decision making in mental health care. Specifically, it compares the diagnostic decision outcomes (i.e., the quality of diagnoses) and the diagnostic decision process (i.e., pre-decisional information acquisition patterns) of novice and experienced clinical psychologists. Participants' eye movements were recorded while they completed diagnostic tasks, classifying mental disorders. In line with previous research, our findings indicate that diagnosticians' performance is not related to their clinical experience. Eye-tracking data provide corroborative evidence for this result from the process perspective: experience does not predict changes in cue inspection patterns. For future research into expertise in this domain, it is advisable to track individual differences between clinicians rather than study differences at the group level.
Abstract:
The purpose of this paper was to evaluate the psychometric properties of a stage-specific self-efficacy scale for physical activity with classical test theory (CTT), confirmatory factor analysis (CFA) and item response modeling (IRM). Women who enrolled in the Women On The Move study completed a 20-item stage-specific self-efficacy scale developed for this study [n = 226, 51.1% African-American and 48.9% Hispanic women, mean age = 49.2 (±7.0) years, mean body mass index = 29.7 (±6.4)]. Three analyses were conducted: (i) a CTT item analysis, (ii) a CFA to validate the factor structure and (iii) an IRM analysis. The CTT item analysis and the CFA results showed that the scale had high internal consistency (ranging from 0.76 to 0.93) and a strong factor structure. Results also showed that the scale could be improved by modifying or eliminating some of the existing items without significantly altering the content of the scale. The IRM results also showed that the scale had few items that targeted high self-efficacy, and the stage-specific assumption underlying the scale was rejected. In addition, the IRM analyses found that the five-point response format functioned more like a four-point response format. Overall, employing multiple methods to assess the psychometric properties of the stage-specific self-efficacy scale demonstrated the complementary nature of these methods and highlighted the strengths and weaknesses of this scale.
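For readers unfamiliar with the internal-consistency values quoted above, the sketch below shows how a Cronbach's alpha of that kind is computed from an item-response matrix (the generic formula applied to toy data, not code or data from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)

responses = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]])   # toy data
print(cronbach_alpha(responses))
```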