948 results for semi-empirical methods
Abstract:
Since core-collapse supernova simulations still struggle to produce robust neutrino-driven explosions in 3D, it has been proposed that asphericities caused by convection in the progenitor might facilitate shock revival by boosting the activity of non-radial hydrodynamic instabilities in the post-shock region. We investigate this scenario in depth using 42 relativistic 2D simulations with multigroup neutrino transport to examine the effects of velocity and density perturbations in the progenitor for different perturbation geometries that obey fundamental physical constraints (like the anelastic condition). As a framework for analysing our results, we introduce semi-empirical scaling laws relating neutrino heating, average turbulent velocities in the gain region, and the shock deformation in the saturation limit of non-radial instabilities. The squared turbulent Mach number, ⟨Ma²⟩, reflects the violence of aspherical motions in the gain layer, and explosive runaway occurs for ⟨Ma²⟩ ≳ 0.3, corresponding to a reduction of the critical neutrino luminosity by ∼25 per cent compared to 1D. In the light of this theory, progenitor asphericities aid shock revival mainly by creating anisotropic mass flux on to the shock: differential infall efficiently converts velocity perturbations in the progenitor into density perturbations δρ/ρ at the shock of the order of the initial convective Mach number Ma_prog. The anisotropic mass flux and ram pressure deform the shock and thereby amplify post-shock turbulence. Large-scale (ℓ = 2, ℓ = 1) modes prove most conducive to shock revival, whereas small-scale perturbations require unrealistically high convective Mach numbers. Initial density perturbations in the progenitor are only of the order of Ma²_prog and therefore play a subdominant role.
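The scaling relations summarised above can be sketched as a toy calculation. Only the ⟨Ma²⟩ ≳ 0.3 runaway threshold, the ∼25 per cent luminosity reduction, and the δρ/ρ ∼ Ma_prog conversion come from the abstract; the function names and the strict equality used for the conversion are illustrative assumptions:

```python
# Hedged sketch of the semi-empirical scaling laws quoted in the abstract.
# Coefficients 0.3 and 0.25 are taken from the text; everything else is
# an illustrative parametrisation, not the authors' actual model.

MA2_RUNAWAY = 0.3  # squared turbulent Mach number at explosive runaway

def density_perturbation_at_shock(ma_prog: float) -> float:
    """Differential infall converts progenitor velocity perturbations of
    convective Mach number Ma_prog into density perturbations delta-rho/rho
    of the same order at the shock (order-of-magnitude estimate)."""
    return ma_prog

def runaway(ma2_turb: float) -> bool:
    """Explosive-runaway criterion <Ma^2> >~ 0.3 from the abstract."""
    return ma2_turb >= MA2_RUNAWAY

def critical_luminosity_ratio_multi_d_to_1d() -> float:
    """~25 per cent reduction of the critical neutrino luminosity
    relative to 1D quoted in the abstract."""
    return 1.0 - 0.25
```
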
Abstract:
Community-driven Question Answering (CQA) systems crowdsource experiential information in the form of questions and answers and have accumulated valuable reusable knowledge. Clustering of QA datasets from CQA systems provides a means of organizing the content to ease tasks such as manual curation and tagging. In this paper, we present a clustering method that exploits the two-part question-answer structure in QA datasets to improve clustering quality. Our method, MixKMeans, composes question and answer space similarities in a way that the space on which the match is higher is allowed to dominate. This construction is motivated by our observation that semantic similarity between question-answer pairs (QAs) could get localized in either space. We empirically evaluate our method on a variety of real-world labeled datasets. Our results indicate that our method significantly outperforms state-of-the-art clustering methods for the task of clustering question-answer archives.
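The composition rule described above (the space with the higher match dominates) can be sketched with a power mean. This is an illustrative stand-in, not the paper's exact MixKMeans formulation, and the exponent p is an assumed parameter:

```python
def composed_similarity(sim_q: float, sim_a: float, p: float = 4.0) -> float:
    """Hedged sketch of a composition in which the space with the higher
    match dominates: a power mean with p > 1 pulls the result toward the
    larger of the two similarities (p -> infinity approaches max).
    The exact MixKMeans composition is defined in the paper; p = 4 here
    is purely illustrative."""
    return ((sim_q ** p + sim_a ** p) / 2.0) ** (1.0 / p)
```

With sim_q = 0.9 and sim_a = 0.1 the result sits well above the arithmetic mean 0.5, reflecting the dominance of the better-matching space.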
Abstract:
Numerous studies of the dual-mode scramjet isolator, a critical component in preventing inlet unstart and/or vehicle loss by containing a collection of flow disturbances called a shock train, have been performed since the dual-mode propulsion cycle was introduced in the 1960s. Low-momentum corner flow and other three-dimensional effects inherent to rectangular isolators have, however, been largely ignored in experimental studies of the boundary-layer-separation-driven isolator shock train dynamics. Furthermore, the two-dimensional diagnostic techniques used in past works, be they single-perspective line-of-sight schlieren/shadowgraphy or single-axis wall pressure measurements, have been unable to resolve the three-dimensional flow features inside the rectangular isolator. These flow characteristics need to be thoroughly understood if robust dual-mode scramjet designs are to be fielded. The work presented in this thesis is focused on experimentally analyzing shock train/boundary layer interactions from multiple perspectives in aspect ratio 1.0, 3.0, and 6.0 rectangular isolators with inflow Mach numbers ranging from 2.4 to 2.7. Secondary steady-state Computational Fluid Dynamics studies are performed for comparison with the experimental results and to provide additional perspectives on the flow field. Specific issues addressed in this work that remain unresolved after decades of isolator shock train studies include the three-dimensional formation of the isolator shock train front, the spatial and temporal low-momentum corner flow separation scales, the transient behavior of shock train/boundary layer interaction at specific coordinates along the isolator's lateral axis, and the effects of the rectangular geometry on semi-empirical relations for shock train length prediction. A novel multiplane shadowgraph technique is developed to resolve the structure of the shock train along both the minor and major duct axes simultaneously.
It is shown that the shock train front is of a hybrid oblique/normal nature. Initial low-momentum corner flow separation spawns the formation of oblique shock planes which interact and proceed toward the center flow region, becoming more normal in the process. The hybrid structure becomes more two-dimensional as aspect ratio is increased, but corner flow separation precedes center flow separation by on the order of 1 duct height for all aspect ratios considered. Additional instantaneous oil flow surface visualization shows the symmetry of the three-dimensional shock train front around the lower wall centerline. Quantitative synthetic schlieren visualization shows that the density gradient magnitude approximately doubles between the corner oblique and center flow normal structures. Fast-response pressure measurements acquired near the corner region of the duct show preliminary separation in the outer regions preceding centerline separation by on the order of 2 seconds. Non-intrusive Focusing Schlieren Deflectometry Velocimeter measurements reveal that both shock train oscillation frequency and velocity component decrease as measurements are taken away from the centerline and towards the side-wall region, and they confirm the more two-dimensional shock train front approximation for higher aspect ratios. An updated modification to Waltrup & Billig's original semi-empirical shock train length relation for circular ducts, based on centerline pressure measurements, is introduced to account for rectangular isolator aspect ratio, upstream corner separation length scale, and major- and minor-axis boundary layer momentum thickness asymmetry. The latter is derived both experimentally and computationally, and it is shown that the major-axis (side-wall) boundary layer has lower momentum thickness compared to the minor-axis (nozzle-bounded) boundary layer, making it more separable.
Furthermore, it is shown that the updated correlation drastically improves shock train length prediction capabilities in higher aspect ratio isolators. This thesis suggests that performance analysis of rectangular confined supersonic flow fields can no longer be based on observations and measurements obtained along a single axis alone. Knowledge gained by the work performed in this study will allow for the development of more robust shock train leading edge detection techniques and isolator designs which can greatly mitigate the risk of inlet unstart and/or vehicle loss in flight.
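For context, the original Waltrup & Billig correlation that the thesis modifies is often quoted in the form sketched below. This is reproduced from memory and hedged; the coefficients, exponents, and the grouping of D and θ should be checked against the original 1973 paper before use:

```python
def waltrup_billig_length(pr: float, M1: float, D: float,
                          theta: float, Re_theta: float) -> float:
    """Hedged sketch of one commonly quoted form of Waltrup & Billig's
    semi-empirical shock-train length correlation for circular ducts:

        s (M1^2 - 1) Re_theta^0.25 / (D^0.5 theta^0.5)
            = 50 (pr - 1) + 170 (pr - 1)^2

    where pr = p/p1 is the pressure ratio across the shock train, D the
    duct diameter, theta the boundary-layer momentum thickness and
    Re_theta the momentum-thickness Reynolds number. Solved here for the
    shock-train length s; verify the exact form against the source."""
    rhs = 50.0 * (pr - 1.0) + 170.0 * (pr - 1.0) ** 2
    return rhs * (D ** 0.5) * (theta ** 0.5) / ((M1 ** 2 - 1.0) * Re_theta ** 0.25)
```

The quadratic right-hand side makes the predicted length grow rapidly with back-pressure ratio, which is the behaviour the thesis's aspect-ratio-aware modification builds on.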
Abstract:
A two stage approach to performing ab initio calculations on medium and large sized molecules is described. The first step is to perform SCF calculations on small molecules or molecular fragments using the OPIT Program. This employs a small basis set of spherical and p-type Gaussian functions. The Gaussian functions can be identified very closely with atomic cores, bond pairs, lone pairs, etc. The position and exponent of any of the Gaussian functions can be varied by OPIT to produce a small but fully optimised basis set. The second stage is the molecular fragments method. As an example of this, Gaussian exponents and distances are taken from an OPIT calculation on ethylene and used unchanged in a single SCF calculation on benzene. Approximate ab initio calculations of this type give much useful information and are often preferable to semi-empirical approaches, since the nature of the approximations involved is much better defined.
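The exponent optimisation OPIT performs can be illustrated with a toy variational calculation: a single normalised s-type Gaussian for the hydrogen atom whose exponent is tuned to minimise the energy. This is a textbook exercise under stated assumptions, not OPIT's actual algorithm:

```python
import math

def energy(alpha: float) -> float:
    """Variational energy (hartree) of a hydrogen atom described by a
    single normalised s-type Gaussian exp(-alpha r^2): kinetic term
    3*alpha/2 plus nuclear attraction -2*sqrt(2*alpha/pi)."""
    return 1.5 * alpha - 2.0 * math.sqrt(2.0 * alpha / math.pi)

def optimise_exponent(lo: float = 0.01, hi: float = 2.0,
                      iters: int = 200) -> float:
    """Golden-section search for the energy-minimising exponent,
    mimicking (in toy form) the exponent optimisation OPIT performs."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(iters):
        a = hi - g * (hi - lo)
        b = lo + g * (hi - lo)
        if energy(a) < energy(b):
            hi = b
        else:
            lo = a
    return 0.5 * (lo + hi)
```

The analytic optimum is α = 8/(9π) ≈ 0.283 with E = −4/(3π) ≈ −0.424 hartree, illustrating how even one fully optimised Gaussian recovers a large share of the exact −0.5 hartree.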
Abstract:
Rigid adherence to pre-specified thresholds and static graphical representations can lead to incorrect decisions on merging of clusters. As an alternative to existing automated or semi-automated methods, we developed a visual analytics approach for performing hierarchical clustering analysis of short time-series gene expression data. Dynamic sliders control parameters such as the similarity threshold at which clusters are merged and the level of relative intra-cluster distinctiveness, which can be used to identify "weak-edges" within clusters. An expert user can drill down to further explore the dendrogram and detect nested clusters and outliers. This is done by using the sliders and by pointing and clicking on the representation to cut the branches of the tree at multiple heights. A prototype of this tool has been developed in collaboration with a small group of biologists for analysing their own datasets. Initial feedback on the tool has been positive.
Abstract:
Choosing a single similarity threshold for cutting dendrograms is not sufficient for performing hierarchical clustering analysis of heterogeneous data sets. In addition, alternative automated or semi-automated methods that cut dendrograms at multiple levels make assumptions about the data at hand. In an attempt to help the user find patterns in the data and resolve ambiguities in cluster assignments, we developed MLCut: a tool that provides visual support for exploring dendrograms of heterogeneous data sets at different levels of detail. The interactive exploration of the dendrogram is coordinated with a representation of the original data, shown as parallel coordinates. The tool supports three analysis steps. Firstly, a single-height similarity threshold can be applied using a dynamic slider to identify the main clusters. Secondly, a distinctiveness threshold can be applied using a second dynamic slider to identify “weak-edges” that indicate heterogeneity within clusters. Thirdly, the user can drill down to further explore the dendrogram structure - always in relation to the original data - and cut the branches of the tree at multiple levels. Interactive drill-down is supported using mouse events such as hovering, pointing and clicking on elements of the dendrogram. Two prototypes of this tool have been developed in collaboration with a group of biologists for analysing their own data sets. We found that enabling the users to cut the tree at multiple levels, while viewing the effect in the original data, is a promising method for clustering which could lead to scientific discoveries.
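The single-height cut controlled by the first slider can be sketched on a SciPy-style merge list. This is an illustrative union-find implementation of the idea, not MLCut's code:

```python
def cut_dendrogram(merges, n_leaves, threshold):
    """Sketch of a single-height dendrogram cut: given agglomerative
    merges as (i, j, distance) triples in increasing-distance order,
    where i and j index either leaves 0..n_leaves-1 or previously formed
    clusters n_leaves+k (the SciPy linkage convention), apply only the
    merges below the slider threshold and return one flat cluster label
    per leaf."""
    parent = list(range(n_leaves + len(merges)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for k, (i, j, dist) in enumerate(merges):
        if dist <= threshold:              # slider position: cut height
            new = n_leaves + k
            parent[find(i)] = new
            parent[find(j)] = new

    roots, labels = {}, []
    for leaf in range(n_leaves):
        r = find(leaf)
        labels.append(roots.setdefault(r, len(roots)))
    return labels
```

Moving the threshold between 0.05, 0.5 and 1.0 on a small four-leaf tree yields four, two, or one cluster respectively, which is exactly the interaction the dynamic slider exposes.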
Abstract:
Time perception is studied with subjective or semi-objective psychophysical methods. With subjective methods, observers provide quantitative estimates of duration and data depict the psychophysical function relating subjective duration to objective duration. With semi-objective methods, observers provide categorical or comparative judgments of duration and data depict the psychometric function relating the probability of a certain judgment to objective duration. Both approaches are used to study whether subjective and objective time run at the same pace or whether time flies or slows down under certain conditions. We analyze theoretical aspects affecting the interpretation of data gathered with the most widely used semi-objective methods, including single-presentation and paired-comparison methods. For this purpose, a formal model of psychophysical performance is used in which subjective duration is represented via a psychophysical function and the scalar property. This provides the timing component of the model, which is invariant across methods. A decisional component that varies across methods reflects how observers use subjective durations to make judgments and give the responses requested under each method. Application of the model shows that psychometric functions in single-presentation methods are uninterpretable because the various influences on observed performance are inextricably confounded in the data. In contrast, data gathered with paired-comparison methods permit separating out those influences. Prevalent approaches to fitting psychometric functions to data are also discussed and shown to be inconsistent with widely accepted principles of time perception, implicitly assuming instead that subjective time equals objective time and that observed differences across conditions do not reflect differences in perceived duration but criterion shifts. 
These analyses prompt evidence-based recommendations for best methodological practice in studies on time perception.
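The timing-plus-decision model described above can be sketched for the paired-comparison case. Hedged: only the scalar property and the probit form of the psychometric function follow the text; the linear psychophysical function and the parameter values are illustrative assumptions:

```python
import math

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_second_judged_longer(t1: float, t2: float,
                           k: float = 1.0, gamma: float = 0.2) -> float:
    """Toy timing component: subjective durations are normal with mean
    k*t (a linear psychophysical function, assumed here) and standard
    deviation gamma * mean (the scalar property). The decisional
    component for a paired comparison is the rule 'respond second-longer
    when the second subjective duration exceeds the first', which yields
    a probit psychometric function of the duration difference."""
    mu1, mu2 = k * t1, k * t2
    s = math.sqrt((gamma * mu1) ** 2 + (gamma * mu2) ** 2)
    return phi((mu2 - mu1) / s)
```

Because timing and decision components are separate terms here, fitting this function to paired-comparison data can in principle separate the two influences, which is the abstract's argument for preferring such methods over single-presentation ones.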
Abstract:
With the increasing importance given to building rehabilitation comes the need to create simple, fast and non-destructive testing (NDT) methods to identify problems and diagnose anomalies. Ceramic tiles are one of the most typical kinds of exterior wall cladding in several countries; the earliest known examples are Egyptian, dating from 4000 BC. This type of building facade coating, though often used due to its aesthetic and architectural characteristics, is one of the most complex that can be applied, given the several parts from which it is composed; hence, it is also one of the most difficult to diagnose correctly with expeditious methods. The detachment of ceramic wall tiles is probably the most common and hardest-to-identify anomaly associated with this kind of cladding, and it is also the one that can most compromise safety. Thus, it is necessary to find an inspection process more efficient and economical than those currently used, which often consist of semi-destructive methods (the most common being the pull-off test) that can only be applied to a small part of the building at a time, allowing only assumptions about what the rest of the cladding may be like. Infrared thermography (IRT) is an NDT method with a wide variety of applications in building inspection that is becoming commonly used to identify anomalies related to thermal variations in the inspected surfaces. A few authors have studied the application of IRT to anomalies associated with ceramic claddings, claiming that the presence of air or water beneath the superficial layer influences the heat transfer in a way that can be detected both qualitatively and quantitatively by the thermal camera, providing information about the state of the wall over a much broader area per trial than the methods commonly used nowadays.
This article intends to present a review of the state of the art of this NDT method and its potential to become a more efficient way to diagnose anomalies in ceramic wall claddings.
Abstract:
Soft robots are robots made mostly or completely of soft, deformable, or compliant materials. As humanoid robotic technology takes on a wider range of applications, it has become apparent that such robots could replace humans in dangerous environments. Current attempts to create robotic hands for these environments are very difficult and costly to manufacture. Therefore, a robotic hand with a simple architecture and cheap fabrication techniques is needed. The goal of this thesis is to detail the design, fabrication, modeling, and testing of the SUR Hand. The SUR Hand is a soft, underactuated robotic hand designed to be cheaper and easier to manufacture than conventional hands, yet it maintains much of their dexterity and precision. This thesis details the design process for the soft pneumatic fingers, compliant palm, and flexible wrist. It also discusses a semi-empirical model for finger design and the creation and validation of grasping models.
Abstract:
Following a biography of the Austrian educator and psychologist Elsa Köhler (1879-1940), this article describes her pioneering contributions to the foundations of empirical educational research. As a teacher, she strove early on to incorporate pupils' developmental level into didactics, in the sense of developing differentiated teaching approaches. At the Psychological Institute of the University of Vienna she learned from Karl Bühler the quantitative and qualitative observation and protocol techniques designed for longitudinal single-case analyses of the development of children and adolescents, and she was the first to extend these methods to the pedagogical situation in the classroom, to groups of pupils, and to the analysis of the development of whole school classes. She contributed substantially to the adoption of empirical research methods in the progressive-education approaches of the 1920s and 1930s, and she made the developmental analyses she carried out in the pedagogical situation fruitful for developmental counselling aimed at improving pupils' self-regulation. Elsa Köhler combined basic research with a strong applied orientation in the classical areas of developmental psychology of childhood and adolescence, as well as in the areas of educational psychology and pedagogy that are today subsumed under educational research. Engaging with her work is of historical significance for the field and can also provide impulses for modern, interdisciplinary educational research. (DIPF/Orig.)
Abstract:
The dissipation of triadimefon, {1-(4-chlorophenoxy)-3,3-dimethyl-1-(1H-1,2,4-triazol-1-yl)butanone}, was studied after its application to melon leaves, glass and paper, both in greenhouse and field conditions. The dissipation rate of triadimefon in its commercial formulation Bayleton 5 was found to be lower in the greenhouse than in the field. The results for different samples under the same conditions show that the dissipation of triadimefon was biphasic. This result can be accounted for by a semi-empirical model which assumes an initial fast decline of the dissipation rate, attributed to an exponential decay of the volatilization rates, followed by a second phase where the dissipation is due to a first-order degradation process.
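The biphasic behaviour described can be sketched as a two-term exponential: a fast volatilisation component plus a slower first-order degradation component. Parameter names and values are illustrative, not the paper's fitted constants:

```python
import math

def residue(t: float, c0: float, f_vol: float,
            k_vol: float, k_deg: float) -> float:
    """Hedged sketch of a biphasic semi-empirical dissipation model:
    a fraction f_vol of the initial deposit c0 is lost through a fast,
    exponentially decaying volatilisation phase (rate k_vol), while the
    remaining fraction follows slower first-order degradation (rate
    k_deg). All parameters are illustrative placeholders."""
    return c0 * (f_vol * math.exp(-k_vol * t)
                 + (1.0 - f_vol) * math.exp(-k_deg * t))
```

With k_vol much larger than k_deg the curve shows the fast initial decline followed by the slower first-order tail that the abstract describes.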
Abstract:
Left ventricular ejection fraction is an excellent marker of cardiac function. Several techniques, invasive or not, are used to calculate it: angiography, echocardiography, cardiac magnetic resonance imaging, cardiac CT, radionuclide ventriculography, and myocardial perfusion imaging in nuclear medicine. More than 40 years of scientific publications praise radionuclide ventriculography for its speed of execution, availability, low cost, and intra-observer and inter-observer reproducibility. Left ventricular ejection fraction was calculated twice in 47 patients, by two technologists, on two separate acquisitions, using three methods: manual, automatic, and semi-automatic. Overall, the automatic and semi-automatic methods show better reproducibility, a smaller standard error of measurement, and a smaller minimal detectable difference. The manual method, for its part, yields a result that is systematically and significantly lower than the other two methods. It is the only technique that showed a significant difference in the intra-observer analysis. Its standard error of measurement is 40 to 50 per cent larger than with the other techniques, as is its minimal detectable difference. Although all three methods are excellent, reproducible techniques for assessing left ventricular ejection fraction, the reliability estimates of the automatic and semi-automatic methods are superior to those of the manual method.
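The reliability statistics compared in this abstract, the standard error of measurement (SEM) and the minimal detectable difference, follow standard formulas that can be sketched as below. The ICC-based form of the SEM is one common convention, assumed here:

```python
import math

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement from the between-subject standard
    deviation and a test-retest reliability coefficient (e.g. an ICC):
    SEM = SD * sqrt(1 - ICC). One common convention, assumed here."""
    return sd * math.sqrt(1.0 - icc)

def mdd(sd: float, icc: float, z: float = 1.96) -> float:
    """Minimal detectable difference at ~95 per cent confidence:
    MDD = z * sqrt(2) * SEM (the sqrt(2) reflects two measurements)."""
    return z * math.sqrt(2.0) * sem(sd, icc)
```

A 40 to 50 per cent larger SEM, as reported for the manual method, propagates directly into a proportionally larger minimal detectable difference, since MDD is a fixed multiple of SEM.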
Abstract:
Several modern-day cooling applications require the incorporation of mini/micro-channel shear-driven flow condensers, and several design challenges must be overcome to meet those requirements. The difficulty in developing effective design tools for shear-driven flow condensers is exacerbated by the lack of a bridge between physics-based modelling of condensing flows and the current, popular approach based on semi-empirical heat transfer correlations. One of the primary contributors to this disconnect is that typical heat transfer correlations eliminate the dependence of the heat transfer coefficient on the method of cooling employed on the condenser surface, when that may very well not be justified. This is in direct contrast to direct physics-based modeling approaches, where the thermal boundary conditions have a direct and substantial impact on the heat transfer coefficient values. Typical heat transfer correlations instead introduce vapor quality as one of the variables on which the heat transfer coefficient depends. This study shows how, under certain conditions, a heat transfer correlation from direct physics-based modeling can be equivalent to typical engineering heat transfer correlations without making the same a priori assumptions. Another factor that raises doubts about the validity of heat-transfer correlations is the opacity associated with the application of flow regime maps for internal condensing flows. It is well known that flow regimes strongly influence heat transfer rates. However, several heat transfer correlations ignore flow regimes entirely and present a single heat transfer correlation for all flow regimes. This is believed to be inaccurate, since one would expect significant differences in the heat transfer correlations for different flow regimes.
Several other studies present a heat transfer correlation for a particular flow regime; however, they ignore the method by which the extent of the flow regime is established. This thesis provides a definitive answer (in the context of stratified/annular flows) to: (i) whether a heat transfer correlation can always be independent of the thermal boundary condition and represented as a function of vapor quality, and (ii) whether a heat transfer correlation can be independently obtained for a flow regime without knowing the flow regime boundary (even if the flow regime boundary is represented through a separate and independent correlation). To obtain the results required to answer these questions, this study uses two numerical simulation tools: the approximate but highly efficient Quasi-1D simulation tool and the exact but more expensive 2D Steady Simulation tool. Using these tools and the approximate values of flow regime transitions, a deeper understanding of the current state of knowledge of flow regime maps and heat transfer correlations in shear-driven internal condensing flows is obtained. The ideas presented here can be extended to other flow regimes of shear-driven flows as well. Analogous correlations can also be obtained for internal condensers in gravity-driven and mixed-driven configurations.
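As an example of the kind of engineering correlation discussed, in which the heat transfer coefficient is written as a function of vapor quality alone and independently of the thermal boundary condition, Shah's widely used condensation correlation is often quoted in roughly the form below. This is reproduced from memory and hedged; verify coefficients and exponents against the original publication before use:

```python
def shah_htc_ratio(x: float, p_red: float) -> float:
    """Hedged sketch of Shah's (1979) condensation correlation: the
    two-phase heat transfer coefficient h_tp expressed as a multiplier
    on the liquid-only coefficient h_l, using only the vapor quality x
    and the reduced pressure p_red:

        h_tp / h_l = (1 - x)**0.8
                     + 3.8 * x**0.76 * (1 - x)**0.04 / p_red**0.38

    Note the correlation carries no information about the cooling method
    on the condenser surface, which is exactly the modelling assumption
    the thesis scrutinises."""
    return (1.0 - x) ** 0.8 + 3.8 * x ** 0.76 * (1.0 - x) ** 0.04 / p_red ** 0.38
```

At x = 0 the multiplier reduces to 1 (liquid-only flow), and it grows well above 1 at moderate qualities and low reduced pressure, reflecting the enhancement attributed to the vapor shear.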
Abstract:
This manuscript reports the overall development of a Ph.D. research project during the “Mechanics and advanced engineering sciences” course at the Department of Industrial Engineering of the University of Bologna. The project is focused on the development of a combustion control system for an innovative Spark Ignited engine layout. In detail, the controller is oriented to manage a prototypal engine equipped with a Port Water Injection system. The water injection technology allows an increase in combustion efficiency thanks to its knock mitigation effect, which permits keeping the combustion phasing closer to the optimal position with respect to the traditional layout. At the beginning of the project, the effects and the possible benefits achievable with water injection were investigated in a focused experimental campaign. The data obtained by combustion analysis were then processed to design a control-oriented combustion model. The model identifies the correlation between Spark Advance, combustion phasing and injected water mass, and two different strategies are presented, both based on an analytic and semi-empirical approach and therefore compatible with a real-time application. The model has been implemented in a combustion controller that manages water injection to reach the best achievable combustion efficiency while keeping knock levels under a pre-established threshold. Three different versions of the algorithm are described in detail. This controller was designed and pre-calibrated in a software-in-the-loop environment, and an experimental validation was later performed with a rapid control prototyping approach to highlight the performance of the system on a real set-up. To make the strategy implementable in an onboard application, an estimation algorithm for combustion phasing, necessary for the controller, was developed during the last phase of the PhD course, based on accelerometric signals.
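A control-oriented model of the kind described, a simple map between Spark Advance, injected water mass, and combustion phasing (here taken as CA50), can be sketched and inverted for control. All coefficients, the affine form, and the sign conventions are illustrative placeholders, not calibrated engine data or the thesis's actual model:

```python
def ca50(sa: float, m_w: float,
         a: float = 0.5, b: float = 0.2, c: float = 20.0) -> float:
    """Toy control-oriented combustion model: combustion phasing CA50
    (deg aTDC) modelled as an affine map of Spark Advance sa and
    injected water mass m_w. Advancing the spark moves CA50 earlier;
    water slows combustion and moves it later. Illustrative only."""
    return c - a * sa + b * m_w

def required_spark_advance(ca50_target: float, m_w: float,
                           a: float = 0.5, b: float = 0.2,
                           c: float = 20.0) -> float:
    """Inverse of the model: the Spark Advance that holds a target CA50
    for a given injected water mass, which is the kind of inversion a
    real-time combustion controller performs each cycle."""
    return (c + b * m_w - ca50_target) / a
```

Inverting an analytic, semi-empirical model like this is cheap enough to run in real time, which is the compatibility property the abstract emphasises; the real controller additionally constrains the result with a knock threshold.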