973 results for Quantum Chromodynamics, Helicity Rates, One-Loop Corrections, Bremsstrahlung Contributions, Heavy Quarks, Standard Model


Abstract:

Heavy-ion collisions are a powerful tool to study hot and dense QCD matter, the so-called Quark-Gluon Plasma (QGP). Since heavy quarks (charm and beauty) are dominantly produced in the early stages of the collision, they experience the complete evolution of the system. Measuring electrons from heavy-flavour hadron decays is one possible way to study the interaction of these particles with the QGP. With ALICE at the LHC, electrons can be identified with high efficiency and purity. A strong suppression of heavy-flavour decay electrons has been observed at high $p_{\mathrm{T}}$ in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV. Measurements in p-Pb collisions are crucial to understand cold nuclear matter effects on heavy-flavour production in heavy-ion collisions. The spectrum of electrons from the decays of hadrons containing charm and beauty was measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV. The heavy-flavour decay electrons were measured using the Time Projection Chamber (TPC) and the Electromagnetic Calorimeter (EMCal) of ALICE in the transverse-momentum range $2 < p_{\mathrm{T}} < 20$ GeV/c. The measurements were performed on two different data sets: minimum-bias collisions and EMCal-triggered data. The non-heavy-flavour electron background was removed using an invariant-mass method. The results are compatible with unity ($R_{\mathrm{pPb}} \approx 1$), and the cold nuclear matter effects in p-Pb collisions are small for electrons from heavy-flavour hadron decays.
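As an illustration of the quantity reported above, the sketch below computes a nuclear modification factor $R_{\mathrm{pPb}}(p_{\mathrm{T}})$ as the ratio of a p-Pb yield to a binary-collision-scaled pp reference. The spectra, binning and $\langle N_{\mathrm{coll}}\rangle$ value are invented placeholders, not ALICE data, and the normalisation convention shown is one common choice rather than necessarily the one used in the measurement.

import numpy as np

# Placeholder pT spectra (arbitrary units) and an assumed <Ncoll>; not ALICE data.
pt_edges = np.array([2, 4, 6, 8, 12, 20], dtype=float)            # GeV/c bin edges
yield_pPb = np.array([5.1e-3, 8.4e-4, 2.1e-4, 4.9e-5, 6.2e-6])    # dN/dpT in p-Pb
yield_pp  = np.array([7.2e-4, 1.2e-4, 3.0e-5, 7.0e-6, 9.0e-7])    # dN/dpT in pp
n_coll = 6.9                                                       # assumed <Ncoll> for p-Pb

r_pPb = yield_pPb / (n_coll * yield_pp)                            # R_pPb per pT bin
for lo, hi, r in zip(pt_edges[:-1], pt_edges[1:], r_pPb):
    print(f"{lo:4.0f}-{hi:<4.0f} GeV/c : R_pPb = {r:.2f}")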

Abstract:

We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand, all laboratories adhered to a stringent model-building protocol, and all used the same type of foil to cover the base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details of off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed stringent model construction techniques, quantitative model results show variability, most notably in surface slope, thrust spacing and the number of forward thrusts and backthrusts. One of the sources of the variability in model results is related to slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand and will result in lateral and vertical differences in peak and boundary friction angles, as well as in cohesion values, once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role, and even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature or of comparing them with numerical models. The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence it will remain difficult to scale quantitative results such as the number of thrusts, thrust spacing, and pop-up width from model to nature.

Abstract:

The demand for palliative care is increasing, yet there are few data on the best models of care and few well-validated interventions that translate current evidence into clinical practice. Supporting multidisciplinary patient-centered palliative care while successfully conducting a large clinical trial is a challenge. The Palliative Care Trial (PCT) is a pragmatic 2 x 2 x 2 factorial cluster randomized controlled trial that tests the ability of educational outreach visiting and case conferencing to improve patient-based outcomes such as performance status and pain intensity. Four hundred sixty-one consenting patients and their general practitioners (GPs) were randomized to: (1) GP educational outreach visiting versus usual care, (2) structured patient and caregiver educational outreach visiting versus usual care, and (3) a coordinated palliative care model of case conferencing versus the standard model of palliative care in Adelaide, South Australia (3:1 randomization). Main outcome measures included patient functional status over time, pain intensity, and resource utilization. Participants were followed longitudinally until death or November 30, 2004. The interventions aim to translate current evidence into clinical practice, and particular attention was paid in the trial's design to common pitfalls for clinical studies in palliative care. Given the need for evidence about optimal interventions and service delivery models that improve the care of people with life-limiting illness, the results of this rigorous, high-quality clinical trial will inform practice. Initial results are expected in mid-2005. (c) 2005 Elsevier Inc. All rights reserved.

Abstract:

This paper considers a model-based approach to the clustering of tissue samples of a very large number of genes from microarray experiments. It is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. Frequently in practice, there are also clinical data available on the cases from which the tissue samples were obtained. Here we investigate how to use the clinical data in conjunction with the microarray gene expression data to cluster the tissue samples. We propose two mixture model-based approaches in which the number of components in the mixture model corresponds to the number of clusters to be imposed on the tissue samples. One approach specifies the components of the mixture model to be the conditional distributions of the microarray data given the clinical data, with the mixing proportions also conditioned on the latter data. The other takes the components of the mixture model to represent the joint distributions of the clinical and microarray data. The approaches are demonstrated on some breast cancer data, as studied recently in van't Veer et al. (2002).
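A minimal sketch of the second (joint-distribution) approach described above, using a two-component Gaussian mixture fitted by EM as a stand-in for the paper's mixture models; the synthetic clinical and expression blocks, the diagonal covariance choice and the absence of any gene selection or dimension reduction are simplifying assumptions for illustration only.

import numpy as np
from sklearn.mixture import GaussianMixture

# Toy data: a few clinical covariates and a (drastically reduced) expression block.
rng = np.random.default_rng(1)
n_tissues, n_genes = 60, 20
clinical = rng.normal(size=(n_tissues, 3))                 # e.g. age, grade, node status
expression = rng.normal(size=(n_tissues, n_genes))
expression[:30] += 1.5                                     # implant two tissue groups

# Joint-distribution variant: model the concatenated [clinical, expression] vectors.
X = np.hstack([clinical, expression])
gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(X)
clusters = gmm.predict(X)                                  # clustering imposed on the tissues
print(np.bincount(clusters))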

Abstract:

In a deregulated electricity market, modeling and forecasting the spot price present a number of challenges. By applying wavelet and support vector machine techniques, a new time series model for short-term electricity price forecasting has been developed in this paper. The model employs both historical prices and other important information, such as load capacity and weather (temperature), to forecast the price one or more time steps ahead. The developed model has been evaluated with actual data from the Australian National Electricity Market. The simulation results demonstrate that the forecast model is capable of forecasting the electricity price with reasonable accuracy.
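A hedged sketch of the wavelet-plus-SVM idea: the price series is smoothed with a wavelet decomposition and the next price is regressed on lagged smoothed prices plus exogenous load and temperature. The series below are synthetic, PyWavelets and scikit-learn are assumed to be available, and the actual model in the paper may differ in its decomposition, features and kernel settings.

import numpy as np
import pywt
from sklearn.svm import SVR

# Synthetic half-hourly price, load and temperature series (placeholders, not NEM data).
rng = np.random.default_rng(2)
t = np.arange(500)
price = 30 + 10 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 2, t.size)
load = 5000 + 800 * np.sin(2 * np.pi * t / 48 + 0.3)
temp = 20 + 8 * np.sin(2 * np.pi * t / (48 * 7))

# Wavelet step: keep the approximation coefficients only, giving a denoised trend.
coeffs = pywt.wavedec(price, "db4", level=3)
coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db4")[: price.size]

# SVM step: regress price[t] on smoothed lags and exogenous load/temperature at t-1.
lags = 3
X = np.column_stack([smooth[i : -lags + i] for i in range(lags)] +
                    [load[lags - 1 : -1], temp[lags - 1 : -1]])
y = price[lags:]
model = SVR(C=10.0, epsilon=0.1).fit(X[:-48], y[:-48])     # hold out the last day
print("one-step-ahead forecast for the last held-out point:", model.predict(X[-1:]))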

Abstract:

Developed in 1882 by Jigoro Kano from his studies of the jujutsu schools, Kodokan Judo emerged within the school environment on three basic pillars: as a method of combat (martial art), as a method of physical training (physical education), and as a method of mental training (moral and intellectual development), in which the Do (the way) is the principal focus to be taught with a view to benefiting society. One of Kano's main contributions was the transformation of a martial combat practice into an educational method. This process took place at a historical moment marked by social change in Japan, which came under strong Western influence during the Meiji era. At that time, Eastern and Western values, thought, institutions and languages circulated and merged, marking a strong syncretism in various social spaces. Did Jigoro Kano absorb these influences when developing judo? This possible link between East and West, preserving part of traditional Japanese culture while allowing the influence of Western thought and practices, is extremely relevant today, since much is said about returning to the formulations that gave rise to Judo. After all, what are these formulations, and to what extent should we adopt them without deep reflection? Over the course of history, and more precisely at the end of the Second World War, judo lost much of the concepts and foundations that link it to Eastern language and thought, as well as its original educational meaning, as a result of its expansion throughout the world as a sporting practice. The study therefore has the following objectives: 1) to identify the foundations of educational judo in light of the East-West integration process; 2) to analyse the East-West relationship during the transformation of Judo from an educational method into a sporting practice; 3) to organise structuring elements for establishing the foundations of contemporary educational judo, analysing the influence of East-West integration on this process, starting from Jigoro Kano's systemic model of thought in creating Kodokan Judo and reorganising its conceptual references on the basis of the Science of Human Motricity (Ciência da Motricidade Humana, CMH). This is a theoretical, bibliographic study that draws on philosophical anthropology as methodological support. The research findings confirm the East-West integration process in the formulation of judo's educational and philosophical concepts, as well as the transformations of the symbolic systems of combat that mark its anthropological-philosophical evolution, from the practice of Bujutsu (17th-century military art) in feudal Japan to contemporary educational perspectives grounded in the Science of Human Motricity (CMH).

Abstract:

Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA.
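The EM iteration for the principal subspace described above can be written compactly; the numpy sketch below follows the standard closed-form EM updates for probabilistic PCA, with the initialisation, iteration count and toy data chosen arbitrarily for illustration.

import numpy as np

def ppca_em(X, q, n_iter=200, seed=0):
    """EM estimation of the principal subspace under the probabilistic PCA model."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False, bias=True)     # d x d sample covariance (ML estimate)
    W = rng.standard_normal((d, q))                 # loading matrix spanning the subspace
    sigma2 = 1.0                                    # isotropic noise variance
    for _ in range(n_iter):
        M = W.T @ W + sigma2 * np.eye(q)
        Minv = np.linalg.inv(M)
        SW = S @ W
        # Combined E- and M-step updates for W and sigma^2.
        W_new = SW @ np.linalg.inv(sigma2 * np.eye(q) + Minv @ W.T @ SW)
        sigma2 = np.trace(S - SW @ Minv @ W_new.T) / d
        W = W_new
    return W, sigma2, mu

# Usage: recover a 2-D principal subspace from 5-D toy data.
X = np.random.default_rng(1).standard_normal((500, 5)) @ np.diag([3.0, 2.0, 0.5, 0.5, 0.5])
W, sigma2, mu = ppca_em(X, q=2)
print(W.shape, round(sigma2, 3))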


Abstract:

A fundamental problem for any visual system with binocular overlap is the combination of information from the two eyes. Electrophysiology shows that binocular integration of luminance contrast occurs early in visual cortex, but a specific systems architecture has not been established for human vision. Here, we address this by performing binocular summation and monocular, binocular, and dichoptic masking experiments for horizontal 1 cycle per degree test and masking gratings. These data reject three previously published proposals, each of which predicts too little binocular summation and insufficient dichoptic facilitation. However, a simple development of one of the rejected models (the twin summation model) and a completely new model (the two-stage model) provide very good fits to the data. Two features common to both models are gently accelerating (almost linear) contrast transduction prior to binocular summation and suppressive ocular interactions that contribute to contrast gain control. With all model parameters fixed, both models correctly predict (1) systematic variation in psychometric slopes, (2) dichoptic contrast matching, and (3) high levels of binocular summation for various levels of binocular pedestal contrast. A review of evidence from elsewhere leads us to favor the two-stage model. © 2006 ARVO.
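The sketch below is a schematic two-stage gain-control response of the kind the abstract favours: a gently accelerating monocular transducer with interocular suppression, binocular summation, and a second accelerating stage with divisive gain control. The functional forms, exponents and constants are illustrative placeholders, not the published parameterisation or fitted values.

def two_stage_response(cL, cR, m=1.3, S=1.0, p=8.0, q=6.5, Z=0.01):
    """Schematic two-stage binocular contrast response (illustrative parameters only)."""
    # Stage 1: nearly linear transduction with suppression driven by both eyes.
    denom = S + cL + cR
    left, right = cL**m / denom, cR**m / denom
    # Binocular summation of the two monocular signals.
    B = left + right
    # Stage 2: accelerating nonlinearity with divisive contrast gain control.
    return B**p / (Z + B**q)

# Binocular presentation produces a larger response than monocular viewing at equal contrast.
print(two_stage_response(0.1, 0.1), two_stage_response(0.1, 0.0))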

Abstract:

Conventional differential scanning calorimetry (DSC) techniques are commonly used to quantify the solubility of drugs within polymeric-controlled delivery systems. However, the nature of the DSC experiment, and in particular the relatively slow heating rates employed, limit its use to the measurement of drug solubility at the drug's melting temperature. Here, we describe the application of hyper-DSC (HDSC), a variant of DSC involving extremely rapid heating rates, to the calculation of the solubility of a model drug, metronidazole, in silicone elastomer, and demonstrate that the faster heating rates permit the solubility to be calculated under non-equilibrium conditions such that the solubility better approximates that at the temperature of use. At a heating rate of 400°C/min (HDSC), metronidazole solubility was calculated to be 2.16 mg/g compared with 6.16 mg/g at 20°C/min. © 2005 Elsevier B.V. All rights reserved.

Abstract:

Purpose – A binary integer programming model for the simple assembly line balancing problem (SALBP), which is well known as SALBP-1, was formulated more than 30 years ago. Since then, a number of researchers have extended the model for variants of the assembly line balancing problem. The model is still prevalent nowadays, mainly because of the lower and upper bounds on task assignment, which avoid a significant increase in the number of decision variables. The purpose of this paper is to use an example to show that the model may lead to a confusing solution. Design/methodology/approach – The paper provides a remedial constraint set for the model to rectify the disordered-sequence problem. Findings – The paper presents proof that the assembly line balancing model formulated by Patterson and Albracht may lead to a confusing solution. Originality/value – No one has previously found that the commonly used model is incorrect.
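For orientation, the sketch below states a generic SALBP-1 binary integer program (assign each task to exactly one station, respect the cycle time, enforce precedence via station indices, minimise the number of opened stations) using PuLP on a made-up five-task instance. It is a textbook-style formulation, not the Patterson and Albracht model with its station-index bounds, and it does not include the remedial constraint set proposed in the paper.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Hypothetical instance: task times, precedence relations (i before j), cycle time.
times = {1: 3, 2: 4, 3: 2, 4: 5, 5: 4}
prec = [(1, 3), (2, 3), (3, 4), (3, 5)]
c = 8
K = range(1, len(times) + 1)                    # trivial upper bound on station count

prob = LpProblem("SALBP_1", LpMinimize)
x = LpVariable.dicts("x", [(i, k) for i in times for k in K], cat=LpBinary)  # task i at station k
y = LpVariable.dicts("y", K, cat=LpBinary)                                   # station k opened

prob += lpSum(y[k] for k in K)                  # minimise the number of stations
for i in times:                                 # each task assigned to exactly one station
    prob += lpSum(x[i, k] for k in K) == 1
for k in K:                                     # station workload within cycle time, only if opened
    prob += lpSum(times[i] * x[i, k] for i in times) <= c * y[k]
for i, j in prec:                               # precedence: station(j) >= station(i)
    prob += lpSum(k * x[j, k] for k in K) >= lpSum(k * x[i, k] for k in K)

prob.solve()
print({i: next(k for k in K if x[i, k].value() > 0.5) for i in times})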

Abstract:

Investigations into the modelling techniques that depict the transport of discrete phases (gas bubbles or solid particles) and model biochemical reactions in a bubble column reactor are discussed here. The mixture model was used to calculate gas-liquid, solid-liquid and gas-liquid-solid interactions. Multiphase flow is a difficult phenomenon to capture, particularly in bubble columns where the major driving force is caused by the injection of gas bubbles. The gas bubbles cause a large density difference to occur that results in transient multi-dimensional fluid motion. Standard design procedures do not account for the transient motion, due to the simplifying assumptions of steady plug flow. Computational fluid dynamics (CFD) can assist in expanding the understanding of complex flows in bubble columns by characterising the flow phenomena for many geometrical configurations. Therefore, CFD has a role in the education of chemical and biochemical engineers, providing examples of flow phenomena that many engineers may not experience, even through experimentation. The performance of the mixture model was investigated for three domains (plane, rectangular and cylindrical) and three flow models (laminar, k-ε turbulence and the Reynolds stresses). This investigation raised many questions about how gas-liquid interactions are captured numerically. To answer some of these questions, the analogy between thermal convection in a cavity and gas-liquid flow in bubble columns was invoked. This involved modelling the buoyant motion of air in a narrow cavity for a number of turbulence schemes. The difference in density was caused by a temperature gradient that acted across the width of the cavity. Multiple vortices were obtained when the Reynolds stresses were utilised with the addition of a basic flow profile after each time step. To implement the three-phase models, an alternative mixture model was developed and compared against a commercially available mixture model for three turbulence schemes. The scheme where just the Reynolds stresses model was employed predicted the transient motion of the fluids quite well for both mixture models. Solid-liquid and then alternative formulations of the gas-liquid-solid model were compared against one another. The alternative form of the mixture model was found to perform particularly well for both gas and solid phase transport when calculating two- and three-phase flow. The improvement in the solutions obtained was a result of the inclusion of the Reynolds stresses model and differences in the mixture models employed. The differences between the alternative mixture models were found in the volume fraction equation (flux and deviatoric stress tensor terms) and the viscosity formulation for the mixture phase.
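For reference, the mixture (algebraic slip) model referred to above is commonly written in the following textbook form, with mixture density $\rho_m$, mass-averaged mixture velocity $\mathbf{u}_m$ and a drift velocity $\mathbf{u}_{dr,p}$ for each dispersed phase $p$; the thesis's alternative formulation differs in the volume fraction and viscosity terms, and those differences are not reproduced here.

\[
\rho_m = \sum_k \alpha_k \rho_k, \qquad
\mathbf{u}_m = \frac{1}{\rho_m}\sum_k \alpha_k \rho_k \mathbf{u}_k, \qquad
\frac{\partial \rho_m}{\partial t} + \nabla\!\cdot\!\left(\rho_m \mathbf{u}_m\right) = 0,
\]
\[
\frac{\partial (\alpha_p \rho_p)}{\partial t} + \nabla\!\cdot\!\left(\alpha_p \rho_p \mathbf{u}_m\right)
= -\,\nabla\!\cdot\!\left(\alpha_p \rho_p \mathbf{u}_{dr,p}\right), \qquad
\mathbf{u}_{dr,p} = \mathbf{u}_p - \mathbf{u}_m .
\]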

Abstract:

An investigation of the different approaches used by Expert Systems researchers to solve problems in the domain of Mechanical Design and Expert Systems was carried out. The techniques used for conventional formal logic programming were compared with those used when applying Expert Systems concepts. A literature survey of design processes was also conducted with a view to adopting a suitable model of the design process. A model, comprising a variation on two established ones, was developed and applied to a problem within what are described as class 3 design tasks. The research explored the application of these concepts to Mechanical Engineering Design problems and their implementation on a microcomputer using an Expert System building tool. It was necessary to explore the use of Expert Systems in this manner so as to bridge the gap between their use as a control structure and their use for detailed analytical design. The former application is well researched; this thesis discusses the latter. Some Expert System building tools available to the author at the beginning of his work were evaluated specifically for their suitability for Mechanical Engineering design problems. Microsynics was found to be the most suitable on which to implement a design problem because of its simple but powerful semantic net knowledge representation structure and its ability to use other types of representation schemes. Two major implementations were carried out: the first involved a design program for a helical compression spring, and the second a gear-pair system design. Two concepts were proposed in the thesis for the modelling and implementation of design systems involving many equations. The method proposed enables equation manipulation and analysis using a combination of frames, semantic nets and production rules. The use of semantic nets for purposes other than psychology and natural language interpretation is quite new and represents one of the major contributions to knowledge by the author. The development of a purpose-built shell program for this type of design problem was recommended as an extension of the research. Microsynics may usefully be used as a platform for this development.
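As a toy illustration of combining a semantic net with a production rule for equation handling in spring design (one of the two implementations mentioned above), the Python sketch below stores facts as a small net and fires a rule that evaluates the standard helical-spring rate formula k = G d^4 / (8 D^3 n_a). The representation, values and rule are hypothetical and bear no relation to the Microsynics implementation.

# Toy semantic net: nodes with "is_a"/"has"/"value" links for a helical compression spring.
net = {
    "spring": {"is_a": "mechanical_component",
               "has": ["wire_diameter", "coil_diameter", "active_coils", "shear_modulus"]},
    "wire_diameter": {"value": 0.004},     # m (hypothetical)
    "coil_diameter": {"value": 0.032},     # m (hypothetical)
    "active_coils":  {"value": 8},
    "shear_modulus": {"value": 79e9},      # Pa, typical spring steel
}

def rule_spring_rate(net):
    """Production rule: if all inputs are known, assert the spring rate k = G d^4 / (8 D^3 n_a)."""
    needed = ["wire_diameter", "coil_diameter", "active_coils", "shear_modulus"]
    if all("value" in net.get(n, {}) for n in needed):
        d, D, n_a, G = (net[n]["value"] for n in needed)
        net["spring_rate"] = {"value": G * d**4 / (8 * D**3 * n_a), "unit": "N/m"}

rule_spring_rate(net)
print(net["spring_rate"])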

Abstract:

In this thesis, I describe studies on the fabrication, spectral characteristics and applications of tilted fibre gratings (TFGs) with small, large and 45° tilted structures, and novel developments in the fabrication of fibre Bragg gratings (FBGs) and long-period gratings (LPGs) in normal silica and mid-infrared (mid-IR) glass fibres using a near-IR femtosecond laser. One of the major contributions presented in this thesis is the systematic investigation of the structures, inscription methods and spectral, polarisation dependent loss (PDL) and thermal characteristics of TFGs with small (<45°), large (>45°) and 45° tilted structures. I have experimentally characterised TFGs, obtaining relationships between the radiation angle, the central wavelength of the radiation profile, the Bragg resonance and the tilt angle, which are consistent with theoretical simulation based on mode-coupling theory. Furthermore, thermal responses have been measured for these three types of TFGs, showing that the transmission spectra of large and 45° TFGs are insensitive to temperature change, unlike normal and small-angle tilted FBGs. Based on their distinctive optical properties, TFGs have been developed into an interrogation system and sensors, which form the other significant contributions of the work presented in this thesis. The 10°-TFG based 800 nm WDM interrogation system can function not just as an in-fibre spectrum analyser but also possesses refractive-index sensing capability. By utilising their unique polarisation properties, the 81°-TFG based sensors are capable of sensing transverse loading and twisting with sensitivities of 2.04 pW/(kg/m) and 145.90 pW/rad, respectively. The final but most important contribution from the research work presented in this thesis is the development of novel grating inscription techniques using a near-IR femtosecond laser. A number of LPGs and FBGs were successfully fabricated in normal silica and mid-IR glass fibres using point-by-point and phase-mask techniques. LPGs and 1st- and 2nd-order FBGs have been fabricated in these mid-IR glass fibres, showing resonances covering the wavelength range from 1200 to 1700 nm with strengths of up to 13 dB. In addition, the thermal and strain sensitivities of these gratings have been systematically investigated. All the results from these initial but systematic works will provide useful characteristic information for future fibre-grating based devices and applications in the mid-IR range.
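For context, a commonly quoted phase-matching relation for a tilted fibre grating links the Bragg resonance to the tilt angle θ and the grating period Λ_g measured normal to the fringe planes; this is the standard textbook form and may differ in notation from the relations derived in the thesis:

\[
\lambda_{B} = \frac{2\, n_{\mathrm{eff}}\, \Lambda_{g}}{\cos\theta},
\]

so that for θ = 0 the usual FBG condition $\lambda_B = 2 n_{\mathrm{eff}} \Lambda_g$ is recovered, while larger tilt angles shift the resonance and redirect the coupled light out of the core, with a 45° tilt tapping light out roughly perpendicular to the fibre axis.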

Abstract:

This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches to analyse High Throughput Screening datasets, which may include thousands of data points with high dimensions. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has considerably increased in recent years. Traditional methods, looking at tables and graphical plots for analysing relationships between measured activities and the structure of compounds, have not been feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those with high dimensions. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only a few properties of compounds at a time. We believe that a latent trait model (LTM) with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model can deal with either continuous data or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can picture the distribution of the data from magnification factor and curvature plots. Rather than obtaining the useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive fashion (top-down): the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E- and M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
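A minimal sketch of the top-down, EM-trained hierarchy described above, with scikit-learn's Gaussian mixture standing in for a latent trait model (the real LTM involves a non-linear latent-space mapping and 2-D visualisation plots, which this sketch omits); the data, component counts and responsibility threshold are arbitrary illustrations.

import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic high(ish)-dimensional data with three overlapping groups.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(200, 10)) for m in (0.0, 2.0, 4.0)])

# Top level: one model for the whole data set, trained with EM.
top = GaussianMixture(n_components=3, random_state=0).fit(X)
resp = top.predict_proba(X)                     # responsibilities from the E-step

# "User-selected" region: points mostly explained by component 0 are modelled
# again at the next level of the hierarchy, giving a refined sub-model.
subset = X[resp[:, 0] > 0.5]
child = GaussianMixture(n_components=2, random_state=0).fit(subset)
print(top.weights_, child.weights_)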