964 results for Plug Flow With Axial Dispersion Model
Abstract:
In this work, aqueous suspensions of charge-stabilized colloidal particles were investigated with respect to their behaviour under the influence of electric fields. In particular, the electrophoretic mobility µ was studied over a wide range of particle concentrations, in order to compare the individual behaviour of single particles with the hitherto little-studied collective behaviour of particle ensembles (specifically of fluid-like or crystalline ordered ensembles). For this purpose, a super-heterodyne Doppler velocimetric light-scattering experiment with integral and local data acquisition was designed, which allows the velocity of the particles in electric fields to be studied. The experiment was first tested successfully in the regime of non-ordered and fluid-like ordered suspensions. Subsequently, this instrument made it possible for the first time to investigate the electrophoretic behaviour of crystalline ordered suspensions. A complex flow behaviour was observed and documented in detail, revealing effects not previously reported in this context, such as block flow, shear banding, shear melting and elastic resonances. On the other hand, this behaviour made it necessary to develop a new evaluation routine for µ in the crystalline state, for which the heterodyne light-scattering theory had to be extended to the super-heterodyne case with shear. This was first done for non-ordered systems. This approximate description proved sufficient to interpret the light-scattering behaviour of sheared crystalline systems under the given experimental conditions. As a further important result, a general mobility-concentration curve could thus be obtained. It shows the already known increase at low particle concentrations and a plateau at intermediate concentrations; at high concentrations the mobility decreases again. For the interpretation of this behaviour in terms of particle charge, only theories for non-interacting particles are currently available. Applying these, one finds a surprisingly good agreement between the electrophoretically determined particle charge Z*µ and numerically determined effective particle charges Z*PBC.
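For orientation, the measured quantity has a compact standard definition (generic light-scattering relations, not notation specific to this thesis): the mobility is the drift velocity per unit applied field, and in Doppler velocimetry the drift velocity follows from the measured Doppler shift at scattering vector q.

```latex
\mu = \frac{v}{E}, \qquad \omega_D = \mathbf{q}\cdot\mathbf{v}
```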
Abstract:
The Standard Model (SM) of particle physics is a theory describing three of the four fundamental forces. In this model the Cabibbo-Kobayashi-Maskawa (CKM) matrix describes the transformation between the mass and weak eigenstates of quarks. The matrix properties can be visualized as triangles in the complex plane. A precise measurement of all triangle parameters can be used to verify the validity of the SM. The least precisely measured parameter of the triangle is related to the CKM element |Vtd|, accessible through the mixing frequency (oscillation) of neutral B mesons, where mixing is the transition of a neutral meson into its anti-particle and vice versa. It is possible to calculate the CKM element |Vtd| and the related element |Vts| by measuring the mass differences Δm_d (Δm_s) between the mass eigenstates of the neutral B_d (B_s) meson system. This measurement is accomplished by tagging the initial and final states of decaying B mesons and determining their lifetime. Currently the Fermilab Tevatron Collider (providing p-pbar collisions at sqrt(s) = 1.96 TeV) is the only place where B_s oscillations can be studied. The first selection of the "golden", fully hadronic decay mode B_s → D_s π X (D_s → φ π) at DØ is presented in this thesis. All data taken between April 2002 and August 2007 with the DØ detector, corresponding to an integrated luminosity of ∫L dt = 2.8 fb^{-1}, are used. The oscillation frequency Δm_s and the ratio |Vtd|/|Vts| are determined as Δm_s = 16.6 +0.5/-0.4 (stat) +0.4/-0.3 (sys) ps^{-1} and |Vtd|/|Vts| = 0.213 +0.004/-0.003 (exp) ± 0.008 (theor). These results are consistent with the Standard Model expectations, and no evidence for new physics is observed.
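The extraction of |Vtd|/|Vts| from the two mass differences uses the standard mixing relation, quoted here for context (a textbook formula, not specific to this thesis); ξ encodes the decay constants and bag parameters taken from lattice QCD:

```latex
\frac{\Delta m_s}{\Delta m_d}
  = \frac{m_{B_s}}{m_{B_d}}\,\xi^{2}\left|\frac{V_{ts}}{V_{td}}\right|^{2},
\qquad
\xi = \frac{f_{B_s}\sqrt{\hat{B}_{B_s}}}{f_{B_d}\sqrt{\hat{B}_{B_d}}}
```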
Abstract:
Over the last decade, multidetector row computed tomography (MDCT) has come into common use in veterinary medicine. This technology offers increased spatial and temporal resolution and can cover a wider scanning range in a shorter scanning time, providing an advanced imaging modality. Computed tomographic angiographic studies are commonly used in veterinary medicine to evaluate vascular structures of the abdomen and the thorax. Pulmonary pathology is a very common condition in feline patients and is usually evaluated further with computed tomography. To date, few references on the normal computed tomographic appearance of the feline thorax have been reported. In this study, a computed tomographic pulmonary angiography (CTPA) protocol is reported in normal cats and compared with the available anatomical references. Using a 64-detector MDCT scanner, our CTPA protocol achieved high-resolution images of the pulmonary arteries, pulmonary veins and bronchial lumen down to the level of the minor segmental branches. The feline bronchial tree demonstrates a mixed-type architecture, with a monopodial pattern observed in most anatomical parts and a dichotomous pattern seen in the accessory lobe. The arterial and venous architecture is similar to the bronchial one. Statistical analysis demonstrates a linear correlation between tracheal diameter and feline body weight. Vascular variations were noticed. The pulmonary venous system enters the left atrium through three ostia (the left cranial ostium, formed by the anastomosis of the cranial and caudal portions of the left cranial pulmonary vein; the right ostium, formed by the anastomosis of the right cranial and middle pulmonary veins; and the caudal ostium, formed by the anastomosis of the right and left caudal pulmonary veins). In conclusion, CTPA is applicable in feline patients and provides excellent imaging of the pulmonary arterial, venous and bronchial systems down to the level of the minor segmental branches.
Abstract:
In this thesis, a strategy to model the behavior of fluids and their interaction with deformable bodies is proposed. The fluid domain is modeled using the lattice Boltzmann method, thus analyzing the fluid dynamics from a mesoscopic point of view. It has been proved that the solution provided by this method is equivalent to solving the Navier-Stokes equations for an incompressible flow with second-order accuracy. Slender elastic structures idealized through beam finite elements are used, with large displacements accounted for by the corotational formulation. Structural dynamics is computed using the Time Discontinuous Galerkin method. Two different solution procedures are therefore used, one for the fluid domain and one for the structural part. These two solvers need to communicate and exchange several pieces of information, i.e., stresses, velocities and displacements. In order to guarantee a continuous, effective and mutual exchange of information, a coupling strategy consisting of three different algorithms has been developed and numerically tested. In particular, the effectiveness of the three algorithms is assessed in terms of the interface energy artificially produced by the approximate fulfilment of compatibility and equilibrium conditions at the fluid-structure interface. The proposed coupled approach is used to solve different fluid-structure interaction problems, i.e., cantilever beams immersed in a viscous fluid, the impact of a ship's hull on the marine free surface, blood flow in deformable vessels, and even flapping wings simulating the take-off of a butterfly. The good results achieved in each application highlight the effectiveness of the proposed methodology and of the C++ software developed to successfully approach several two-dimensional fluid-structure interaction problems.
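To make the mesoscopic picture concrete, the following is a minimal D2Q9 BGK lattice Boltzmann update (collision plus streaming) sketched in Python; it illustrates the general method under periodic boundaries and is not the thesis's C++ solver:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (standard constants)
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order equilibrium distribution for each of the 9 directions."""
    cu = np.einsum('id,xyd->ixy', c, u)            # c_i . u
    usq = np.einsum('xyd,xyd->xy', u, u)           # |u|^2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One BGK collision + streaming step on populations f of shape (9, nx, ny)."""
    rho = f.sum(axis=0)                                   # macroscopic density
    u = np.einsum('id,ixy->xyd', c, f) / rho[..., None]   # macroscopic velocity
    f += -(f - equilibrium(rho, u)) / tau                 # BGK relaxation (collision)
    for i, ci in enumerate(c):                            # streaming, periodic wrap
        f[i] = np.roll(f[i], shift=tuple(ci), axis=(0, 1))
    return f
```

Here the relaxation time τ sets the kinematic viscosity via ν = (τ − 1/2)/3 in lattice units, which is what ties the scheme to the incompressible Navier-Stokes limit mentioned above.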
Abstract:
The Standard Model of elementary particle physics was developed to describe the fundamental particles which constitute matter and the interactions between them. The Large Hadron Collider (LHC) at CERN in Geneva was built to solve some of the remaining open questions in the Standard Model and to explore physics beyond it, by colliding two proton beams at world-record centre-of-mass energies. The ATLAS experiment is designed to reconstruct particles and their decay products originating from these collisions. The precise reconstruction of particle trajectories plays an important role in the identification of particle jets which originate from bottom quarks (b-tagging). This thesis describes the step-wise commissioning of the ATLAS track reconstruction and b-tagging software and one of the first measurements of the b-jet production cross section in pp collisions at sqrt(s)=7 TeV with the ATLAS detector. The performance of the track reconstruction software was studied in great detail, first using data from cosmic ray showers and then collisions at sqrt(s)=900 GeV and 7 TeV. The good understanding of the track reconstruction software allowed a very early deployment of the b-tagging algorithms. First studies of these algorithms and the measurement of the b-tagging efficiency in the data are presented. They agree well with predictions from Monte Carlo simulations. The b-jet production cross section was measured with the 2010 dataset recorded by the ATLAS detector, employing muons in jets to estimate the fraction of b-jets. The measurement is in good agreement with the Standard Model predictions.
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region, in Italy, is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Direct rain gauge measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. The communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
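The two-part structure can be written compactly: a probit model for the probability of rain and a Gamma model for positive amounts, both driven by the log-scale radar value. A minimal log-likelihood sketch in Python (illustrative only; the parameter names are assumptions, and the spatial random effects of the hierarchical model are omitted):

```python
import numpy as np
from scipy.stats import norm, gamma

def twopart_loglik(params, y, log_radar):
    """Log-likelihood of a two-part semicontinuous rainfall model:
    probit for P(rain > 0), Gamma for positive amounts, with linear
    predictors in the log-scale radar value."""
    a0, a1, b0, b1, shape = params
    p_rain = norm.cdf(a0 + a1 * log_radar)        # probit rain probability
    wet = y > 0
    ll = np.sum(np.log1p(-p_rain[~wet]))          # dry observations: log(1 - p)
    mu = np.exp(b0 + b1 * log_radar[wet])         # Gamma mean for wet observations
    ll += np.sum(np.log(p_rain[wet])
                 + gamma.logpdf(y[wet], a=shape, scale=mu / shape))
    return ll
```

With scale = mu/shape the Gamma mean equals mu, so the positive-amount part regresses the expected rainfall on the radar covariate, as in the calibration described above.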
Abstract:
In recent years, handling credit risk has become increasingly important. Credit risk is the risk associated with the possibility of bankruptcy. More precisely, if a derivative provides for a payment at a certain time T but the counterparty defaults before that time, the payment cannot be effectively performed at maturity, so the owner of the contract loses all or part of it. This means that the payoff of the derivative, and consequently its price, depends both on the underlying of the basic derivative and on the bankruptcy risk of the counterparty. To value and hedge credit risk in a consistent way, one needs to develop a quantitative model. We have studied analytical approximation formulas and numerical methods, such as the Monte Carlo method, in order to calculate the price of a bond. We have illustrated how to obtain fast and accurate pricing approximations by expanding the drift and diffusion as a Taylor series, and we have compared the second- and third-order approximations of the bond and call prices with an accurate Monte Carlo simulation. We have analysed the JDCEV model with constant or stochastic interest rate. We have provided numerical examples that illustrate the effectiveness and versatility of our methods. We have used Wolfram Mathematica and Matlab.
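To illustrate the Monte Carlo benchmark idea, here is a generic short-rate simulation pricing a zero-coupon bond in Python; the CIR dynamics and parameter values are assumptions for illustration, not the thesis's JDCEV implementation:

```python
import numpy as np

def mc_zcb_price(r0, kappa, theta, sigma, T, n_steps=252, n_paths=100_000, seed=0):
    """Monte Carlo price of a zero-coupon bond under a CIR short rate:
    dr = kappa*(theta - r) dt + sigma*sqrt(r) dW  (Euler, full truncation)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)              # accumulates \int_0^T r_t dt per path
    for _ in range(n_steps):
        integral += r * dt
        rp = np.maximum(r, 0.0)               # full truncation keeps sqrt(r) real
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        r = r + kappa * (theta - rp) * dt + sigma * np.sqrt(rp) * dW
    return np.exp(-integral).mean()           # P(0,T) = E[exp(-\int_0^T r_t dt)]

# Example: a 1-year bond with illustrative parameters
print(mc_zcb_price(r0=0.03, kappa=0.5, theta=0.04, sigma=0.1, T=1.0))
```

The JDCEV setting additionally couples a default intensity to the diffusion; this sketch only shows the generic Monte Carlo discounting step against which analytical expansions can be checked.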
Abstract:
One of the main goals of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN in Geneva is the search for new physics beyond the Standard Model. In 2011, proton-proton collisions were performed at the LHC at a center-of-mass energy of 7 TeV, and an integrated luminosity of 4.7 fb^{-1} was recorded. With this dataset, one of the most promising theories beyond the Standard Model, supersymmetry, can be tested beyond the limits achieved thus far. Final states of supersymmetry events at the LHC contain highly energetic jets and sizeable missing transverse energy. The additional requirement of highly energetic leptons simplifies the control of the backgrounds. This work presents results of a search for supersymmetry in the inclusive dilepton channel. Special emphasis is put on the search within the Gauge-Mediated Supersymmetry Breaking (GMSB) scenario, in which the supersymmetry breaking is mediated via gauge fields. Statistically independent Control Regions for the dominant Standard Model backgrounds, as well as Signal Regions for a discovery of a possible supersymmetry signal, are defined and optimized. A simultaneous fit of the background normalizations in the Control Regions via the profile likelihood method allows for a precise prediction of the backgrounds in the Signal Regions and thus increases the sensitivity to several supersymmetry models. Systematic uncertainties on the background prediction are constrained in a data-driven way via the jet multiplicity distribution in the Control Regions. The observed data are consistent with the Standard Model expectation. New limits within the GMSB and minimal Supergravity (mSUGRA) scenarios, as well as for several simplified supersymmetry models, are set or extended.
Abstract:
The Bora wind is a mesoscale phenomenon which typically affects the Adriatic Sea basin for several days each year, especially during winter. The Bora has been studied for its intense outbreak across the Dinaric Alps. Its properties are widely discussed in the literature, and scientific papers usually focus on the eastern Adriatic coast, where strong turbulence and severe gust intensity are more pronounced. However, the impact of the Bora can be significant also over Italy, and not only in terms of wind speed. Depending on the synoptic pressure pattern (cyclonic or anticyclonic Bora) and on the season, heavy snowfall, severe storms, storm surges and floods can occur along the Adriatic coast and on the windward flanks of the Apennines. In the present work, five Bora cases that occurred in recent years have been selected and their evolution simulated with the BOLAM-MOLOCH model set, developed at ISAC-CNR in Bologna. Each case study has been addressed by a control run and by several sensitivity tests, performed with the purpose of better understanding the role played by air-sea latent and sensible heat fluxes. The tests show that the removal of the fluxes induces modifications in the wind approaching the coast and a decrease in the total precipitation amount predicted over Italy. In order to assess the role of the heat fluxes, further analysis has been carried out: column-integrated water vapour fluxes have been computed along the Italian coastline, and an atmospheric water balance has been evaluated inside a box volume over the Adriatic Sea. The balance computation shows that, although the latent heat flux produces a significant impact on the precipitation field, its contribution to the balance is relatively minor. The most significant and long-lasting case study, that of February 2012, has been studied in more detail in order to explain the impressive drop in the total precipitation amount simulated in the sensitivity tests with removed heat fluxes with respect to the CNTRL run. In these experiments, the relative humidity and potential temperature distributions over different cross-sections have been examined. With respect to the CNTRL run, a drier and more stable boundary layer, characterised by a more pronounced wind shear at the lower levels, is observed to establish itself above the Adriatic Sea. Finally, in order to demonstrate that the interaction of the Bora flow with the Apennines also plays a crucial role, sensitivity tests varying the orography height have been considered. The results of these tests indicate that the propagation of the Bora over the Adriatic Sea, and in turn its meteorological impact over Italy, is influenced both by the large air-sea heat fluxes and by the interaction with the Apennines, which decelerates the upstream flow.
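For reference, the column-integrated water vapour flux entering such a balance is the standard quantity below (generic notation, not necessarily the thesis's), where q is the specific humidity, v the horizontal wind vector, g gravity, and the integral runs over the pressure column:

```latex
\mathbf{Q} \;=\; \frac{1}{g}\int_{p_{\mathrm{top}}}^{p_{\mathrm{sfc}}} q\,\mathbf{v}\;\mathrm{d}p
```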
Abstract:
The width of the 21 cm line (HI) emitted by spiral galaxies depends on the physical processes that release energy into the interstellar medium (ISM). This quantity is called the velocity dispersion (σ) and is proportional, first of all, to the thermal kinetic energy of the gas. The accepted theoretical picture predicts that the neutral hydrogen component (HI) exists in the ISM in two stable phases: a cold one (CNM, with σ ~ 0.8 km/s) and a warm one (WNM, with σ ~ 8 km/s). However, this is called into question by the observation that the HI gas usually has larger velocity dispersions. This suggests the presence of turbulence in the ISM, although the energy sources remain unknown. In this thesis we aim to shed new light on this topic. We have studied the HI line emission of two nearby galaxies, NGC 6946 and M 101; for the latter we used new deep observations obtained with the Westerbork radio interferometer. Through a Gaussian fitting procedure, we produced dispersion maps of the two galaxies. For both of them, we compared the σ values measured in the spiral arms with those in the interarm regions. In NGC 6946 we found that, in both arms and interarms, σ grows with the column density, while we obtained the opposite for M 101. Using a statistical analysis, we did not find a significant difference between the arm and interarm dispersion distributions. Producing star formation rate density (SFRD) maps of the galaxies, we studied their global and local relations with the HI kinetic energy, as inferred from the measured dispersions. For NGC 6946 we obtained a good log-log correlation, in agreement with a simple model of supernova-feedback-driven turbulence. This shows that in this galaxy turbulent motions are mainly induced by stellar activity. For M 101 we did not find an analogous correlation, since the gas kinetic energy appears constant with the SFRD. We think this may indicate that in this galaxy turbulence is also driven by accretion of extragalactic material.
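The dispersion maps rest on fitting a Gaussian profile to each line-of-sight HI spectrum, with σ taken from the fitted width. A minimal per-spectrum sketch in Python (illustrative; it assumes a single-component profile, whereas the thesis's actual pipeline is its own):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, v0, sigma):
    """Gaussian line profile: amplitude, centroid velocity, dispersion."""
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def fit_dispersion(velocity, spectrum):
    """Fit one HI spectrum; returns the velocity dispersion sigma (km/s)."""
    p0 = [spectrum.max(), velocity[np.argmax(spectrum)], 10.0]  # initial guess
    popt, _ = curve_fit(gauss, velocity, spectrum, p0=p0)
    return abs(popt[2])
```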
Abstract:
Background Prognostic models have been developed for patients infected with HIV-1 who start combination antiretroviral therapy (ART) in high-income countries, but not for patients in sub-Saharan Africa. We developed two prognostic models to estimate the probability of death in patients starting ART in sub-Saharan Africa. Methods We analysed data for adult patients who started ART in four scale-up programmes in Côte d'Ivoire, South Africa, and Malawi from 2004 to 2007. Patients lost to follow-up in the first year were excluded. We used Weibull survival models to construct two prognostic models: one with CD4 cell count, clinical stage, bodyweight, age, and sex (CD4 count model); and one that replaced CD4 cell count with total lymphocyte count and severity of anaemia (total lymphocyte and haemoglobin model), because CD4 cell count is not routinely measured in many African ART programmes. Death from all causes in the first year of ART was the primary outcome. Findings 912 (8·2%) of 11 153 patients died in the first year of ART. 822 patients were lost to follow-up and not included in the main analysis; 10 331 patients were analysed. Mortality was strongly associated with high baseline CD4 cell count (≥200 cells per μL vs <25; adjusted hazard ratio 0·21, 95% CI 0·17–0·27), WHO clinical stage (stages III–IV vs I–II; 3·45, 2·43–4·90), bodyweight (≥60 kg vs <45 kg; 0·23, 0·18–0·30), and anaemia status (none vs severe: 0·27, 0·20–0·36). Other independent risk factors for mortality were low total lymphocyte count, advanced age, and male sex. Probability of death at 1 year ranged from 0·9% (95% CI 0·6–1·4) to 52·5% (43·8–61·7) with the CD4 model, and from 0·9% (0·5–1·4) to 59·6% (48·2–71·4) with the total lymphocyte and haemoglobin model. Both models accurately predict early mortality in patients starting ART in sub-Saharan Africa compared with observed data. Interpretation Prognostic models should be used to counsel patients, plan health services, and predict outcomes for patients with HIV-1 infection in sub-Saharan Africa.
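Weibull survival models of the kind described can be sketched as a censored maximum-likelihood fit. The following Python sketch is a generic illustration, not the authors' code; the covariate matrix and parameterization are placeholders:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_neg_loglik(params, t, event, X):
    """Negative log-likelihood of a Weibull proportional-hazards model.
    t: follow-up times; event: 1 = death, 0 = censored; X: covariate matrix."""
    shape = np.exp(params[0])                 # shape k > 0 via log-parameterization
    beta = params[1:]
    lam = np.exp(X @ beta)                    # covariate-dependent scale
    log_h = np.log(lam) + np.log(shape) + (shape - 1) * np.log(t)  # log hazard
    H = lam * t ** shape                      # cumulative hazard
    return -(np.sum(event * log_h) - np.sum(H))

# Hypothetical fit: X columns might encode CD4 count, clinical stage,
# bodyweight, age and sex, as in the CD4 count model described above.
# res = minimize(weibull_neg_loglik, x0=np.zeros(1 + X.shape[1]),
#                args=(t, event, X), method="BFGS")
```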
Abstract:
Osteoarticular allograft transplantation is a popular treatment method in wide surgical resections with large defects, and for this reason hospitals are building bone data banks. Performing the optimal allograft selection from a bone bank is crucial to the surgical outcome and patient recovery; however, current approaches are very time consuming, hindering an efficient selection. We present an automatic method based on the registration of femur bones to overcome this limitation. We introduce a new regularization term for the log-domain demons algorithm. This term replaces the standard Gaussian smoothing with a femur-specific polyaffine model. The polyaffine femur model is constructed from two affine (femoral head and condyles) and one rigid (shaft) transformation. Our main contribution in this paper is to show that the demons algorithm can be improved in specific cases with an appropriate model. We are not trying to find the optimal polyaffine model of the femur, but the simplest model with a minimal number of parameters. There is no need to optimize over different numbers of regions, boundaries and choices of weights, since this fine tuning is done automatically by a final demons relaxation step with Gaussian smoothing. The newly developed approach provides a clear, anatomically motivated modeling contribution through the specific three-component transformation model, and shows a clear performance improvement (in terms of anatomically meaningful correspondences) on 146 CT images of femurs compared to a standard multiresolution demons. In addition, this simple model improves the robustness of the demons algorithm while preserving its accuracy. The ground truth consists of manual measurements performed by medical experts.
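The polyaffine idea can be sketched as a log-Euclidean fusion of region-wise transformations into one stationary velocity field, which is then exponentiated (e.g., by scaling and squaring) to yield the deformation. The following Python sketch is a generic illustration only: the Gaussian region weights, centers and the three femur regions are assumptions, and the paper's actual weighting scheme may differ:

```python
import numpy as np
from scipy.linalg import logm

def polyaffine_velocity(x, affines, centers, sigma):
    """Stationary velocity at points x from a polyaffine model:
    v(x) = sum_i w_i(x) * log(A_i) @ x_h, with normalized Gaussian
    weights around region centers (e.g., femoral head, condyles, shaft).
    x: (n, 3) points; affines: list of 4x4 homogeneous matrices."""
    xh = np.hstack([x, np.ones((len(x), 1))])        # homogeneous coordinates
    logs = [np.real(logm(A)) for A in affines]       # principal matrix logarithms
    w = np.stack([np.exp(-np.sum((x - c) ** 2, axis=1) / (2 * sigma ** 2))
                  for c in centers])                 # (k, n) Gaussian weights
    w /= w.sum(axis=0, keepdims=True)                # partition of unity
    v = sum(wi[:, None] * (xh @ L.T) for wi, L in zip(w, logs))
    return v[:, :3]                                  # drop homogeneous component
```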
Abstract:
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning.
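The shape metric used above, the ratio of smallest to largest principal moment of inertia, can be computed directly from a segmented tumor mask. A minimal sketch, assuming uniform voxel density (an illustration, not the paper's code):

```python
import numpy as np

def inertia_ratio(mask):
    """Ratio of smallest to largest principal moment of inertia of a
    3-D binary tumor mask (1 = tumor voxel), assuming uniform density."""
    pts = np.argwhere(mask).astype(float)
    pts -= pts.mean(axis=0)                  # center on the tumor centroid
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    I = np.array([                           # inertia tensor of unit point masses
        [(y**2 + z**2).sum(), -(x*y).sum(),        -(x*z).sum()],
        [-(x*y).sum(),        (x**2 + z**2).sum(), -(y*z).sum()],
        [-(x*z).sum(),        -(y*z).sum(),        (x**2 + y**2).sum()],
    ])
    moments = np.linalg.eigvalsh(I)          # principal moments, ascending order
    return moments[0] / moments[-1]          # 1 for a sphere, smaller when elongated
```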
Abstract:
This Letter presents a measurement of the W+W- production cross section in sqrt(s) = 7 TeV pp collisions by the ATLAS experiment, using 34 pb^{-1} of integrated luminosity produced by the Large Hadron Collider at CERN. Selecting events with two isolated leptons, each either an electron or a muon, 8 candidate events are observed with an expected background of 1.7 ± 0.6 events. The measured cross section is 41 +20/-16 (stat) ± 5 (syst) ± 1 (lumi) pb, which is consistent with the standard model prediction of 44 ± 3 pb calculated at next-to-leading order in QCD.
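For context, a counting measurement of this kind converts the observed yield into a cross section through the generic relation below (a standard formula, not quoted from the Letter), with ε the product of acceptance and efficiency and B the branching fraction of the selected final state:

```latex
\sigma \;=\; \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{\varepsilon \cdot \mathcal{B} \cdot \int L\,\mathrm{d}t}
```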
Abstract:
This paper presents a measurement of the W+W- production cross section in pp collisions at sqrt(s) = 7 TeV. The leptonic decay channels are analyzed using data corresponding to an integrated luminosity of 4.6 fb^{-1} collected with the ATLAS detector at the Large Hadron Collider. The W+W- production cross section sigma(pp -> W+W- + X) is measured to be 51.9 ± 2.0 (stat) ± 3.9 (syst) ± 2.0 (lumi) pb, compatible with the Standard Model prediction of 44.7 +2.1/-1.9 pb. A measurement of the normalized fiducial cross section as a function of the leading lepton transverse momentum is also presented. The reconstructed transverse momentum distribution of the leading lepton is used to extract limits on anomalous WWZ and WWγ couplings.