993 results for Two-loop-calculations, LEP, ILC


Relevance: 30.00%

Abstract:

Power calculations in a small-sample comparative study with a continuous outcome measure are typically undertaken using the asymptotic distribution of the test statistic. When the sample size is small, this asymptotic result can be a poor approximation. An alternative approach, based on a rank test statistic, is an exact power calculation. When the number of groups is greater than two, however, the number of calculations required for an exact power calculation is prohibitive. To reduce the computational burden, a Monte Carlo resampling procedure is used to approximate the exact power function of a k-sample rank test statistic under the family of Lehmann alternative hypotheses. The motivating example for this approach is the design of animal studies, where the number of animals per group is typically small.
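
As an illustration of the resampling idea described above, the following is a minimal sketch, assuming a Kruskal-Wallis statistic and a Uniform(0,1) baseline distribution (both illustrative choices, not taken from the paper). It exploits the fact that under Lehmann alternatives, F_g = F^theta_g, the power of a rank test does not depend on the baseline F.

```python
# Minimal sketch (not the authors' code) of approximating the power of a
# k-sample rank test under Lehmann alternatives by Monte Carlo simulation.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

def simulate_statistic(group_sizes, thetas):
    """Draw one data set under Lehmann alternatives and return the K-W statistic.

    With F taken as Uniform(0, 1), a draw from F**theta is U**(1/theta)
    for U ~ Uniform(0, 1).
    """
    samples = [rng.uniform(size=n) ** (1.0 / th) for n, th in zip(group_sizes, thetas)]
    stat, _ = kruskal(*samples)
    return stat

def monte_carlo_power(group_sizes, thetas, alpha=0.05, n_null=5000, n_alt=5000):
    # Small-sample critical value estimated under the null (all thetas equal),
    # instead of relying on the asymptotic chi-square approximation.
    null_stats = np.array([simulate_statistic(group_sizes, [1.0] * len(group_sizes))
                           for _ in range(n_null)])
    crit = np.quantile(null_stats, 1.0 - alpha)
    alt_stats = np.array([simulate_statistic(group_sizes, thetas)
                          for _ in range(n_alt)])
    return np.mean(alt_stats > crit)

# Example: three groups of 5 animals each, one group stochastically larger.
print(monte_carlo_power(group_sizes=[5, 5, 5], thetas=[1.0, 1.0, 3.0]))
```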

Relevance: 30.00%

Abstract:

Outcome-dependent, two-phase sampling designs can dramatically reduce the costs of observational studies by judicious selection of the most informative subjects for purposes of detailed covariate measurement. Here we derive asymptotic information bounds and the form of the efficient score and influence functions for the semiparametric regression models studied by Lawless, Kalbfleisch, and Wild (1999) under two-phase sampling designs. We show that the maximum likelihood estimators for both the parametric and nonparametric parts of the model are asymptotically normal and efficient. The efficient influence function for the parametric part agrees with the more general information bound calculations of Robins, Hsieh, and Newey (1995). By verifying the conditions of Murphy and Van der Vaart (2000) for a least favorable parametric submodel, we provide asymptotic justification for statistical inference based on profile likelihood.

Relevance: 30.00%

Abstract:

The purpose of this work was to study and quantify the differences in dose distributions computed with some of the newest dose calculation algorithms available in commercial planning systems. The study was done for clinical cases originally calculated with pencil beam convolution (PBC) in which large density inhomogeneities were present. Three other dose algorithms were used: a pencil-beam-like algorithm, the anisotropic analytical algorithm (AAA); a convolution/superposition algorithm, the collapsed cone convolution (CCC); and a Monte Carlo program, voxel Monte Carlo (VMC++). The dose calculation algorithms were compared under static-field irradiations at 6 MV and 15 MV using multileaf collimators and hard wedges where necessary. Five clinical cases were studied: three lung and two breast cases. We found that, in terms of accuracy, the CCC algorithm agreed better overall with VMC++ than AAA did, but AAA remains an attractive option for routine use in the clinic because of its short computation times. Dose differences between the algorithms and VMC++ for the median dose of the planning target volume (PTV) were typically 0.4% (range: 0.0 to 1.4%) in the lung and -1.3% (range: -2.1 to -0.6%) in the breast for the few cases we analysed. As expected, PTV coverage and dose homogeneity were more critical in the lung cases than in the breast cases with respect to the accuracy of the dose calculation, as observed in the dose-volume histograms obtained from the Monte Carlo simulations.
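
The comparison metric quoted above (percent difference of the PTV median dose relative to the Monte Carlo reference) can be sketched as follows; the arrays and the 0.4% offset are hypothetical, not the study's data.

```python
# Minimal sketch (not the study's code) of the percent difference of the PTV
# median dose between an algorithm and the VMC++ reference.
import numpy as np

def ptv_median_difference(dose_algo, dose_vmc):
    """dose_algo, dose_vmc: 1-D arrays of dose values in the PTV voxels (Gy)."""
    return 100.0 * (np.median(dose_algo) - np.median(dose_vmc)) / np.median(dose_vmc)

# Illustrative use with synthetic dose arrays:
rng = np.random.default_rng(1)
dose_vmc = rng.normal(60.0, 1.5, size=10000)   # hypothetical PTV doses from VMC++
dose_aaa = dose_vmc * 1.004                    # hypothetical 0.4% systematic offset
print(f"{ptv_median_difference(dose_aaa, dose_vmc):+.1f}%")
```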

Relevance: 30.00%

Abstract:

A push to reduce dependency on foreign energy and increase the use of renewable energy has many gas stations pumping ethanol-blended fuels. Recreational engines typically have less complex fuel management systems than their automotive counterparts, which prevents the engine from adapting to different ethanol concentrations. Using ethanol-blended fuels in recreational engines therefore raises several consumer concerns, since both engine performance and emissions are affected. This research focused on assessing the impact of E22 on two-stroke and four-stroke snowmobiles. Three snowmobiles were used for this study: a 2009 Arctic Cat Z1 Turbo with a closed-loop fuel injection system, a 2009 Yamaha Apex with an open-loop fuel injection system, and a 2010 Polaris Rush with an open-loop fuel injection system. A five-mode emissions test was conducted on each snowmobile with E0 and with E22 to determine the impact of the E22 fuel. All of the snowmobiles were left in stock form to assess the effect of E22 on snowmobiles currently on the trail. Brake-specific emissions of the snowmobiles running on E22 were compared to those on E0. Engine parameters such as exhaust gas temperature, fuel flow, and relative air-to-fuel ratio (λ) were also compared on all three snowmobiles. Combustion data were collected on the Polaris Rush using an AVL combustion analysis system to compare in-cylinder pressures, combustion duration, and the location of 50% mass fraction burned. E22 decreased total hydrocarbons and carbon monoxide and increased carbon dioxide for all of the snowmobiles. Peak power increased for the closed-loop fuel-injected Arctic Cat. A smaller increase in peak power was observed for the Polaris owing to the partial ability of its fuel management system to adapt to ethanol. A decrease in peak power was observed for the open-loop fuel-injected Yamaha.
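
For readers unfamiliar with mode-weighted brake-specific emissions, the sketch below shows the general form of the calculation: the weighted pollutant mass flow divided by the weighted brake power. The mode weights and flow values are placeholders, not the actual five-mode weighting factors or measurements from this study.

```python
# Minimal sketch (assumptions, not the study's code) of mode-weighted
# brake-specific emissions from a multi-mode test.
import numpy as np

def brake_specific_emission(mass_flow_g_per_h, power_kw, weights):
    """Return brake-specific emissions in g/kWh for one pollutant.

    mass_flow_g_per_h: pollutant mass flow per mode (g/h)
    power_kw:          brake power per mode (kW)
    weights:           dimensionless mode weighting factors
    """
    m = np.asarray(mass_flow_g_per_h)
    p = np.asarray(power_kw)
    w = np.asarray(weights)
    return np.sum(w * m) / np.sum(w * p)

# Hypothetical five-mode example for hydrocarbons:
hc_flow = [900.0, 700.0, 450.0, 250.0, 120.0]   # g/h per mode (illustrative)
power   = [95.0, 65.0, 35.0, 15.0, 2.0]         # kW per mode (illustrative)
weights = [0.12, 0.27, 0.25, 0.31, 0.05]        # placeholder mode weights
print(f"BSHC = {brake_specific_emission(hc_flow, power, weights):.1f} g/kWh")
```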

Relevance: 30.00%

Abstract:

The purpose of this study is to explore a Kalman filter approach to estimating the swing of crane-suspended loads. Measuring real-time swing is needed to implement swing-damping control strategies, in which crane joints are used to remove energy from a swinging load. The typical solution to measuring swing uses an inertial sensor attached to the hook block. Measured hook block twist is used to resolve the other two sensed body rates into tangential and radial swing. Uncertainty in the twist measurement leads to inaccurate tangential and radial swing calculations and ineffective swing damping. A typical mitigation approach is to bandpass-filter the inertial sensor readings to remove low-frequency drift and high-frequency noise. The center frequency of the bandpass filter is usually designed to track the load length, and the passband width is set to trade off performance against damping-loop gain. The Kalman filter approach developed here allows all swing motions (radial, tangential and twist) to be measured without the use of a bandpass filter, providing an alternative solution for swing-damping control implementation. After developing a Kalman filter solution for a two-dimensional swing scenario, the three-dimensional system is considered, where simplifying assumptions suggested by the two-dimensional study are exploited. One of the interesting aspects of the three-dimensional study is the hook block twist model: unlike the mass-independence of a pendulum's natural frequency, the twist natural frequency depends both on the pendulum length and on the load's mass distribution. The linear Kalman filter is applied to experimental data, demonstrating the ability to extract the individual swing components for complex motions. It should be noted that the three-dimensional simplifying assumptions preclude the ability to measure two "secondary" hook block rotations. The ability to segregate these motions from the primary swing degrees of freedom was illustrated in the two-dimensional study and could be included in the three-dimensional solution if they were found to be important for a particular application.
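
A minimal sketch of the kind of linear Kalman filter described above, for a single planar pendulum mode with a rate-gyro measurement. The cable length, noise covariances and synthetic data are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch (not the thesis code) of a linear Kalman filter for one planar
# swing mode: state x = [swing angle, swing rate], small-angle dynamics
# theta_ddot = -(g/L)*theta, measured by a noisy rate gyro.
import numpy as np

g, L, dt = 9.81, 10.0, 0.01          # gravity, cable length (m), time step (s)
wn2 = g / L

# Discretised (Euler) state-space model: x_k+1 = A x_k + w,  z_k = H x_k + v
A = np.array([[1.0, dt],
              [-wn2 * dt, 1.0]])
H = np.array([[0.0, 1.0]])           # the gyro senses the swing rate only
Q = np.diag([1e-7, 1e-6])            # process noise covariance (assumed)
R = np.array([[1e-4]])               # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    # Predict
    x = A @ x
    P = A @ P @ A.T + Q
    # Update with the gyro measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Run on synthetic data: a 2-degree swing with noisy rate measurements.
rng = np.random.default_rng(2)
t = np.arange(0.0, 20.0, dt)
amp, wn = np.deg2rad(2.0), np.sqrt(wn2)
rate_meas = -amp * wn * np.sin(wn * t) + rng.normal(0.0, 0.01, size=t.size)

x, P = np.zeros(2), np.eye(2)
for z in rate_meas:
    x, P = kalman_step(x, P, np.array([z]))
print("estimated swing angle at end (deg):", np.rad2deg(x[0]))
```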

Relevance: 30.00%

Abstract:

The U.S. Renewable Fuel Standard mandates that, by 2022, 36 billion gallons of renewable fuels must be produced each year. Ethanol production is capped at 15 billion gallons, meaning 21 billion gallons must come from other alternative fuel sources. A viable alternative for meeting the remainder of this mandate is iso-butanol. Unlike ethanol, iso-butanol does not phase separate when mixed with water, so it can be transported using traditional pipeline methods. Iso-butanol also has a lower oxygen content by mass, meaning it can displace more petroleum while maintaining the same oxygen concentration in the fuel blend. This research focused on studying the effects of low-level alcohol fuels on marine engine emissions to assess the possibility of using iso-butanol as a replacement for ethanol. Three marine engines were used in this study, representing a wide range of what is currently in service in the United States. Boats powered by two four-stroke engines and one two-stroke engine were tested in the tributaries of the Chesapeake Bay, near Annapolis, Maryland, over two rounds of week-long testing in May and September. The engines were tested using a standard test cycle, and emissions were sampled using constant volume sampling techniques. Specific emissions for the two-stroke and four-stroke engines were compared to the baseline indolene tests. Because of the nature of the field testing, only limited engine parameters were recorded; aside from emissions, the parameters analyzed were the operating relative air-to-fuel ratio and engine speed. Emissions trends from the baseline test to each alcohol fuel were consistent for the four-stroke engines when analyzing a single round of testing. The same trends were not consistent when comparing separate rounds, because of uncontrolled weather conditions and because the four-stroke engines operate without fuel control feedback at full load. Emissions trends from the baseline test to each alcohol fuel were consistent for the two-stroke engine across all rounds of testing, because that engine operates open-loop and does not compensate its fueling when the fuel composition changes. Changes in emissions with respect to the baseline for iso-butanol were consistent with the changes for ethanol. It was determined that iso-butanol would be a viable replacement for ethanol.
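
The enleanment effect that drives many of these emissions shifts on open-loop engines can be sketched as below: with fixed fueling, lambda rises because the oxygenated blend has a lower stoichiometric air-to-fuel ratio. The stoichiometric AFR values are nominal textbook figures, and the blend fraction is illustrative, not from the study.

```python
# Minimal sketch (illustrative, not the study's data reduction) of the relative
# air-to-fuel ratio lambda = measured AFR / stoichiometric AFR of the blend.
AFR_STOICH = {"gasoline": 14.6, "ethanol": 9.0, "iso-butanol": 11.2}  # nominal values

def blend_stoich_afr(alcohol, alcohol_mass_fraction):
    """Mass-weighted stoichiometric AFR of a gasoline/alcohol blend."""
    return (1.0 - alcohol_mass_fraction) * AFR_STOICH["gasoline"] \
        + alcohol_mass_fraction * AFR_STOICH[alcohol]

def lambda_value(measured_afr, alcohol, alcohol_mass_fraction):
    return measured_afr / blend_stoich_afr(alcohol, alcohol_mass_fraction)

# Example: an open-loop engine calibrated for lambda = 1.0 on gasoline
# (AFR = 14.6) keeps the same fueling on a 16%-by-mass iso-butanol blend:
print(lambda_value(14.6, "iso-butanol", 0.16))   # > 1.0, i.e. enleanment
```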

Relevance: 30.00%

Abstract:

Objective: Whether or not a protective stoma reduces the rate of anastomotic leakage after distal colorectal anastomosis is still a matter of controversy. It does, however, facilitate clinical management once leakage has occurred. Loop ileostomies seem to be associated with lower morbidity and a better quality of life than loop colostomies. Generally, diverting loop ileostomies are secured at skin level by means of a supporting device in order to prevent retraction of the ileostomy into the abdomen. However, because of the supporting rod, it may be difficult to apply a stoma bag correctly, and faeces may leak onto the skin even with correct eversion of the afferent limb. Our aim was to compare morbidity and time to self-sufficient stoma care in patients with a loop ileostomy with a rod to those without a rod. Methods: A total of 60 patients requiring a loop ileostomy were analyzed. Patients underwent surgery in one of the two participating institutions according to in-house standard procedures: 30 patients had an ileostomy with a rod (VCHK Inselspital) and a further 30 without a rod (KSW Winterthur). Morbidity and time to self-sufficiency regarding stoma care were analyzed during the first 90 postoperative days. Morbidity was determined according to a scoring system ranging from 0 to 4 points for a given set of possible complications (bleeding, necrosis, skin irritation, abscess, stenosis, retraction, fistula, prolapse, parastomal hernia, incomplete diversion), where 0 = no complication and 4 = severe complication. Continuous variables were expressed as median (95% confidence interval). For comparisons between the groups the Mann-Whitney U test was used; for categorical variables the chi-square test was applied. Results: There were no significant differences in length of hospital stay or time to self-sufficient stoma care between the groups. Although not significant, patients with a rod ileostomy showed a tendency towards more stoma-related complications as well as more stoma-related reoperations. The number of patients reaching total self-sufficiency regarding stoma care was higher after rodless ileostomy. Conclusion: According to our data, rodless ileostomies seem to fare just as well as those with a supporting rod, with equal morbidity rates and more patients reaching self-sufficient stoma care. Therefore, the routine use of a rod for diverting loop ileostomy seems unnecessary.
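
The two group comparisons named in the Methods can be sketched as follows; the data below are synthetic stand-ins, not the study's records.

```python
# Minimal sketch (synthetic data) of a Mann-Whitney U test for a continuous/ordinal
# outcome between the rod and rodless groups, and a chi-square test for a
# categorical outcome.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(3)

# Hypothetical days to self-sufficient stoma care, 30 patients per group.
days_rod = rng.poisson(14, size=30)
days_rodless = rng.poisson(12, size=30)
u_stat, p_mwu = mannwhitneyu(days_rod, days_rodless, alternative="two-sided")

# Hypothetical 2x2 table: patients reaching total self-sufficiency vs. not.
table = np.array([[19, 11],     # rod:     reached / not reached
                  [25, 5]])     # rodless: reached / not reached
chi2, p_chi2, dof, _ = chi2_contingency(table)

print(f"Mann-Whitney U p = {p_mwu:.3f}, chi-square p = {p_chi2:.3f}")
```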

Relevance: 30.00%

Abstract:

More than eighteen percent of the world's population lives without reliable access to clean water, forced to walk long distances to collect small amounts of contaminated surface water. Carrying heavy loads of water long distances and ingesting contaminated water can lead to long-term health problems and even death. These problems affect the most vulnerable populations, women, children, and the elderly, more than anyone else. Water access is one of the most pressing issues in development today. Boajibu, a small village in Sierra Leone where the author served in the Peace Corps for two years, lacks access to clean water. Construction of a water distribution system was halted when a civil war broke out in 1992 and has not been resumed since. The community currently relies on hand-dug and borehole wells that can become dirty during the dry season, which forces people to drink contaminated water or to travel far to collect clean water. This report is intended to provide a design for the system as it was meant to be built. The water system design was completed based on the taps present, interviews with local community leaders, local surveying, and points taken with a GPS. The design is a gravity-fed branched water system supplied by a natural spring on a hill adjacent to Boajibu; the flow rate of the spring, however, is unknown. There has to be enough flow from the spring over a 24-hour period to meet the daily demands of the users, which is called providing continuous flow. If the spring yields less than this amount, the system must provide intermittent flow, that is, flow restricted to a few hours a day. A minimum flow rate of 2.1 liters per second was found to be necessary to provide continuous flow to the users of Boajibu; if this flow is not met, intermittent flow can be provided instead. To aid construction of the distribution system in the absence of someone with formal engineering training, a table was created detailing water storage tank sizing as a function of possible source flow rates. A builder can interpolate in the table, using the measured source flow rate, to obtain the tank size. However, any flow rate below 2.1 liters per second cannot be used in the table; in that case, the builder should size the tank so that it can take in the water supplied overnight, since all of the water will be drained during the day because users demand more than the spring can supply. In the developing world there is often difficulty collecting enough money to fund large infrastructure projects such as a water distribution system, and frequently there is only enough money to add one or two loops to the system, so it is helpful to know where those loops can be placed most effectively (see the sketch following this abstract). Various possible loops were designated for the Boajibu water distribution system and the Adaptive Greedy Heuristic Loop Addition Selection Algorithm (AGHLASA) was used to rank the effectiveness of the candidate loops. Loop 1, which was furthest upstream, was selected because it benefited the most people for the least cost, while loops further downstream were found to be less effective because they would benefit fewer people. Further studies should be conducted on the water use habits of the people of Boajibu to more accurately predict the demands that will be placed on the system. Further population surveying should also be conducted to predict population change over time, so that appropriate capacity can be built into the system to accommodate future growth. The flow at the spring should be measured using a V-notch weir and the system adjusted accordingly. Future studies could also adjust the loop-ranking method so that two users who may use the water system for different lengths of time are not counted the same, and so that vulnerable users are weighted more heavily than more robust users.
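
The general greedy idea behind ranking candidate loops by benefit per cost under a budget can be sketched as below. This is a generic greedy heuristic, not the AGHLASA algorithm itself, and the loop data are hypothetical.

```python
# Minimal sketch of a greedy benefit-per-cost ranking of candidate loops.
def rank_loops(candidates, budget):
    """candidates: list of dicts with 'name', 'people_benefited', 'cost'.
    Returns the loops chosen greedily by people benefited per unit cost."""
    ranked = sorted(candidates, key=lambda c: c["people_benefited"] / c["cost"],
                    reverse=True)
    chosen, spent = [], 0.0
    for loop in ranked:
        if spent + loop["cost"] <= budget:
            chosen.append(loop["name"])
            spent += loop["cost"]
    return chosen

loops = [
    {"name": "Loop 1 (upstream)",   "people_benefited": 420, "cost": 1800.0},
    {"name": "Loop 2",              "people_benefited": 260, "cost": 1500.0},
    {"name": "Loop 3 (downstream)", "people_benefited": 120, "cost": 1400.0},
]
print(rank_loops(loops, budget=3500.0))
```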

Relevance: 30.00%

Abstract:

In this article, we perform an extensive study of flavor observables in a two-Higgs-doublet model with generic Yukawa structure (of type III). This model is interesting not only because it is the decoupling limit of the minimal supersymmetric standard model, but also because of its rich flavor phenomenology, which allows for sizable effects not only in flavor-changing neutral-current (FCNC) processes but also in tauonic B decays. We examine the possible effects in flavor physics and constrain the model both from tree-level processes and from loop observables. The free parameters of the model are the heavy Higgs mass, $\tan\beta$ (the ratio of vacuum expectation values) and the "nonholomorphic" Yukawa couplings $\epsilon^f_{ij}$ ($f = u, d, \ell$). In our analysis we constrain the elements $\epsilon^f_{ij}$ in various ways: In a first step we give order-of-magnitude constraints on $\epsilon^f_{ij}$ from 't Hooft's naturalness criterion, finding that all $\epsilon^f_{ij}$ must be rather small unless the third generation is involved. In a second step, we constrain the Yukawa structure of the type-III two-Higgs-doublet model from tree-level FCNC processes ($B_{s,d}\to\mu^+\mu^-$, $K_L\to\mu^+\mu^-$, $\bar{D}^0\to\mu^+\mu^-$, $\Delta F = 2$ processes, $\tau^-\to\mu^-\mu^+\mu^-$, $\tau^-\to e^-\mu^+\mu^-$ and $\mu^-\to e^-e^+e^-$) and observe that all flavor off-diagonal elements of these couplings, except $\epsilon^u_{32,31}$ and $\epsilon^u_{23,13}$, must be very small in order to satisfy the current experimental bounds. In a third step, we consider Higgs-mediated loop contributions to FCNC processes [$b\to s(d)\gamma$, $B_{s,d}$ mixing, $K$-$\bar{K}$ mixing and $\mu\to e\gamma$], finding that $\epsilon^u_{13}$ and $\epsilon^u_{23}$ must also be very small, while the bounds on $\epsilon^u_{31}$ and $\epsilon^u_{32}$ are especially weak. Furthermore, considering the constraints from electric dipole moments we obtain constraints on some parameters $\epsilon^{u,\ell}_{ij}$. Taking into account the constraints from FCNC processes, we study the size of possible effects in the tauonic B decays ($B\to\tau\nu$, $B\to D\tau\nu$ and $B\to D^{*}\tau\nu$) as well as in $D_{(s)}\to\tau\nu$, $D_{(s)}\to\mu\nu$, $K(\pi)\to e\nu$, $K(\pi)\to\mu\nu$ and $\tau\to K(\pi)\nu$, which are all sensitive to tree-level charged-Higgs exchange. Interestingly, the unconstrained $\epsilon^u_{32,31}$ are just the elements which directly enter the branching ratios for $B\to\tau\nu$, $B\to D\tau\nu$ and $B\to D^{*}\tau\nu$. We show that they can explain the deviations from the SM predictions in these processes without fine-tuning. Furthermore, $B\to\tau\nu$, $B\to D\tau\nu$ and $B\to D^{*}\tau\nu$ can even be explained simultaneously. Finally, we give upper limits on the branching ratios of the lepton-flavor-violating neutral B meson decays ($B_{s,d}\to\mu e$, $B_{s,d}\to\tau e$ and $B_{s,d}\to\tau\mu$) and correlate the radiative lepton decays ($\tau\to\mu\gamma$, $\tau\to e\gamma$ and $\mu\to e\gamma$) to the corresponding neutral-current lepton decays ($\tau^-\to\mu^-\mu^+\mu^-$, $\tau^-\to e^-\mu^+\mu^-$ and $\mu^-\to e^-e^+e^-$). A detailed Appendix contains all relevant information for the considered processes for general scalar-fermion-fermion couplings.

Relevance: 30.00%

Abstract:

The extraction of the finite-temperature heavy quark potential from lattice QCD relies on a spectral analysis of the Wilson loop. General arguments tell us that the lowest-lying spectral peak encodes, through its position and shape, the real and imaginary parts of this complex potential. Here we benchmark this extraction strategy using leading-order hard thermal loop (HTL) calculations. In other words, we analytically calculate the Wilson loop and determine the corresponding spectrum. By fitting its lowest-lying peak we obtain the real and imaginary parts and confirm that knowledge of the lowest peak alone is sufficient for obtaining the potential. Access to the full spectrum allows an investigation of spectral features that do not contribute to the potential but can pose a challenge to numerical attempts at analytic continuation from imaginary-time data. Differences in these contributions between the Wilson loop and gauge-fixed Wilson line correlators are discussed. To better understand the difficulties in a numerical extraction, we deploy the maximum entropy method with an extended search space on HTL correlators in Euclidean time and observe how well the known spectral function and the values of the real and imaginary parts are reproduced. Possible avenues for improvement of the extraction strategy are discussed.
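
The peak-fit step described above can be sketched as follows: fit the lowest-lying spectral peak with a Lorentzian whose position gives Re V and whose half-width gives Im V. The true HTL peak shape is skewed, so a plain Lorentzian and the synthetic spectrum below are only illustrative stand-ins, not the paper's analysis.

```python
# Minimal sketch (not the paper's code) of extracting Re V and Im V from the
# lowest-lying peak of a spectral function by a Lorentzian fit.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(omega, amp, re_v, im_v):
    return amp * im_v / ((omega - re_v) ** 2 + im_v ** 2)

# Synthetic "spectrum": a peak at omega0 with width gamma plus a smooth background.
rng = np.random.default_rng(4)
omega = np.linspace(-1.0, 1.0, 400)
omega0, gamma = 0.12, 0.05
rho = lorentzian(omega, 1.0, omega0, gamma) + 0.05 * omega**2 \
      + rng.normal(0.0, 0.002, omega.size)

# Fit only a window around the lowest peak, as the extraction strategy prescribes.
window = (omega > omega0 - 0.2) & (omega < omega0 + 0.2)
popt, _ = curve_fit(lorentzian, omega[window], rho[window], p0=[1.0, 0.1, 0.1])
print(f"Re V ~ {popt[1]:.3f}, Im V ~ {popt[2]:.3f}")
```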

Relevance: 30.00%

Abstract:

BACKGROUND Contagious bovine pleuropneumonia (CBPP) is the most important chronic pulmonary disease of cattle on the African continent, causing severe economic losses. The disease, caused by infection with Mycoplasma mycoides subsp. mycoides, is transmitted by animal contact and develops slowly into a chronic form, preventing an early clinical diagnosis. Because available vaccines confer a low protection rate and short-lived immunity, the rapid diagnosis of infected animals combined with traditional curbing measures is seen as the best way to control the disease. While traditional labour-intensive bacteriological methods for the detection of M. mycoides subsp. mycoides have been replaced by molecular genetic techniques in the last two decades, these latter approaches require well-equipped laboratories and specialized personnel for the diagnosis. This is a handicap in areas where CBPP is endemic and early diagnosis is essential. RESULTS We present a rapid, sensitive and specific diagnostic tool for M. mycoides subsp. mycoides detection based on loop-mediated isothermal amplification (LAMP) that is applicable to field conditions. The primer set developed is highly specific and sensitive enough to diagnose clinical cases without prior cultivation of the organism. The LAMP assay detects M. mycoides subsp. mycoides DNA directly from crude samples of pulmonary/pleural fluids and serum/plasma within an hour using a simple dilution protocol. Photometric detection of LAMP products allows real-time visualisation of the amplification curve, and a melting curve/re-association analysis provides a means of quality assurance based on the predetermined strand-inherent temperature profile supporting the diagnosis. CONCLUSION The CBPP LAMP assay, developed in a robust kit format, can be run on a battery-driven mobile device to rapidly detect M. mycoides subsp. mycoides infections from clinical or post mortem samples. The stringent innate quality control allows a conclusive on-site diagnosis of CBPP, for example during farm or slaughterhouse inspections.

Relevance: 30.00%

Abstract:

The MDAH pencil-beam algorithm developed by Hogstrom et al. (1981) has been widely used in clinics for electron beam dose calculations in radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for the dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The source of the latter inaccuracy is believed to be primarily the assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was then developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport, redefining the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions have been made for a homogeneous water phantom and for phantoms with deep inhomogeneities. From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site-specific treatment planning problems.
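
The basic pencil-beam superposition idea underlying these algorithms can be sketched as follows: the dose at a point is a sum over pencil beams of a central-axis depth-dose term times a Gaussian lateral-spread kernel whose width grows with depth. The depth-dose and spread parameterizations below are illustrative assumptions, not the MDAH or redefinition model.

```python
# Minimal sketch (illustrative only) of Gaussian pencil-beam superposition in 2-D.
import numpy as np

def sigma(z):
    """Assumed lateral spread (cm) of a pencil beam at depth z (cm)."""
    return 0.1 + 0.05 * z ** 1.5

def depth_dose(z):
    """Assumed normalised central-axis electron depth dose, cut off near 6 cm."""
    return np.clip(1.0 + 0.1 * z - 0.04 * z ** 2, 0.0, None) * (z < 6.0)

def dose(x, z, pencil_positions, spacing):
    """Superpose Gaussian pencils centred at pencil_positions (cm) at points (x, z)."""
    s = sigma(z)
    kernels = np.exp(-(x - pencil_positions[:, None]) ** 2 / (2.0 * s ** 2)) \
              / (np.sqrt(2.0 * np.pi) * s)
    # spacing acts as the quadrature weight of each pencil across the field
    return depth_dose(z) * spacing * kernels.sum(axis=0)

spacing = 0.1
pencils = np.arange(-2.0, 2.0 + spacing, spacing)   # a 4 cm wide field
x = np.linspace(-4.0, 4.0, 161)
profile = dose(x, 3.0, pencils, spacing)            # lateral profile at 3 cm depth
print(f"value on the central axis: {profile[x.size // 2]:.2f}")
```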

Relevance: 30.00%

Abstract:

A non-parametric method was developed and tested to compare the partial areas under two correlated receiver operating characteristic (ROC) curves. Based on the theory of generalized U-statistics, mathematical formulas were derived for computing the ROC area and the variance and covariance between the portions of two ROC curves. A practical SAS application was also developed to facilitate the calculations. The accuracy of the non-parametric method was evaluated by comparing it to other methods. Applying our method to the data from a published ROC analysis of CT images gave results very close to the published ones. A hypothetical example was used to demonstrate the effect of two crossed ROC curves: the two total ROC areas are the same, yet the corresponding portions of the area under the two curves were found to be significantly different by the partial ROC curve analysis. For ROC curves computed on a large scale, such as from a logistic regression model, we applied our method to a breast cancer study with Medicare claims data; it yielded the same ROC area as the SAS LOGISTIC procedure. Our method also provides an alternative to the global summary of ROC area comparison by directly comparing the true-positive rates of two regression models and by determining the range of false-positive values where the models differ.
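
The quantities involved can be sketched as follows: the full ROC area as a two-sample U-statistic and a partial area restricted to a false-positive-rate range. This is not the SAS application described above; the scores are synthetic and the FPR range is an arbitrary example.

```python
# Minimal sketch of the ROC area as a Mann-Whitney U-statistic and of a partial
# area under the empirical ROC curve over a restricted false-positive-rate range.
import numpy as np

def roc_auc_u_statistic(scores_pos, scores_neg):
    """U-statistic estimate of the ROC area (ties counted as 1/2)."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (len(scores_pos) * len(scores_neg))

def partial_auc(scores_pos, scores_neg, fpr_lo=0.0, fpr_hi=0.2):
    """Area under the empirical ROC curve between two false-positive rates."""
    thresholds = np.unique(np.concatenate([scores_pos, scores_neg]))[::-1]
    tpr = np.array([0.0] + [(scores_pos >= t).mean() for t in thresholds] + [1.0])
    fpr = np.array([0.0] + [(scores_neg >= t).mean() for t in thresholds] + [1.0])
    grid = np.linspace(fpr_lo, fpr_hi, 201)
    vals = np.interp(grid, fpr, tpr)
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))  # trapezoid rule

rng = np.random.default_rng(5)
pos = rng.normal(1.0, 1.0, 100)    # hypothetical scores for diseased subjects
neg = rng.normal(0.0, 1.0, 100)    # hypothetical scores for non-diseased subjects
print(roc_auc_u_statistic(pos, neg), partial_auc(pos, neg, 0.0, 0.2))
```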

Relevance: 30.00%

Abstract:

PURPOSE Therapeutic drug monitoring of patients receiving once-daily aminoglycoside therapy can be performed using pharmacokinetic (PK) formulas or Bayesian calculations. While these methods produced comparable results, their performance has never been checked against full PK profiles. We performed a PK study in order to compare both methods and to determine the best time-points for estimating AUC0-24 and peak concentrations (Cmax). METHODS We obtained full PK profiles in 14 patients receiving once-daily aminoglycoside therapy. PK parameters were calculated with PKSolver using non-compartmental methods. The calculated PK parameters were then compared with parameters estimated using an algorithm based on two serum concentrations (two-point method) or the software TCIWorks (Bayesian method). RESULTS For tobramycin and gentamicin, AUC0-24 and Cmax could be reliably estimated using a first serum concentration obtained at 1 h and a second one between 8 and 10 h after the start of the infusion. The two-point and the Bayesian method produced similar results. For amikacin, AUC0-24 could be reliably estimated by both methods, but Cmax was underestimated by 10-20% by the two-point method and by up to 30%, with a large variation, by the Bayesian method. CONCLUSIONS The ideal time-points for therapeutic drug monitoring of once-daily administered aminoglycosides are 1 h after the start of a 30-min infusion for the first time-point and 8-10 h after the start of the infusion for the second time-point. The duration of the infusion and accurate recording of the time-points of blood drawing are essential for obtaining precise predictions.
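
A textbook-style two-point calculation is sketched below, assuming one-compartment, first-order elimination; it is not necessarily the algorithm evaluated in the study, and the concentrations used in the example are invented.

```python
# Minimal sketch of a two-point estimate of Cmax and AUC0-24 from two
# post-infusion aminoglycoside levels.
import math

def two_point_pk(c1, t1, c2, t2, t_inf=0.5, tau=24.0):
    """c1, c2: concentrations (mg/L) drawn at times t1 < t2 (h after infusion start);
    t_inf: infusion duration (h); tau: dosing interval (h)."""
    ke = math.log(c1 / c2) / (t2 - t1)               # elimination rate constant (1/h)
    cmax = c1 * math.exp(ke * (t1 - t_inf))          # back-extrapolated to end of infusion
    # Post-infusion area from exponential decay; infusion-phase area approximated
    # as a triangle (an assumption of this sketch).
    auc_post = (cmax / ke) * (1.0 - math.exp(-ke * (tau - t_inf)))
    auc_inf = 0.5 * cmax * t_inf
    return ke, cmax, auc_inf + auc_post

# Example with the recommended time-points: levels drawn at 1 h and 8 h.
ke, cmax, auc = two_point_pk(c1=18.0, t1=1.0, c2=2.4, t2=8.0)
print(f"ke = {ke:.2f} /h, Cmax = {cmax:.1f} mg/L, AUC0-24 = {auc:.0f} mg*h/L")
```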

Relevance: 30.00%

Abstract:

We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering both in optical and infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
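
As a small numerical companion to the scattering discussion, the sketch below evaluates a commonly quoted two-stream/similarity result for the albedo of a semi-infinite homogeneous layer; it is not necessarily one of the generalized closure-dependent expressions derived in the paper.

```python
# Minimal sketch of a standard two-stream estimate of the albedo of a
# semi-infinite scattering atmosphere as a function of the single-scattering
# albedo omega0 and the asymmetry parameter g.
import numpy as np

def semi_infinite_albedo(omega0, g=0.0):
    """omega0: single-scattering albedo (0 = pure absorption, 1 = conservative
    scattering); g: asymmetry parameter (-1 back-scattering, 0 isotropic, 1 forward)."""
    gamma = np.sqrt((1.0 - omega0) / (1.0 - omega0 * g))
    return (1.0 - gamma) / (1.0 + gamma)

# Limiting cases: conservative scattering reflects everything, pure absorption
# nothing, and forward scattering darkens the atmosphere at fixed omega0.
for w0, g in [(1.0, 0.0), (0.0, 0.0), (0.9, 0.0), (0.9, 0.8)]:
    print(w0, g, round(float(semi_infinite_albedo(w0, g)), 3))
```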