Abstract:
Background Physical activity in children with intellectual disabilities is a neglected area of study, which is most apparent in relation to physical activity measurement research. Although objective measures, specifically accelerometers, are widely used in research involving children with intellectual disabilities, existing research is based on measurement methods and data interpretation techniques generalised from typically developing children. However, due to physiological and biomechanical differences between these populations, questions have been raised in the existing literature on the validity of generalising data interpretation techniques from typically developing children to children with intellectual disabilities. Therefore, there is a need to conduct population-specific measurement research for children with intellectual disabilities and develop valid methods to interpret accelerometer data, which will increase our understanding of physical activity in this population. Methods Study 1: A systematic review was initially conducted to increase the knowledge base on how accelerometers were used within existing physical activity research involving children with intellectual disabilities and to identify important areas for future research. A systematic search strategy was used to identify relevant articles which used accelerometry-based monitors to quantify activity levels in ambulatory children with intellectual disabilities. Based on best practice guidelines, a novel form was developed to extract data based on 17 research components of accelerometer use. Accelerometer use in relation to best practice guidelines was calculated using percentage scores on a study-by-study and component-by-component basis. Study 2: To investigate the effect of data interpretation methods on the estimation of physical activity intensity in children with intellectual disabilities, a secondary data analysis was conducted. 
Nine existing sets of child-specific ActiGraph intensity cut points were applied to accelerometer data collected from 10 children with intellectual disabilities during an activity session. Four one-way repeated measures ANOVAs were used to examine differences in estimated time spent in sedentary, moderate, vigorous, and moderate to vigorous intensity activity. Post-hoc pairwise comparisons with Bonferroni adjustments were additionally used to identify where significant differences occurred. Study 3: The feasibility of a laboratory-based calibration protocol developed for typically developing children was investigated in children with intellectual disabilities. Specifically, the feasibility of activities, measurements, and recruitment was investigated. Five children with intellectual disabilities and five typically developing children participated in 14 treadmill-based and free-living activities. In addition, resting energy expenditure was measured and a treadmill-based graded exercise test was used to assess cardiorespiratory fitness. Breath-by-breath respiratory gas exchange and accelerometry were continuously measured during all activities. Feasibility was assessed using observations, activity completion rates, and respiratory data. Study 4: Thirty-six children with intellectual disabilities participated in a semi-structured school-based physical activity session to calibrate accelerometry for the estimation of physical activity intensity. Participants wore a hip-mounted ActiGraph wGT3X+ accelerometer, with direct observation (SOFIT) used as the criterion measure. Receiver operating characteristic curve analyses were conducted to determine the optimal accelerometer cut points for sedentary, moderate, and vigorous intensity physical activity.
Study 5: To cross-validate the calibrated cut points and compare classification accuracy with existing cut points developed in typically developing children, a sub-sample of 14 children with intellectual disabilities who participated in the school-based sessions, as described in Study 4, were included in this study. To examine validity, classification agreement was investigated between the criterion measure of SOFIT and each set of cut points using sensitivity, specificity, total agreement, and Cohen's kappa scores. Results Study 1: Ten full-text articles were included in this review. The percentage of review criteria met ranged from 12% to 47%. Various methods of accelerometer use were reported, with most use decisions not based on population-specific research. A lack of measurement research, specifically the calibration/validation of accelerometers for children with intellectual disabilities, is limiting the ability of researchers to make appropriate and valid accelerometer use decisions. Study 2: The choice of cut points had significant and clinically meaningful effects on the estimation of physical activity intensity and sedentary behaviour. For the 71-minute session, estimated time spent in each intensity varied between cut points as follows: sedentary = 9.50 (± 4.97) to 31.90 (± 6.77) minutes; moderate = 8.10 (± 4.07) to 40.40 (± 5.74) minutes; vigorous = 0.00 (± 0.00) to 17.40 (± 6.54) minutes; and moderate to vigorous = 8.80 (± 4.64) to 46.50 (± 6.02) minutes. Study 3: All typically developing participants and one participant with intellectual disabilities completed the protocol. No participant met the maximal criteria for the graded exercise test or attained a steady state during the resting measurements. Limitations were identified with the usability of respiratory gas exchange equipment and the validity of measurements. The school-based recruitment strategy was not effective, with a participation rate of 6%.
Therefore, a laboratory-based calibration protocol was not feasible for children with intellectual disabilities. Study 4: The optimal vertical axis cut points (cpm) were ≤ 507 (sedentary), 1008−2300 (moderate), and ≥ 2301 (vigorous). Sensitivity scores ranged from 81−88%, specificity 81−85%, and AUC .87−.94. The optimal vector magnitude cut points (cpm) were ≤ 1863 (sedentary), ≥ 2610 (moderate), and ≥ 4215 (vigorous). Sensitivity scores ranged from 80−86%, specificity 77−82%, and AUC .86−.92. Therefore, the vertical axis cut points provide a higher level of accuracy in comparison to the vector magnitude cut points. Study 5: Substantial to excellent classification agreement was found for the calibrated cut points. The calibrated sedentary cut point (κ = .66) provided classification agreement comparable with existing cut points (κ = .55−.67). However, the existing moderate and vigorous cut points demonstrated low sensitivity (0.33−33.33% and 1.33−53.00%, respectively) and disproportionately high specificity (75.44−98.12% and 94.61−100.00%, respectively), indicating that cut points developed in typically developing children are too high to accurately classify physical activity intensity in children with intellectual disabilities. Conclusions The studies reported in this thesis are the first to calibrate and validate accelerometry for the estimation of physical activity intensity in children with intellectual disabilities. In comparison with typically developing children, children with intellectual disabilities require lower cut points for the classification of moderate and vigorous intensity activity. Therefore, generalising existing cut points to children with intellectual disabilities will underestimate physical activity and introduce systematic measurement error, which could be a contributing factor to the low levels of physical activity reported for children with intellectual disabilities in previous research.
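The cut-point classification and agreement scoring described above can be sketched in a few lines. The counts-per-minute values and criterion labels below are invented for illustration, and treating the 508−1007 cpm gap as light intensity is an assumption of this sketch, not a decision taken from the thesis.

```python
# Classify counts-per-minute (cpm) epochs with the Study 4 vertical-axis cut
# points and score agreement against a criterion measure with Cohen's kappa.
SEDENTARY, LIGHT, MODERATE, VIGOROUS = "SED", "LIGHT", "MOD", "VIG"

def classify(cpm):
    # Cut points from Study 4: <= 507 sedentary, 1008-2300 moderate, >= 2301 vigorous.
    if cpm <= 507:
        return SEDENTARY
    if cpm < 1008:
        return LIGHT          # assumption: the 508-1007 cpm gap is light intensity
    if cpm <= 2300:
        return MODERATE
    return VIGOROUS

def cohens_kappa(pred, crit):
    # Observed agreement minus chance agreement, over one minus chance agreement.
    n = len(pred)
    labels = set(pred) | set(crit)
    po = sum(p == c for p, c in zip(pred, crit)) / n
    pe = sum((pred.count(l) / n) * (crit.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

counts = [120, 480, 1500, 2600, 3100, 900, 200]           # illustrative cpm epochs
criterion = ["SED", "SED", "MOD", "VIG", "VIG", "LIGHT", "SED"]
predicted = [classify(c) for c in counts]
kappa = cohens_kappa(predicted, criterion)
```

In Study 5 the same kind of kappa score is what separates the calibrated cut points (κ = .66) from the generalised ones.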
Abstract:
In vivo and in vitro experiments were conducted to determine digestibility of GE and nutrients, as well as DE and ME of carbohydrates fed to growing pigs. The objective of Exp. 1 was to determine the DE and ME of 4 novel carbohydrates fed to pigs. The 4 novel carbohydrates were 2 sources of resistant starch (RS 60 and RS 75), soluble corn fiber (SCF), and pullulan. These carbohydrates were produced to increase total dietary fiber (TDF) intake by humans. Maltodextrin (MD) was used as a highly digestible control carbohydrate. The DE and ME for RS 60 (1,779 and 1,903 kcal/kg, respectively), RS 75 (1,784 and 1,677 kcal/kg, respectively), and SCF (1,936 and 1,712 kcal/kg, respectively) were less (P < 0.05) than for MD (3,465 and 3,344 kcal/kg, respectively) and pullulan (2,755 and 2,766 kcal/kg, respectively), and pullulan contained less (P < 0.05) DE and ME than MD. However, there was no difference in the DE and ME for RS 60, RS 75, and SCF. The varying degrees of small intestinal digestibility and differences in fermentability among these novel carbohydrates may explain the differences in the DE and ME among carbohydrates. Therefore, the first objective of Exp. 2 was to determine the effect of these 4 novel carbohydrates and cellulose on apparent ileal (AID) and apparent total tract (ATTD) disappearance, and hindgut disappearance (HGD) of GE, TDF, and nutrients when added to diets fed to ileal-cannulated pigs. The second objective was to measure the endogenous flow of TDF to be able to calculate the standardized ileal disappearance (SID) and standardized total tract disappearance (STTD) of TDF in the 4 novel fibers fed to pigs. Results of the experiment indicated that the AID of GE and DM in diets containing cellulose or the novel fibers was less (P < 0.05) than that of the maltodextrin diet, but the ATTD of GE and DM was not different among diets.
The addition of RS 60, RS 75, and SCF did not affect the AID of acid hydrolysed ether extract (AEE), CP, or ash, but the addition of cellulose and pullulan reduced (P < 0.01) the AID of CP. The average ileal and total tract endogenous losses of TDF were calculated to be 25.25 and 42.87 g/kg DMI, respectively. The SID of TDF in diets containing RS 60, SCF, and pullulan was greater (P < 0.01) than the SID of TDF in the cellulose diet, but the STTD of TDF in the SCF diet was greater (P < 0.05) than for the cellulose and pullulan diets. Results of this experiment indicate that the presence of TDF reduces small intestinal disappearance of total carbohydrates and energy, which may reduce the DE and ME of diets and ingredients. Therefore, the objective of Exp. 3 was to determine the DE and ME in yellow dent corn, Nutridense corn, dehulled barley, dehulled oats, polished rice, rye, sorghum, and wheat fed to growing pigs and to determine the AID and ATTD of GE, OM, CP, AEE, starch, total carbohydrates, and TDF in these cereal grains fed to pigs. Results indicated that the AID of GE, OM, and total carbohydrates was greater (P < 0.001) in rice than in all other cereal grains. The AID of starch was also greater (P < 0.001) in rice than in yellow dent corn, dehulled barley, rye, and wheat. The ATTD of GE was greater (P < 0.001) in rice than in yellow dent corn, rye, sorghum, and wheat. With a few exceptions, the AID and ATTD of GE and nutrients in Nutridense corn were not different from the values for dehulled oats. Likewise, with a few exceptions, the AID, ATTD, and HGD of GE, OM, total carbohydrates, and TDF in yellow corn, sorghum, and wheat were not different from each other. The AID of GE and AEE in dehulled barley was greater (P < 0.001) than in rye. The ATTD of GE and most nutrients was greater (P < 0.001) in dehulled barley than in rye. Dehulled oats had the greatest (P < 0.001) ME (kcal/kg DM) whereas rye had the least ME (kcal/kg DM) among the cereal grains.
Results of the experiment indicate that the presence of TDF and RS may reduce small intestinal digestibility of starch in cereal grains, resulting in reduced DE and ME in these grains. Digestibility experiments involving animals are time consuming and expensive. Therefore, the objective of Exp. 4 was to correlate DM and OM digestibility obtained from 3 in vitro procedures with the ATTD of GE and with the concentration of DE in 50 corn samples that were fed to growing pigs. The second objective was to develop a regression model that can predict the ATTD of GE or the concentration of DE in corn. The third objective was to evaluate the suitability of using the DaisyII incubator as an alternative to the traditional water bath when determining in vitro DM and OM digestibility. Results indicated that incubating corn samples with Viscozyme for 48 h in the DaisyII incubator improved (P < 0.001) the ability of the procedure to detect small differences in the ATTD of GE or in the concentration of DE in corn. Likewise, the variability in the ATTD of GE and in the DE of corn was better explained (R2 = 0.56, P < 0.05 and R2 = 0.53, P < 0.06, respectively) when Viscozyme was used than when cellulase or fecal inoculum was used. A validated regression model that predicts the DE in corn was developed using Viscozyme with the corn samples incubated in the DaisyII incubator for 48 h. In conclusion, this work used the pig as a model for human gastrointestinal function and evaluated carbohydrates from 2 different nutritional perspectives – humans and animals. The addition of novel carbohydrates reduced the digestibility of energy in the diets without necessarily reducing the digestibility of other nutrients. Thus, supplementation of novel carbohydrates in diets may be beneficial for the management of diabetes.
Aside from diabetic management, cereal grains such as rye and sorghum may also help in BW management because of their low caloric value, but for undernourished individuals, dehulled oats, dehulled barley, and rice are the ideal grains. From an animal nutrition standpoint, a high concentration of dietary fiber is undesirable because it reduces feed efficiency. Therefore, the inclusion of feed ingredients that have a high concentration of dietary fiber is often limited in animal diets. Although in vivo determination is ideal, in vitro procedures are useful tools to determine the caloric value of food and feed ingredients.
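The kind of regression used in Exp. 4 to predict DE from an in vitro assay can be sketched as an ordinary least-squares line. The in vitro OM digestibility (IVOMD) and DE values below are invented for illustration; they are not the 50 corn samples from the experiment.

```python
# Predict the concentration of DE in corn from in vitro OM digestibility
# (IVOMD) with an ordinary least-squares line, then report R^2.
ivomd = [78.0, 80.5, 82.1, 84.3, 86.0, 87.2]   # % OM digested in vitro (illustrative)
de    = [3850, 3890, 3925, 3970, 4005, 4030]   # kcal/kg DM (illustrative)

n = len(ivomd)
mx = sum(ivomd) / n
my = sum(de) / n
sxx = sum((x - mx) ** 2 for x in ivomd)
sxy = sum((x - mx) * (y - my) for x, y in zip(ivomd, de))

slope = sxy / sxx
intercept = my - slope * mx

# R^2: the fraction of DE variability explained by IVOMD (the experiment's
# R^2 = 0.56 for Viscozyme reflects noisier real data than this toy set).
pred = [slope * x + intercept for x in ivomd]
ss_res = sum((y - p) ** 2 for y, p in zip(de, pred))
ss_tot = sum((y - my) ** 2 for y in de)
r2 = 1 - ss_res / ss_tot
```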
Abstract:
Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state, and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state, and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Any public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code upon any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies seeking the most cost-effective selection when leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed, and time consistency is addressed. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach uses McCormick inequalities (McCormick, 1976) to re-express constraints involving products of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
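The McCormick linearization mentioned above can be stated concretely: for binary x and y, the product z = x·y is replaced by the linear constraints z ≤ x, z ≤ y, z ≥ x + y − 1, z ≥ 0, which are exact (they describe the convex hull) when the variables are binary. A minimal sketch verifying this exactness by exhaustion:

```python
from itertools import product

# McCormick constraints for z = x * y with binary x, y, z.
def mccormick_feasible(x, y, z):
    return z <= x and z <= y and z >= x + y - 1 and z >= 0

# Exhaustive check over all binary combinations: for every (x, y), the only
# feasible binary z is exactly the product x * y, so the linearization is exact.
for x, y in product([0, 1], repeat=2):
    feasible = [z for z in (0, 1) if mccormick_feasible(x, y, z)]
    assert feasible == [x * y]
```

In the rolling-horizon model, constraints of this shape let a mixed-integer solver handle the interactive (sub/superadditive) savings terms without any nonlinear products.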
Abstract:
The Swedish industrial sector has overcome the oil crisis and has kept its energy use constant even though production has grown. This has been achieved thanks to the development of several energy policies by the Swedish government towards the 2020 goals. This thesis continues along this path and performs an energy audit of an old industrial building in Gävle (Sweden) in order to propose different energy efficiency measures to use less energy while maintaining thermal comfort. The building is in quite a bad shape and some of its areas are unused, making them a waste of money. By means of the invoices provided by different companies, information from the staff, and some measurements carried out in situ, the energy balance has been calculated and conclusions drawn from it. Although it is an industrial building, the study does not focus on the industrial process but on the building's envelope and support processes, since the unit combines both production and office areas. Therefore, the energy balance is divided into energy supplies (district heating, free heating, and solar irradiation) and energy losses (transmission, ventilation, hot tap water, and infiltration). The results show that the most important supply is district heating, whereas the most important losses are transmission and infiltration. Thus, the proposed measures focus on reducing these relevant parameters. The most important measures are the renovation of the windows, the heating system valves, and the ventilation. The glazing of the building is old and some of it is broken, accounting for quite a large share of the losses. The radiator valves are not working properly and there is no temperature control; therefore, the installation of thermostatic valves is a must. Moreover, part of the building has no mechanical ventilation but conserves the ducts.
These ducts could be utilized by connecting them to the workshop's ventilation, which is capable of generating sufficient flow for the entire building. Finally, although other measures could also be carried out, the ones proposed appear to be the essential ones. A further analysis should be carried out to assess the payback time or the investment capability of the company so as to decide between one measure and another. A market study of possible new tenants for the unused parts of the building is also advisable.
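The transmission-loss term of an energy balance like the one above is commonly computed as Q = U · A · (T_in − T_out), summed over the envelope elements. The U-values, areas, and temperatures below are illustrative, not data from the Gävle building:

```python
# Transmission losses through the building envelope: Q = U * A * dT per element.
envelope = [
    # (element, U-value in W/(m^2 K), area in m^2) -- all values illustrative
    ("old single glazing", 2.9, 120.0),
    ("external walls",     0.6, 850.0),
    ("roof",               0.4, 700.0),
]
t_in, t_out = 20.0, -2.0          # degC, an illustrative winter design case

q_transmission = sum(u * a * (t_in - t_out) for _, u, a in envelope)  # watts

# Replacing broken single glazing with modern windows (assumed U ~ 1.1) shows
# why window renovation is one of the proposed measures:
q_old_windows = 2.9 * 120.0 * (t_in - t_out)
q_new_windows = 1.1 * 120.0 * (t_in - t_out)
saving_w = q_old_windows - q_new_windows
```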
Abstract:
Ligand-protein docking is an optimization problem based on predicting the position of a ligand with the lowest binding energy in the active site of the receptor. Molecular docking problems are traditionally tackled with single-objective, as well as multi-objective, approaches to minimize the binding energy. In this paper, we propose a novel multi-objective formulation that considers the Root Mean Square Deviation (RMSD) of the ligand coordinates and the binding (intermolecular) energy as two objectives to evaluate the quality of ligand-protein interactions. To determine the kind of Pareto front approximations that can be obtained, we have selected a set of representative multi-objective algorithms: NSGA-II, SMPSO, GDE3, and MOEA/D. Their performance has been assessed by applying two main quality indicators intended to measure the convergence and diversity of the fronts. In addition, a comparison with LGA, a reference single-objective evolutionary algorithm for molecular docking (AutoDock), is carried out. In general, SMPSO shows the best overall results in terms of energy and RMSD (values lower than 2 Å for successful docking results). This new multi-objective approach shows an improvement over ligand-protein docking predictions that could be promising in in silico docking studies to select new anticancer compounds for therapeutic targets that are multidrug resistant.
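The bi-objective view behind this formulation can be sketched directly: each docking pose is a point (binding energy, RMSD), both to be minimized, and the Pareto front is the set of non-dominated poses. The pose values below are invented for illustration:

```python
# Pareto-front extraction for bi-objective docking: minimize both the binding
# energy (kcal/mol, more negative is better) and the RMSD (angstroms).
def dominates(a, b):
    # a dominates b if it is no worse in both objectives and not identical.
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative poses as (energy, RMSD); RMSD < 2 angstroms counts as a
# successful docking result, per the criterion used in the paper.
poses = [(-9.1, 1.4), (-8.5, 0.9), (-9.5, 2.6), (-7.0, 3.0), (-8.0, 2.0)]
front = pareto_front(poses)
```

The multi-objective algorithms compared in the paper (NSGA-II, SMPSO, GDE3, MOEA/D) are different strategies for approximating this front when the pose space is too large to enumerate.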
Abstract:
Time-optimal response is an important, and sometimes necessary, characteristic of dynamic systems for specific applications. Power converters are widely used in different electrical systems, and their dynamic response affects the whole system. In many electrical systems, such as microgrids or voltage regulators that supply sensitive loads, fast dynamic response is a must. A minimum-time converter is the fastest at compensating a step change in the output reference or the load. Boost converters, among the most widely used power converters in electrical systems, are the target of time-optimal control in this study. Linear controllers are not able to provide the time-optimal response for a boost converter, although they remain useful and functional for other purposes such as reference tracking or stabilization. To obtain the fastest possible response from boost converters, a nonlinear control approach based on the total energy of the system is studied in this research. The total energy of the system is taken as the basis for the presented method because it is easy and accurate to measure, and because it represents the actual operating condition of the boost converter. A detailed model of a boost converter is simulated in MATLAB/Simulink to achieve the time-optimal response by applying the developed method. The simulation results confirm the ability of the presented method to secure the time-optimal response of the boost converter under four different scenarios.
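The energy quantity underlying such a controller can be written down explicitly: for a boost converter with inductor current i_L and output capacitor voltage v_C, the total stored energy is E = ½ L i_L² + ½ C v_C². A minimal sketch, with component values and operating points that are illustrative rather than taken from the thesis:

```python
# Total stored energy of a boost converter and the energy error a
# total-energy-based controller would drive to zero.
L = 150e-6   # inductor, henries (illustrative)
C = 470e-6   # output capacitor, farads (illustrative)

def total_energy(i_l, v_c):
    # E = 0.5*L*i_L^2 + 0.5*C*v_C^2 (joules)
    return 0.5 * L * i_l**2 + 0.5 * C * v_c**2

def energy_error(i_l, v_c, i_ref, v_ref):
    # Target energy at the desired operating point minus the measured energy;
    # a positive error means the converter must store more energy.
    return total_energy(i_ref, v_ref) - total_energy(i_l, v_c)

# Illustrative step: the reference jumps from (2 A, 20 V) to (3 A, 24 V).
err = energy_error(i_l=2.0, v_c=20.0, i_ref=3.0, v_ref=24.0)
```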
Abstract:
The United States of America is making great efforts to transform its renewable and abundant biomass resources into cost-competitive, high-performance biofuels, bioproducts, and biopower. This is the key to increasing domestic production of transportation fuels and renewable energy and to reducing greenhouse gas and other pollutant emissions. This dissertation focuses specifically on assessing the life cycle environmental impacts of biofuels and bioenergy produced from renewable feedstocks such as lignocellulosic biomass and renewable oils and fats. The first part of the dissertation presents the life cycle greenhouse gas (GHG) emissions and energy demands of renewable diesel (RD) and hydroprocessed jet fuels (HRJ). The feedstocks include soybean, camelina, field pennycress, jatropha, algae, and tallow. Results show that RD and HRJ produced from these feedstocks reduce GHG emissions by over 50% compared to comparably performing petroleum fuels. Fossil energy requirements are also significantly reduced. The second part of this dissertation discusses the life cycle GHG emissions, energy demands, and other environmental aspects of pyrolysis oil as well as pyrolysis oil derived biofuels and bioenergy. The feedstocks include waste materials, such as sawmill residues, logging residues, sugarcane bagasse, and corn stover, and short rotation forestry feedstocks, such as hybrid poplar and willow. These LCA results show that as much as 98% GHG emission savings is possible relative to a petroleum heavy fuel oil. Life cycle GHG savings of 77 to 99% were estimated for power generation from pyrolysis oil combustion relative to fossil fuel combustion for electricity, depending on the biomass feedstock and combustion technologies used. Transportation fuels hydroprocessed from pyrolysis oil show over 60% GHG reductions compared to petroleum gasoline and diesel.
The energy required to produce pyrolysis oil and pyrolysis oil derived biofuels and bioelectricity is mainly renewable biomass energy, as opposed to fossil energy. Other environmental benefits include reduced impacts on human health, ecosystem quality, and fossil resource depletion. The third part of the dissertation addresses the direct land use change (dLUC) impact of forest-based biofuels and bioenergy. An intensive harvest of aspen in Michigan is investigated to understand the GHG mitigation achievable with biofuels and bioenergy production. The study shows that intensive harvest of aspen in Michigan, compared to business-as-usual (BAU) harvesting, can produce 18.5 billion gallons of ethanol to blend with gasoline for the transport sector over the next 250 years, or 32.2 billion gallons of bio-oil by the fast pyrolysis process, which can be combusted to generate electricity or upgraded to gasoline and diesel. Intensive harvesting of these forests can result in carbon loss initially in the aspen forest, but eventually accumulates more carbon in the ecosystem, which translates to a CO2 credit from the dLUC impact. The time required for the forest-based biofuels to reach carbon neutrality is approximately 60 years. The last part of the dissertation describes the use of a depolymerization model as a tool to understand the kinetic behavior of hemicellulose hydrolysis under dilute acid conditions. Experiments were carried out to measure the concentrations of xylose and xylooligomers during dilute acid hydrolysis of aspen. The experimental data are used to fine-tune the parameters of the depolymerization model. The results show that the depolymerization model successfully predicts the xylose monomer profile in the reaction; however, it overestimates the concentrations of xylooligomers.
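The relative-savings arithmetic behind percentages like "over 50%" and "77 to 99%" is the standard LCA comparison of a biofuel pathway against a fossil baseline. A minimal sketch, with carbon intensities that are illustrative rather than results from the dissertation:

```python
# Relative GHG saving of a biofuel pathway vs. a fossil baseline:
# saving = (baseline intensity - pathway intensity) / baseline intensity.
def ghg_saving(fossil_gco2_per_mj, bio_gco2_per_mj):
    return (fossil_gco2_per_mj - bio_gco2_per_mj) / fossil_gco2_per_mj

petroleum_diesel = 90.0   # g CO2-eq/MJ, illustrative fossil baseline
renewable_diesel = 40.0   # g CO2-eq/MJ, illustrative RD pathway

saving = ghg_saving(petroleum_diesel, renewable_diesel)   # about 0.56, i.e. > 50%
```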
Abstract:
In this paper, we measure the degree of fractional integration in final energy demand in Portugal using an ARFIMA model with and without adjustments for seasonality. We consider aggregate energy demand as well as final demand for petroleum, electricity, coal, and natural gas. Our findings suggest the presence of long memory in all of the components of energy demand. All fractional-difference parameters are positive and lower than 0.5, indicating that the series are stationary, although with mean-reversion patterns slower than in typical short-run processes. These results have important implications for the design of energy policies. As a result of the long memory in final energy demand, the effects of temporary policy shocks will tend to disappear slowly. This means that even transitory shocks have long-lasting effects. Given the temporary nature of these effects, however, permanent effects on final energy demand require permanent policies. This is unlike what would be suggested by the more standard, but much more limited, unit-root approach, which would incorrectly indicate that even transitory policies would have permanent effects.
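The fractional-difference operator behind an ARFIMA model has a concrete form: (1 − B)^d expands into weights w_0 = 1, w_k = w_{k−1}(k − 1 − d)/k, and for a stationary long-memory series (0 < d < 0.5) these weights decay hyperbolically rather than being cut off as in short-memory models. A minimal sketch:

```python
# Weights of the fractional-difference filter (1 - B)^d from the standard
# binomial recursion; slow weight decay is the signature of long memory.
def frac_diff_weights(d, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

# d = 0.4 lies in (0, 0.5): stationary with long memory, as found for the
# Portuguese energy demand components (the exact estimates are in the paper).
w = frac_diff_weights(d=0.4, n=6)
# w[1] = -d, and the magnitudes shrink slowly instead of vanishing abruptly.
```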
Abstract:
The enhanced production of strange hadrons in heavy-ion collisions relative to that in minimum-bias pp collisions is historically considered one of the first signatures of the formation of a deconfined quark-gluon plasma. At the LHC, the ALICE experiment observed that the ratio of strange to non-strange hadron yields increases with the charged-particle multiplicity at midrapidity, starting from pp collisions and evolving smoothly across interaction systems and energies, ultimately reaching Pb-Pb collisions. The understanding of the origin of this effect in small systems remains an open question. This thesis presents a comprehensive study of the production of $K^{0}_{S}$, $\Lambda$ ($\bar{\Lambda}$) and $\Xi^{-}$ ($\bar{\Xi}^{+}$) strange hadrons in pp collisions at $\sqrt{s}$ = 13 TeV collected in LHC Run 2 with ALICE. A novel approach is exploited, introducing, for the first time, the concept of effective energy in the study of strangeness production in hadronic collisions at the LHC. In this work, the ALICE Zero Degree Calorimeters are used to measure the energy carried by forward emitted baryons in pp collisions, which reduces the effective energy available for particle production with respect to the nominal centre-of-mass energy. The results presented in this thesis provide new insights into the interplay, for strangeness production, between the initial stages of the collision and the produced final hadronic state. Finally, the first Run 3 results on the production of $\Omega^{-}$ ($\bar{\Omega}^{+}$) multi-strange baryons are presented, measured in pp collisions at $\sqrt{s}$ = 13.6 TeV and 900 GeV, the highest and lowest collision energies reached so far at the LHC. This thesis also presents the development and validation of the ALICE Time-Of-Flight (TOF) data quality monitoring system for LHC Run 3.
This work was fundamental to assess the performance of the TOF detector during the commissioning phase, in the Long Shutdown 2, and during the data taking period.
Abstract:
Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations requiring more complicated tasks and higher accuracy. However, as a side effect of Dennard scaling approaching its ultimate power limit, the efficiency of software also plays an important role in increasing the overall performance of a computation. Tools to measure application performance in these increasingly complex environments provide insights into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as Intel's Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool such as the Performance Application Programming Interface (PAPI). Since many problems in heterogeneous fields can be represented as large linear systems, an optimized and scalable linear system solver can significantly decrease the time spent computing their solution. One of the most widely used algorithms for the solution of large systems is Gaussian Elimination, whose most popular implementation for HPC systems is in the Scalable Linear Algebra PACKage (ScaLAPACK) library. Another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian Elimination from ScaLAPACK, profiling their execution during the solution of linear systems on the HPC architecture offered by CINECA. Moreover, it also collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
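On Linux, the RAPL counters mentioned above are exposed through the powercap sysfs interface: `/sys/class/powercap/intel-rapl:0/energy_uj` is a microjoule counter that wraps around at `max_energy_range_uj`, so turning two samples into an energy figure must handle the wrap. A minimal sketch of that arithmetic, with illustrative counter values:

```python
# Energy consumed between two reads of a RAPL energy_uj counter, accounting
# for the counter wrapping to 0 at max_energy_range_uj.
def energy_delta_uj(before, after, max_range):
    # The counter increases monotonically and wraps to 0 at max_range.
    if after >= before:
        return after - before
    return (max_range - before) + after

# Illustrative values; real ones come from reading the sysfs files, e.g.
# /sys/class/powercap/intel-rapl:0/{energy_uj, max_energy_range_uj}.
MAX_RANGE = 262_143_328_850

joules = energy_delta_uj(before=262_143_000_000, after=1_500_000,
                         max_range=MAX_RANGE) / 1e6
```

A profiler like PAPI's RAPL component performs essentially this bookkeeping per socket, pairing the counter deltas with the timed regions of the MPI application.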
Abstract:
Rapidity-odd directed flow ($v_1$) measurements for charged pions, protons, and antiprotons near midrapidity ($y = 0$) are reported in $\sqrt{s_{NN}}$ = 7.7, 11.5, 19.6, 27, 39, 62.4, and 200 GeV Au+Au collisions as recorded by the STAR detector at the Relativistic Heavy Ion Collider. At intermediate impact parameters, the proton and net-proton slope parameter $dv_1/dy|_{y=0}$ shows a minimum between 11.5 and 19.6 GeV. In addition, the net-proton $dv_1/dy|_{y=0}$ changes sign twice between 7.7 and 39 GeV. The proton and net-proton results qualitatively resemble predictions of a hydrodynamic model with a first-order phase transition from hadronic matter to deconfined matter, and differ from hadronic transport calculations.
Abstract:
The control of energy homeostasis relies on robust neuronal circuits that regulate food intake and energy expenditure. Although the physiology of these circuits is well understood, the molecular and cellular response of this program to chronic diseases is still largely unclear. Hypothalamic inflammation has emerged as a major driver of energy homeostasis dysfunction in both obesity and anorexia. Importantly, this inflammation disrupts the action of metabolic signals promoting anabolism or supporting catabolism. In this review, we address the evidence that favors hypothalamic inflammation as a factor that resets energy homeostasis in pathological states.
Abstract:
Local parity-odd domains are theorized to form inside the quark-gluon plasma produced in high-energy heavy-ion collisions. These domains manifest themselves as charge separation along the magnetic field axis via the chiral magnetic effect. The experimental observation of charge separation has previously been reported for heavy-ion collisions at the top RHIC energies. In this Letter, we present the results of the beam-energy dependence of the charge correlations in Au+Au collisions at midrapidity for center-of-mass energies of 7.7, 11.5, 19.6, 27, 39, and 62.4 GeV from the STAR experiment. After background subtraction, the signal gradually decreases with decreasing beam energy and tends to vanish by 7.7 GeV. This implies the dominance of hadronic interactions over partonic ones at lower collision energies.
Abstract:
Cardiac arrest after open surgery has an incidence of approximately 3%, and more than 50% of these cases are due to ventricular fibrillation. Electrical defibrillation is the most effective therapy for terminating cardiac arrhythmias associated with unstable hemodynamics. The excitation threshold of myocardial microstructures is lower when external electrical fields are applied in the longitudinal direction with respect to the major axis of the cells. In the heart, however, cell bundles are arranged in several directions. Improved myocardial excitation and defibrillation have been achieved by applying shocks in multiple directions via intracardiac leads, but the results are controversial when the electrodes are not located within the cardiac chambers. This study was designed to test whether rapidly switching shock delivery in 3 directions could increase the efficiency of direct defibrillation. A multidirectional defibrillator and paddles bearing 3 electrodes each were developed and used in vivo for the reversal of electrically induced ventricular fibrillation in an anesthetized open-chest swine model. Direct defibrillation was performed by unidirectional and multidirectional shocks applied in an alternating fashion. Survival analysis was used to estimate the relationship between the probability of defibrillation and the shock energy. Compared with shock delivery in a single direction in the same animal population, the shock energy required for multidirectional defibrillation was 20% to 30% lower (P < .05) within a wide range of success probabilities. Rapidly switching multidirectional shock delivery required lower shock energy for ventricular fibrillation termination and may be a safer alternative for restoring cardiac sinus rhythm.
Abstract:
We report the first measurements of the moments (mean M, variance σ², skewness S, and kurtosis κ) of the net-charge multiplicity distributions at midrapidity in Au+Au collisions at seven energies, ranging from √sNN = 7.7 to 200 GeV, as part of the Beam Energy Scan program at RHIC. The moments are related to the thermodynamic susceptibilities of net charge and are sensitive to the location of the QCD critical point. We compare the products of the moments, σ²/M, Sσ, and κσ², with the expectations from Poisson and negative binomial distributions (NBDs). The Sσ values deviate from the Poisson baseline and are close to the NBD baseline, while the κσ² values tend to lie between the two. Within the present uncertainties, our data do not show nonmonotonic behavior as a function of collision energy. These measurements provide a valuable tool for extracting the freeze-out parameters in heavy-ion collisions by comparison with theoretical models.
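For intuition about the Poisson baseline mentioned above: every cumulant of a Poisson distribution equals its mean λ, so all three moment products σ²/M, Sσ, and κσ² reduce to 1. The small self-contained sketch below verifies this numerically by computing central moments from a truncated Poisson pmf; the function names are illustrative, not STAR analysis code.

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing k counts under a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def moment_products(pmf, ks):
    """Return (sigma^2/M, S*sigma, kappa*sigma^2) for a discrete distribution."""
    probs = [pmf(k) for k in ks]
    M = sum(p * k for k, p in zip(ks, probs))                # mean
    mu2 = sum(p * (k - M) ** 2 for k, p in zip(ks, probs))   # variance sigma^2
    mu3 = sum(p * (k - M) ** 3 for k, p in zip(ks, probs))   # third central moment
    mu4 = sum(p * (k - M) ** 4 for k, p in zip(ks, probs))   # fourth central moment
    sigma = math.sqrt(mu2)
    S = mu3 / sigma ** 3                                     # skewness
    kappa = mu4 / mu2 ** 2 - 3.0                             # (excess) kurtosis
    return mu2 / M, S * sigma, kappa * mu2

lam = 5.0
# Truncating the pmf at k = 80 leaves a negligible tail for lam = 5.
products = moment_products(lambda k: poisson_pmf(k, lam), range(80))
# All three products equal 1 for a Poisson distribution, whose cumulants all equal lam.
```

Volume factors cancel in these ratios, which is why the products (rather than the raw moments) are compared with theoretical baselines.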