969 results for Probability Density Function
Abstract:
The ALICE Collaboration reports the measurement of the relative J/ψ yield as a function of the charged-particle pseudorapidity density dN_ch/dη in pp collisions at √s = 7 TeV at the LHC. J/ψ particles are detected for p_T > 0, in the rapidity interval |y| < 0.9 via decay into e⁺e⁻ pairs, and in the interval 2.5 < y < 4.0 via decay into μ⁺μ⁻ pairs. An approximately linear increase of the J/ψ yields normalized to their event average, (dN_J/ψ/dy)/⟨dN_J/ψ/dy⟩, with (dN_ch/dη)/⟨dN_ch/dη⟩ is observed in both rapidity ranges, where dN_ch/dη is measured within |η| < 1 and p_T > 0. In the highest multiplicity interval, with ⟨dN_ch/dη⟩_bin = 24.1, corresponding to four times the minimum-bias multiplicity density, an enhancement relative to the minimum-bias J/ψ yield by a factor of about 5 at 2.5 < y < 4 (8 at |y| < 0.9) is observed. (C) 2012 CERN. Published by Elsevier B.V. All rights reserved.
Abstract:
Structural and electronic properties of the PtₙTM₅₅₋ₙ (TM = Co, Rh, Au) nanoalloys are investigated using density functional theory within the generalized gradient approximation and employing the all-electron projector augmented-wave method. For TM = Co and Rh, the excess energy, which measures the relative energy stability of the nanoalloys, is negative for all Pt compositions. We found that the excess energy has similar values for a wide range of Pt compositions, i.e., n = 20-42 and n = 28-42 for Co and Rh, respectively, with the core-shell icosahedron-like configuration (n = 42) being slightly more stable for both the Co and Rh systems because of the larger release of strain energy due to the smaller atomic size of the Co and Rh atoms. For TM = Au, the excess energy is positive for all compositions, except for n = 13, which is energetically favorable due to the formation of a core-shell structure (Pt in the core and Au atoms at the surface). Thus, our calculations confirm that the formation of core-shell structures plays an important role in increasing the stability of nanoalloys. The center of gravity of the occupied d-states changes almost linearly as a function of the Pt composition, and hence, based on the d-band model, the magnitude of the adsorption energy of an adsorbate can be tuned by changing the Pt composition. The magnetic moments of PtₙCo₅₅₋ₙ decrease almost linearly as a function of the Pt composition; however, the same does not hold for PtRh and PtAu. We found that the magnetic moments of PtRh are enhanced by a few times with increasing Pt composition, which we explain by compression effects induced by the larger size of the Pt atoms compared with the Rh atoms.
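The excess energy quoted above is, by the standard convention, the nanoalloy total energy referred to the composition-weighted energies of the pure 55-atom clusters, expressed per atom; the short Python sketch below illustrates that definition. The function name and the sample total energies are illustrative assumptions, not values from the paper.

    def excess_energy(E_alloy, E_pt_pure, E_tm_pure, n, N=55):
        """Excess energy per atom of a Pt_n TM_(N-n) cluster relative to the
        composition-weighted pure Pt_N and TM_N clusters (standard definition);
        a negative value indicates that alloying is energetically favorable."""
        return (E_alloy - (n / N) * E_pt_pure - ((N - n) / N) * E_tm_pure) / N

    # Made-up total energies (eV) for a hypothetical n = 42 composition
    print(f"{excess_energy(E_alloy=-312.0, E_pt_pure=-305.0, E_tm_pure=-318.0, n=42):+.3f} eV/atom")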
Abstract:
Modifications in low-density lipoprotein (LDL) have emerged as a major pathogenic factor of atherosclerosis, which is the main cause of morbidity and mortality in the western world. Measurements of the heat diffusivity of human LDL solutions in their native and in vitro oxidized states are presented using the Z-scan (ZS) technique. Other complementary techniques were used to obtain the physical parameters necessary to interpret the optical results, e.g., pycnometry, refractometry, calorimetry, and spectrophotometry, and to understand the oxidation phase of the LDL particles. To determine the sample's thermal diffusivity using the thermal lens model, an iterative one-parameter fitting method is proposed which takes into account several characteristic time-dependent and position-dependent ZS transmittance measurements. Results show that the thermal diffusivity increases as a function of the LDL oxidation degree, which can be explained by the increased production of hydroperoxides due to the oxidation process. The oxidation products migrate from one LDL particle to another, disseminating the oxidation process and carrying the heat across the sample. This phenomenon leads to a quick thermal homogenization of the sample, preventing the formation of the thermal lens in highly oxidized LDL solutions. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.JBO.17.10.105003]
Abstract:
Purpose: Dyslipidemia is characterized by high blood lipid levels that are risk factors for cardiovascular diseases, which are leading causes of death. However, it is unclear whether dyslipidemia is a cause of the dry eye syndrome (DES). Therefore, we determined in transgenic mouse models of dyslipidemia whether there is an association with DES development. Methods: Dyslipidemic models included male and female adult mice overexpressing apolipoprotein CIII (ApoCIII), LDL receptor knockout (LDLR-KO) and ApoE knockout (ApoE-KO) mice. They were compared with age- and gender-matched C57BL/6 mice. Ocular health was evaluated based on corneal slit lamp assessment, the phenol red thread test (PRT) and impression cytology. Blood lipid profiles and histology of the meibomian and lacrimal glands were also evaluated. Effects of a high-fat diet and of aging were observed in the LDLR-KO and ApoCIII strains, respectively. Results: Body weight and lacrimal gland weight were significantly higher in male mice compared to females of the same strain (P < 0.05). Body weight was significantly lower in LDLR-KO mice receiving the high-lipid diet compared to their controls (P = 0.0043). ApoE-KO mice were hypercholesterolemic and ApoCIII mice hypertriglyceridemic, while LDLR-KO mice showed increases in both parameters. The PRT result was lower in male LDLR-KO mice on the high-fat diet than in control mice on the standard diet (P = 0.0273). Aging did not affect lacrimal structural or functional parameters of the ApoCIII strain. Conclusions: DES development is not solely dependent on dyslipidemia in relevant mouse models promoting this condition. On the other hand, lacrimal gland structure and function are differentially impacted by lipid profile changes in male and female mice. This dissociation suggests that factors other than dyslipidemia impact tear film dysfunction and DES development.
Abstract:
The aim of this study was to evaluate the immunoexpression of MMP-2, MMP-9 and CD31/microvascular density in squamous cell carcinomas of the floor of the mouth and to correlate the results with demographic, survival, clinical (TNM staging) and histopathological variables (tumor grade, perineural invasion, embolization and bone invasion). Data from the medical records and diagnoses of 41 patients were reviewed. Histological sections were subjected to immunostaining using primary antibodies against human MMP-2, MMP-9 and CD31 and the streptavidin-biotin-immunoperoxidase system. Histomorphometric analyses quantified positivity for MMPs (20 fields per slide, 100-point grid, ×200) and for CD31 (microvessels <50 µm in the area of highest vascularization, 5 fields per slide, 100-point grid, ×400). The statistical analysis comprised the non-parametric Mann-Whitney U test (investigating the association between numerical variables and immunostaining), the chi-square frequency test (in contingency tables), Fisher's exact test (when at least one expected frequency was less than 5 in 2×2 tables), the Kaplan-Meier method (estimated probabilities of overall survival) and the log-rank test (comparison of survival curves), all with a significance level of 5%. There was a statistically significant correlation between immunostaining for MMP-2 and lymph node metastasis. Factors negatively associated with survival were N stage, histopathological grade, perineural invasion and immunostaining for MMP-9. There was no significant association between immunoexpression of CD31 and the other variables. The intensity of immunostaining for MMP-2 can be indicative of metastasis in lymph nodes, and that for MMP-9 of a lower probability of survival.
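For illustration, the statistical design above maps onto standard Python routines as sketched below; the variable names, toy data, and the use of scipy/lifelines are assumptions for demonstration only, not data or software from the study.

    import numpy as np
    from scipy import stats
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(0)

    # Mann-Whitney U: compare MMP-2 positivity (%) between node-positive and node-negative cases
    mmp2_node_pos = rng.normal(60, 10, 20)
    mmp2_node_neg = rng.normal(45, 10, 21)
    u_stat, p_mw = stats.mannwhitneyu(mmp2_node_pos, mmp2_node_neg)

    # Fisher's exact test on a 2x2 table (e.g. high/low staining vs. metastasis yes/no)
    odds_ratio, p_fisher = stats.fisher_exact([[12, 8], [5, 16]])

    # Kaplan-Meier survival estimate and log-rank comparison between two staining groups
    time_high, event_high = rng.exponential(30, 20), rng.integers(0, 2, 20)
    time_low, event_low = rng.exponential(50, 21), rng.integers(0, 2, 21)
    kmf = KaplanMeierFitter()
    kmf.fit(time_high, event_observed=event_high, label="MMP-9 high")
    lr = logrank_test(time_high, time_low, event_observed_A=event_high, event_observed_B=event_low)

    print(f"Mann-Whitney p = {p_mw:.3f}, Fisher p = {p_fisher:.3f}, log-rank p = {lr.p_value:.3f}")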
Abstract:
The rheological behavior and density of goat milk were studied as functions of solids concentration (10.5 to 50.0%) and temperature (273 to 331 K). Newtonian behavior was observed for total solids (TS) values between 10.5 and 22.0% and temperatures from 276 to 331 K, changing to pseudoplastic behavior without yield stress for TS from 25.0 to 39.4% over the same temperature range. Goat milk with TS between 44.3 and 50.0% and temperatures of 273 to 296 K showed a yield stress in addition to pseudoplastic behavior. From 303 to 331 K the power-law model was observed again, without yield stress. The density of goat milk ranged from 991.7 to 1232.4 kg·m⁻³.
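The flow behaviors described above correspond to the classical power-law (Ostwald-de Waele) and yield-stress (Herschel-Bulkley) models; the sketch below shows how such models are commonly fitted to shear data in Python. The synthetic data and starting values are assumptions for illustration, not measurements from the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(shear_rate, K, n):
        """Ostwald-de Waele (power-law) model: no yield stress."""
        return K * shear_rate**n

    def herschel_bulkley(shear_rate, tau0, K, n):
        """Herschel-Bulkley model: power-law behavior plus a yield stress tau0."""
        return tau0 + K * shear_rate**n

    # Synthetic shear-rate (1/s) / shear-stress (Pa) data, invented for illustration
    shear_rate = np.linspace(1, 300, 30)
    shear_stress = 2.0 + 0.5 * shear_rate**0.7 + np.random.default_rng(1).normal(0, 0.2, 30)

    (K, n), _ = curve_fit(power_law, shear_rate, shear_stress, p0=[1.0, 1.0])
    (tau0, K_hb, n_hb), _ = curve_fit(herschel_bulkley, shear_rate, shear_stress, p0=[1.0, 1.0, 1.0])
    print(f"power law: K={K:.2f}, n={n:.2f}; Herschel-Bulkley: tau0={tau0:.2f}, K={K_hb:.2f}, n={n_hb:.2f}")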
Abstract:
This work addresses the treatment of lower-density regions of structures undergoing large deformations during the design process by the topology optimization method (TOM) based on the finite element method. During the design process, the nonlinear elastic behavior of the structure is based on exact kinematics. The material model applied in the TOM is based on the solid isotropic microstructure with penalization (SIMP) approach. No void elements are deleted, and all internal forces of the nodes surrounding the void elements are considered during the nonlinear equilibrium solution. The distribution of design variables is solved through the method of moving asymptotes, in which the sensitivity of the objective function is obtained directly. In addition, a continuation function and a nonlinear projection function are invoked to obtain a checkerboard-free and mesh-independent design. 2D examples under both plane-strain and plane-stress assumptions are presented and compared. The problem of instability is overcome by adopting a polyconvex constitutive model in conjunction with a suggested relaxation function to stabilize excessively distorted elements. The exact tangent stiffness matrix is used. The optimal topology results are compared to results obtained using the classical Saint Venant–Kirchhoff constitutive law, and strong differences are found.
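As a point of reference, the SIMP-type interpolation mentioned above penalizes intermediate densities so that the optimizer is driven toward solid/void designs; a minimal sketch of that interpolation is given below. The penalty exponent and stiffness values are generic assumptions, not the settings used in the paper.

    import numpy as np

    def simp_stiffness(rho, E0=1.0, Emin=1e-9, p=3.0):
        """SIMP interpolation: effective Young's modulus for an element of density rho in [0, 1].
        Intermediate densities are penalized (p > 1), making 'gray' material structurally
        inefficient, so the optimizer favors near-0/1 designs. Emin keeps the stiffness matrix
        non-singular for void elements, which are kept in the mesh rather than deleted."""
        return Emin + rho**p * (E0 - Emin)

    rho = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    print(simp_stiffness(rho))   # note how rho = 0.5 yields only ~12.5% of the solid stiffness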
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network, the wireless sensor network (WSN). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information towards the data retrieval point); the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical conditions of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and for each of them proposes a solution to the related problems, taking this trade-off into account. The first scenario considers a network with a high number of nodes deployed in a given geographical area without detailed planning, which have to transmit data towards a coordinator node, called the sink, that we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a distant receiver. It is assumed that each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise. The receiver onboard the UAV must be able to fuse the weak and noisy signals in a coherent way to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread-spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of the simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the need for simple nodes (to this end, the computational complexity is moved to the receiver onboard the UAV) and the importance of guaranteeing high energy efficiency of the network, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both of these performance metrics are evaluated taking into consideration the energy efficiency of the network. The second scenario considers the use of a chain network for the detection of fires by using nodes that have the double function of sensors and routers. The first function concerns the monitoring of a temperature parameter, which allows a local binary decision on the absence/presence of a target (fire) to be taken.
The second function is that each node receives the decision made by the previous node of the chain, compares it with the decision derived from its own observation of the phenomenon, and transmits the final result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define fusion rules for each node that summarize the local observations and the decisions of the previous nodes into a final decision, which is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application has been implemented and tested in a six-month field experiment.
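As a purely illustrative sketch of the serial (chain) distributed-detection idea described above, the code below lets each node fuse the decision received from the previous node with its own temperature reading by biasing its local threshold; the actual fusion rules studied in the thesis are not reproduced, and all thresholds and noise levels are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)

    def chain_decision(temps, base_thresh=45.0, bias=2.0):
        """Serial (chain) distributed detection, illustrative rule only: each node applies a
        threshold test to its own temperature reading, but the threshold is lowered when the
        previous node reported 'fire' and raised otherwise, so the incoming decision biases
        the local one. The decision leaving the last node is delivered to the sink."""
        decision = 0  # decision entering the chain (0 = fire absent, 1 = fire present)
        for t in temps:
            thresh = base_thresh - bias if decision == 1 else base_thresh + bias
            decision = int(t > thresh)
        return decision

    # Toy chain of 10 nodes: ambient readings vs. readings near a fire (degrees Celsius)
    ambient = rng.normal(25, 3, 10)
    near_fire = rng.normal(60, 5, 10)
    print(chain_decision(ambient), chain_decision(near_fire))   # expected: 0 and 1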
Abstract:
The theory of the 3D multipole probability tomography method (3D GPT) to image source poles, dipoles, quadrupoles and octopoles of a geophysical vector or scalar field dataset is developed. A geophysical dataset is assumed to be the response of an aggregation of poles, dipoles, quadrupoles and octopoles. These physical sources are used to reconstruct, without a priori assumptions, the most probable position and shape of the true buried geophysical sources, by determining the location of their centres and of the critical points of their boundaries, such as corners, wedges and vertices. This theory is then adapted to the geoelectrical, gravity and self-potential methods. A few synthetic examples using simple geometries and three field examples are discussed in order to demonstrate the notably enhanced resolution power of the new approach. First, the application to a field example related to a dipole–dipole geoelectrical survey carried out in the archaeological park of Pompei is presented. The survey was aimed at recognizing remains of the ancient Roman urban network, including roads, squares and buildings, buried under the thick pyroclastic cover that fell during the 79 AD Vesuvius eruption. The revealed anomaly structures are ascribed to well-preserved remnants of some aligned walls of Roman edifices, buried and partially destroyed by the 79 AD Vesuvius pyroclastic fall. Then, a field example related to a gravity survey carried out in the volcanic area of Mount Etna (Sicily, Italy) is presented, aimed at imaging as accurately as possible the differential mass density structure within the first few km of depth inside the volcanic apparatus. An assemblage of vertical prismatic blocks appears to be the most probable gravity model of the Etna apparatus within the first 5 km of depth below sea level. Finally, an experimental SP dataset collected in the Mt. Somma-Vesuvius volcanic district (Naples, Italy) is elaborated in order to define the location and shape of the sources of two SP anomalies of opposite sign detected in the northwestern sector of the surveyed area. The modelled sources are interpreted as the polarization state induced by an intense hydrothermal convective flow mechanism within the volcanic apparatus, from the free surface down to about 3 km of depth b.s.l.
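In probability tomography of this kind, the source occurrence probability at a trial point is essentially a normalized cross-correlation between the observed field and the theoretical field of an elementary source placed at that point; the sketch below illustrates the idea for a single source pole of a scalar field along a 1D profile. The kernel, grid, and synthetic data are simplified assumptions, not the multipole formulation of the paper.

    import numpy as np

    # Synthetic surface data: anomaly from a buried point source at x = 12 m, depth 5 m (assumed 1/r kernel)
    x = np.linspace(0.0, 30.0, 121)
    def point_source_field(x_obs, x_src, z_src):
        return 1.0 / np.sqrt((x_obs - x_src)**2 + z_src**2)
    data = 3.0 * point_source_field(x, 12.0, 5.0) + np.random.default_rng(3).normal(0, 0.01, x.size)

    # Occurrence probability: normalized cross-correlation between the data and the scanner
    # function of a trial elementary source, evaluated on a grid of trial positions (values in [-1, 1])
    def occurrence_probability(x_trial, z_trial):
        scanner = point_source_field(x, x_trial, z_trial)
        return np.sum(data * scanner) / np.sqrt(np.sum(data**2) * np.sum(scanner**2))

    xs, zs = np.meshgrid(np.linspace(5, 25, 41), np.linspace(1, 10, 19))
    eta = np.vectorize(occurrence_probability)(xs, zs)
    i, j = np.unravel_index(np.argmax(eta), eta.shape)
    print(f"most probable source near x = {xs[i, j]:.1f} m, z = {zs[i, j]:.1f} m, eta = {eta[i, j]:.2f}")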
Abstract:
Particle concentration is a principal factor affecting the erosion rate of solid surfaces under particle impact, such as pipe bends in pneumatic conveyors; it is well known that a reduction in the specific erosion rate occurs at high particle concentrations, a phenomenon referred to as the “shielding effect”. The cause of shielding is believed to be the increased likelihood of inter-particle collisions, the high collision probability between incoming and rebounding particles reducing the frequency and severity of particle impacts on the target surface. In this study, the effects of particle concentration on the erosion of a mild steel bend surface have been investigated in detail using three different particulate materials on an industrial-scale pneumatic conveying test rig. The materials were chosen so that two had the same particle density but very different particle sizes, whereas two had very similar particle sizes but very different particle densities. The experimental results confirm the shielding effect due to high particle concentration and show that the particle density has a far more significant influence than the particle size on the magnitude of the shielding effect. A new method of correcting for the change in erosivity of the particles in repeated handling, to take this factor out of the data, has been established and appears to be successful. Moreover, a novel empirical model of the shielding effect has been used, in terms of an erosion resistance that appears to decrease linearly as the particle concentration decreases. With the model it is possible to find the specific erosion rate as the particle concentration tends to zero, and conversely to predict how the specific erosion rate changes at finite values of particle concentration; this is critical to enable component life to be predicted from erosion tester results, as the variation of the shielding effect with concentration is different in these two scenarios. In addition, a previously unreported phenomenon has been recorded: a particulate material whose erosivity steadily increased during repeated impacts.
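A minimal numerical illustration of the empirical shielding model described above: erosion resistance (taken here as the reciprocal of the specific erosion rate) is fitted as a linear function of particle concentration, the intercept giving the specific erosion rate extrapolated to zero concentration. All data values below are invented for illustration.

    import numpy as np

    # Invented data: particle concentration (kg particles / kg air) vs. specific erosion rate (mg/kg)
    concentration = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    erosion_rate = np.array([4.1, 3.6, 2.9, 2.1, 1.3])

    # Empirical shielding model: erosion resistance (1 / erosion rate) varies linearly with concentration
    resistance = 1.0 / erosion_rate
    slope, intercept = np.polyfit(concentration, resistance, 1)

    # Intercept -> resistance at zero concentration, i.e. the unshielded specific erosion rate
    print(f"specific erosion rate at zero concentration: {1.0 / intercept:.2f} mg/kg")
    # Prediction at a finite concentration, e.g. 3 kg/kg
    print(f"predicted erosion rate at 3 kg/kg: {1.0 / (intercept + slope * 3.0):.2f} mg/kg")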
Abstract:
The proton-nucleus elastic scattering at intermediate energies is a well-established method for the investigation of the nuclear matter distribution in stable nuclei and was recently applied also to the investigation of radioactive nuclei using the method of inverse kinematics. In the current experiment, the differential cross sections for proton elastic scattering on the isotopes $^{7,9,10,11,12,14}$Be and $^8$B were measured. The experiment was performed using the fragment separator at GSI, Darmstadt to produce the radioactive beams. The main part of the experimental setup was the time projection ionization chamber IKAR, which was simultaneously used as hydrogen target and as a detector for the recoil protons. Auxiliary detectors for projectile tracking and isotope identification were also installed. As results from the experiment, the absolute differential cross sections d$\sigma$/d$t$ as a function of the four-momentum transfer $t$ were obtained. In this work the differential cross sections for elastic p-$^{12}$Be, p-$^{14}$Be and p-$^{8}$B scattering at low $t$ ($t \leq$~0.05~(GeV/c)$^2$) are presented. The measured cross sections were analyzed within the Glauber multiple-scattering theory using different density parameterizations, and the nuclear matter density distributions and radii of the investigated isotopes were determined. The analysis of the differential cross section for the isotope $^{14}$Be shows that a good description of the experimental data is obtained when density distributions consisting of separate core and halo components are used. The determined {\it rms} matter radius is $3.11 \pm 0.04 \pm 0.13$~fm. In the case of the $^{12}$Be nucleus the results showed an extended matter distribution as well. For this nucleus a matter radius of $2.82 \pm 0.03 \pm 0.12$~fm was determined. An interesting result is that the free $^{12}$Be nucleus behaves differently from the core of $^{14}$Be and is much more extended than it. The data were also compared with theoretical densities calculated within the FMD and the few-body models. In the case of $^{14}$Be, the calculated cross sections describe the experimental data well, while in the case of $^{12}$Be there are discrepancies in the region of high momentum transfer. Preliminary experimental results for the isotope $^8$B are also presented. An extended matter distribution was obtained (though much more compact compared to the neutron halos). A proton halo structure was observed for the first time with the proton elastic scattering method. The deduced matter radius is $2.60 \pm 0.02 \pm 0.26$~fm. The data were compared with microscopic calculations in the frame of the FMD model and reasonable agreement was observed. The results obtained in the present analysis are in most cases consistent with previous experimental studies of the same isotopes with different experimental methods (total interaction and reaction cross section measurements, momentum distribution measurements). For future investigation of the structure of exotic nuclei a universal detector system, EXL, is being developed. It will be installed at the NESR at the future FAIR facility, where higher intensity beams of radioactive ions are expected. The use of storage ring techniques provides high luminosity and low-background experimental conditions. Results from the feasibility studies of the EXL detector setup, performed at the present ESR storage ring, are presented.
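As an aside on how a core-plus-halo density parameterization translates into an rms matter radius, the sketch below builds a toy density from two Gaussians normalized to assumed core and halo nucleon numbers and integrates it numerically; the widths and nucleon numbers are illustrative assumptions, not the parameterizations fitted in the analysis.

    import numpy as np

    def gaussian_density(r, a, N):
        """Gaussian matter density of width parameter a (fm), normalized to N nucleons:
        the integral of rho * 4*pi*r^2 dr equals N."""
        return N / (np.pi**1.5 * a**3) * np.exp(-(r / a)**2)

    # Toy core (12 nucleons, narrow) + halo (2 nucleons, wide) density; widths are assumed values
    r = np.linspace(0.0, 25.0, 5001)
    dr = r[1] - r[0]
    rho = gaussian_density(r, 2.0, 12) + gaussian_density(r, 3.5, 2)

    norm = np.sum(rho * 4 * np.pi * r**2) * dr           # should recover ~14 nucleons
    r_rms = np.sqrt(np.sum(r**2 * rho * 4 * np.pi * r**2) * dr / norm)
    print(f"nucleons: {norm:.2f}, rms matter radius of the toy density: {r_rms:.2f} fm")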
Abstract:
Analyses of low density lipoprotein receptor-related protein 1 (LRP1) mutant mouse embryonic fibroblasts (MEFs) generated from LRP1 knock-in mice revealed that inefficient maturation and premature proteasomal degradation of immature LRP1 cause the early embryonic lethality of NPxY1 and NPxY1+2 mutant mice. In MEFs, NPxY2 mutant LRP1 showed efficient maturation but, as expected, decreased endocytosis. The single proximal NPxY1 and the double NPxY1+2 mutants were unable to reach the cell surface as an endocytic receptor due to premature degradation. In conclusion, the proximal NPxY1 motif is essential for early sorting steps in the biosynthesis of mature LRP1. The viable NPxY2 mouse was used to provide genetic evidence for LRP1-mediated amyloid-β (Aβ) transport across the blood-brain barrier (BBB). Here, we show that primary mouse brain capillary endothelial cells (pMBCECs) express functionally active LRP1. Moreover, we demonstrate that LRP1 mediates [125I]-Aβ1-40 transcytosis across pMBCECs in both directions, whereas no role for LRP1-mediated Aβ degradation was detected. Aβ transport across pMBCECs generated from NPxY2 knock-in mice revealed a reduced Aβ clearance in both directions compared to WT-derived pMBCECs. Finally, we conclude that LRP1 is a bona fide receptor involved in bidirectional transcytosis of Aβ across the BBB.
Abstract:
A Swiss-specific FRAX model was developed. Patient profiles with an increased probability of fracture beyond the currently accepted reimbursement thresholds for bone mineral density (BMD) measurement by dual X-ray absorptiometry (DXA) and for osteoporosis treatment were identified.
Abstract:
The generalized failure rate of a continuous random variable has demonstrable importance in operations management. If the valuation distribution of a product has an increasing generalized failure rate (that is, the distribution is IGFR), then the associated revenue function is unimodal, and when the generalized failure rate is strictly increasing, the global maximum is uniquely specified. The assumption that the distribution is IGFR is thus useful and frequently held in recent pricing, revenue, and supply chain management literature. This note contributes to the IGFR literature in several ways. First, it investigates the prevalence of the IGFR property for the left and right truncations of valuation distributions. Second, we extend the IGFR notion to discrete distributions and contrast it with the continuous distribution case. The note also addresses two errors in the previous IGFR literature. Finally, for future reference, we analyze all common (continuous and discrete) distributions for the prevalence of the IGFR property, and derive and tabulate their generalized failure rates.
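For reference, the generalized failure rate of a random variable with density f and CDF F is g(x) = x f(x)/(1 − F(x)), i.e. the ordinary failure rate scaled by x; the sketch below numerically checks the IGFR property and the resulting unimodal revenue function R(p) = p(1 − F(p)) for one example distribution. The choice of distribution and grid is an illustrative assumption.

    import numpy as np
    from scipy import stats

    dist = stats.weibull_min(c=2.0)          # example valuation distribution (Weibull, shape 2)
    x = np.linspace(0.01, 3.0, 300)

    # Generalized failure rate g(x) = x * f(x) / (1 - F(x))
    g = x * dist.pdf(x) / dist.sf(x)
    print("IGFR (g non-decreasing on the grid):", bool(np.all(np.diff(g) >= 0)))

    # Revenue function R(p) = p * P(valuation >= p); the IGFR property implies it is unimodal,
    # and a strictly increasing g gives a uniquely specified global maximizer
    revenue = x * dist.sf(x)
    print(f"revenue-maximizing price ~ {x[np.argmax(revenue)]:.2f}")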