850 results for Multi-Higgs Models
Abstract:
Water-saturated debris flows are among the most destructive mass movements. Their complex nature presents a challenge for quantitative description and modeling. To improve understanding of the dynamics of these flows, it is important to seek a simplified dynamic system underlying their behavior. Models currently in use to describe the motion of debris flows employ depth-averaged equations of motion, typically assuming negligible effects from vertical acceleration. However, in many cases debris flows experience significant vertical acceleration as they move across irregular surfaces, and it has been proposed that friction associated with vertical forces and liquefaction merits inclusion in any comprehensive mechanical model. The intent of this work is to determine the effect of vertical acceleration through a series of laboratory experiments designed to simulate debris flows, thereby testing a recent debris-flow model experimentally. In the experiments, a mass of water-saturated sediment is released suddenly from a holding container, and parameters including rate of collapse, pore-fluid pressure, and bed load are monitored. The experiments are simplified to an axial geometry so that variables act solely in the vertical dimension. Steady-state equations for inferring the motion of the moving sediment mass are not sufficient to accurately model the independent solid and fluid constituents in these experiments. The model developed in this work more accurately predicts the bed-normal stress of a saturated sediment mass in motion and illustrates the importance of acceleration and deceleration.
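The central claim above, that vertical acceleration changes the bed-normal stress of the moving mass, can be illustrated with a back-of-the-envelope rigid-column approximation. This is a minimal sketch under assumed values, not the model developed in the thesis:

```python
# Minimal sketch: bed-normal stress of a saturated sediment column under
# vertical acceleration, using a rigid-column approximation. The density
# and column height are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def bed_normal_stress(density, height, vertical_accel):
    """Total bed-normal stress (Pa) for a column of bulk density `density`
    (kg/m^3) and thickness `height` (m). `vertical_accel` (m/s^2) is the
    upward acceleration of the mass; the deceleration of a falling mass on
    impact enters as a positive value."""
    return density * height * (G + vertical_accel)

# A 0.5 m column of saturated sediment (~2000 kg/m^3):
static = bed_normal_stress(2000, 0.5, 0.0)    # stress at rest
impact = bed_normal_stress(2000, 0.5, 2 * G)  # tripled during a 2g deceleration
```

Even a modest deceleration multiplies the bed-normal stress, which is why depth-averaged models that neglect vertical acceleration can misestimate basal friction.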
Abstract:
Target localization has a wide range of military and civilian applications in wireless mobile networks. Examples include battlefield surveillance, emergency 911 (E911), traffic alert, habitat monitoring, resource allocation, routing, and disaster mitigation. Basic localization techniques include time-of-arrival (TOA), direction-of-arrival (DOA) and received-signal-strength (RSS) estimation. Techniques based on TOA and DOA are very sensitive to the availability of a line-of-sight (LOS) path, the direct path between the transmitter and the receiver. If LOS is not available, i.e., in non-line-of-sight (NLOS) conditions, TOA and DOA estimation errors create a large localization error. In order to reduce NLOS localization error, NLOS identification, mitigation, and localization techniques have been proposed. This research investigates NLOS identification for multiple-antenna radio systems. The techniques proposed in the literature mainly use one antenna element to enable NLOS identification. When a single antenna is utilized, only limited features of the wireless channel can be exploited to identify NLOS situations. However, in DOA-based wireless localization systems, multiple antenna elements are available. In addition, multiple-antenna technology has been adopted in many widely used wireless systems such as wireless LAN 802.11n and WiMAX 802.16e, which are good candidates for localization-based services. In this work, the potential of spatial channel information for high-performance NLOS identification is investigated. Considering narrowband multiple-antenna wireless systems, two NLOS identification techniques are proposed. Here, the use of the spatial correlation of channel coefficients across antenna elements as a metric for NLOS identification is proposed. In order to obtain the spatial correlation, a new multiple-input multiple-output (MIMO) channel model based on rough surface theory is proposed. 
This model can be used to compute the spatial correlation between an antenna pair separated by any distance. In addition, a new NLOS identification technique that exploits the statistics of the phase difference across two antenna elements is proposed. This technique assumes the phases received across two antenna elements are uncorrelated. This assumption is validated based on the well-known circular and elliptic scattering models. Next, it is proved that the channel Rician K-factor is a function of the phase-difference variance. Exploiting the Rician K-factor, techniques to identify NLOS scenarios are proposed. Considering wideband multiple-antenna wireless systems which use MIMO orthogonal frequency-division multiplexing (OFDM) signaling, space-time-frequency channel correlation is exploited to attain NLOS identification in time-varying, frequency-selective and space-selective radio channels. Novel NLOS identification measures based on space, time and frequency channel correlation are proposed and their performance is evaluated. These measures achieve better NLOS identification performance than those that use only space, time or frequency.
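The spatial-correlation metric at the heart of the abstract can be sketched in a few lines: estimate the magnitude of the complex correlation between channel coefficients at two antenna elements and compare it against a threshold. The channel snapshots and the 0.7 threshold below are illustrative assumptions, not values from the thesis:

```python
# Illustrative sketch of a spatial-correlation NLOS metric: compute the
# sample correlation magnitude between complex channel coefficients observed
# at two antenna elements. Rich scattering (NLOS) decorrelates the antennas;
# a dominant LOS path keeps them strongly correlated. The threshold value
# is a made-up example.
import cmath

def spatial_correlation(h1, h2):
    """Sample correlation magnitude between two equal-length lists of
    complex channel coefficients (one snapshot per entry)."""
    n = len(h1)
    m1 = sum(h1) / n
    m2 = sum(h2) / n
    num = sum((a - m1) * (b - m2).conjugate() for a, b in zip(h1, h2))
    den = (sum(abs(a - m1) ** 2 for a in h1)
           * sum(abs(b - m2) ** 2 for b in h2)) ** 0.5
    return abs(num) / den

def classify_nlos(h1, h2, threshold=0.7):
    """Declare NLOS when the antennas have decorrelated."""
    return spatial_correlation(h1, h2) < threshold

# LOS-like snapshots: the second antenna sees a scaled copy of the first.
los = [cmath.exp(1j * 0.1 * k) for k in range(50)]
los2 = [0.9 * h for h in los]
```

In practice the threshold would be set from the rough-surface channel model the thesis develops, trading false alarms against missed NLOS detections.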
Abstract:
The objective of this doctoral research is to investigate internal frost damage due to crystallization pore pressure in porous cement-based materials by developing computational and experimental characterization tools. As an essential component of the U.S. infrastructure system, the durability of concrete has a significant impact on maintenance costs. In cold climates, freeze-thaw damage is a major issue affecting the durability of concrete. The deleterious effects of freeze-thaw cycles depend on the microscale characteristics of concrete, such as the pore sizes and pore distribution, as well as on the environmental conditions. Recent theories attribute internal frost damage of concrete to crystallization pore pressure in cold environments. The pore structure has a significant impact on the freeze-thaw durability of cement/concrete samples. Scanning electron microscopy (SEM) and transmission X-ray microscopy (TXM) techniques were applied to characterize freeze-thaw damage within the pore structure. In the microscale pore system, the crystallization pressures at sub-cooling temperatures were calculated using an interface energy balance with thermodynamic analysis. Multi-phase Extended Finite Element Modeling (XFEM) and bilinear Cohesive Zone Modeling (CZM) were developed to simulate the internal frost damage of heterogeneous cement-based material samples. The fracture simulations with these two techniques were validated by comparing the predicted fracture behavior with the damage captured from compact tension (CT) and single-edge notched beam (SEB) bending tests. The study applied the developed computational tools to simulate the internal frost damage caused by ice crystallization with two-dimensional (2-D) SEM and three-dimensional (3-D) reconstructed SEM and TXM digital samples. The pore pressure calculated from the thermodynamic analysis was used as input for the model simulations. 
The 2-D and 3-D bilinear CZM predicted crack initiation and propagation within the cement paste microstructure. The favorably predicted crack paths in concrete/cement samples indicate that the developed bilinear CZM techniques are able to capture crack nucleation and propagation in cement-based material samples with multiple phases and associated interfaces. Comparing the computational predictions with the actual damaged samples also indicates that ice crystallization pressure is the main mechanism for internal frost damage in cementitious materials.
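The abstract's crystallization pressures come from a full interface-energy balance; as a rough illustration of the scale involved, a common first-order relation ties the pressure to the sub-cooling via the volumetric entropy of fusion. The relation and the entropy constant for ice (~1.2 MPa/K) are quoted here as assumptions for a sketch, not the thesis calculation:

```python
# Back-of-the-envelope sketch of crystallization pressure versus sub-cooling,
# using the first-order relation p ~ dS_fv * (T_m - T). The value of the
# volumetric entropy of fusion of ice is an approximate, illustrative number.

DELTA_S_FV = 1.2e6  # volumetric entropy of fusion of ice, Pa/K (approximate)
T_MELT = 273.15     # bulk melting point of water, K

def crystallization_pressure(temperature_k):
    """Approximate crystallization pressure (Pa) for pore ice at a
    temperature below the bulk melting point; zero above it."""
    return max(0.0, DELTA_S_FV * (T_MELT - temperature_k))

# 10 K of sub-cooling gives a pressure on the order of 12 MPa, comparable
# to the tensile strength of cement paste:
p = crystallization_pressure(263.15)
```

This order-of-magnitude estimate is what makes the crystallization-pressure mechanism plausible as the driver of the cracking the CZM simulations reproduce.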
Abstract:
Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as light fields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.
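The core idea of combining samples over time to see past moving occluders can be illustrated with a per-pixel temporal median, a standard background-reconstruction trick. This toy sketch is only an analogy; the lag-camera pipeline in the paper is more general, since the camera also moves through space:

```python
# Toy illustration: recover a static scene behind moving occluders by taking
# the per-pixel median over time. Any pixel occluded in only a minority of
# frames recovers the scene behind the occluder.
from statistics import median

def recover_background(frames):
    """frames: list of equal-sized 2-D grids (lists of lists of pixel
    intensities). Returns a grid whose pixels are medians over time."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

# A 1x3 scene of intensity 5; an occluder (value 99) covers a different
# pixel in each of three frames:
frames = [[[99, 5, 5]], [[5, 99, 5]], [[5, 5, 99]]]
background = recover_background(frames)  # -> [[5, 5, 5]]
```

Because each pixel is unoccluded in a majority of frames, the median discards the occluder entirely, which is the same majority-over-time reasoning the lag camera exploits.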
Abstract:
This voluminous book, which draws on almost 1000 references, provides an important theoretical base for practice. After an informative introduction about models, maps and metaphors, Forte provides an impressive presentation of several perspectives for use in practice: applied ecological theory, applied system theory, applied biology, applied cognitive science, applied psychodynamic theory, applied behaviourism, applied symbolic interactionism, applied social role theory, applied economic theory, and applied critical theory. Finally, he completes his book with a chapter on “Multi theory practice and routes to integration.”
Abstract:
Understanding suicide bombing entails studying the phenomenon on three different dimensions: the suicide bomber, the terrorist organization, and the community from which suicide bombings emerge. Political and social psychology allow us to establish the reciprocal relationships that underpin the exchanges between the three dimensions. This method increases our theoretical understanding of suicide bombing by moving away from the unidimensional models that have previously dominated the terrorism literature.
Abstract:
INTRODUCTION Low systolic blood pressure (SBP) is an important secondary insult following traumatic brain injury (TBI), but its exact relationship with outcome is not well characterised. Although an SBP of <90 mmHg represents the threshold for hypotension in consensus TBI treatment guidelines, recent studies suggest redefining hypotension at higher levels. This study therefore aimed to fully characterise the association between admission SBP and mortality to further inform resuscitation endpoints. METHODS We conducted a multicentre cohort study using data from the largest European trauma registry. Consecutive adult patients with AIS head scores >2 admitted directly to specialist neuroscience centres between 2005 and July 2012 were studied. Multilevel logistic regression models were developed to examine the association between admission SBP and 30-day inpatient mortality. Models were adjusted for confounders, including age and severity of injury, and accounted for differential quality of hospital care. RESULTS 5057 patients were included in complete case analyses. Admission SBP demonstrated a smooth U-shaped association with outcome in bivariate analysis, with increasing mortality at both lower and higher values and no evidence of any threshold effect. Adjusting for confounding slightly attenuated the association between mortality and SBP at levels <120 mmHg, and abolished the relationship for higher SBP values. Case-mix-adjusted odds of death were 1.5 times greater at <120 mmHg, doubled at <100 mmHg, tripled at <90 mmHg, and six times greater at SBP <70 mmHg (p<0.01). CONCLUSIONS These findings indicate that TBI studies should model SBP as a continuous variable and may suggest that current TBI treatment guidelines, which use a cut-off for hypotension at SBP <90 mmHg, should be reconsidered.
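The case-mix-adjusted odds ratios quoted in the results can be encoded as a simple threshold lookup. The band boundaries below are my reading of the abstract's figures relative to a reference odds of 1.0 at SBP >= 120 mmHg, for illustration only:

```python
# The adjusted odds ratios reported in the abstract ("1.5 times greater at
# <120 mmHg, doubled at <100, tripled at <90, six times greater at <70"),
# encoded as a threshold lookup. Band boundaries are an interpretation of
# the text, not the fitted regression model.

def adjusted_odds_ratio(sbp_mmhg):
    """Approximate case-mix-adjusted odds ratio of 30-day inpatient
    mortality for a given admission SBP (mmHg)."""
    if sbp_mmhg < 70:
        return 6.0
    if sbp_mmhg < 90:
        return 3.0
    if sbp_mmhg < 100:
        return 2.0
    if sbp_mmhg < 120:
        return 1.5
    return 1.0
```

Note the study's point: risk is already elevated well above the guideline 90 mmHg cut-off, which is why the authors argue for modelling SBP as a continuous variable.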
Abstract:
This report summarizes the results of the activities in 2012 and the first half of 2013 of the LHC Higgs Cross Section Working Group. The main goal of the working group was to present the state of the art of Higgs physics at the LHC, integrating all new results that have appeared in the last few years. This report follows the first working group report, Handbook of LHC Higgs Cross Sections: 1. Inclusive Observables (CERN-2011-002), and the second working group report, Handbook of LHC Higgs Cross Sections: 2. Differential Distributions (CERN-2012-002). After the discovery of a Higgs boson at the LHC in mid-2012, this report focuses on refined predictions of Standard Model (SM) Higgs phenomenology around the experimentally observed mass of 125-126 GeV, refined predictions for heavy SM-like Higgs bosons, predictions in the Minimal Supersymmetric Standard Model, and first steps toward going beyond these models. The other main focus is the extraction of the characteristics and properties of the newly discovered particle, such as its couplings to SM particles, spin, and CP quantum numbers.
Abstract:
The early phase of psychotherapy has been regarded as a sensitive period in the unfolding of psychotherapy leading to positive outcomes. However, there is disagreement about the degree to which early (especially relationship-related) session experiences predict outcome over and above initial levels of distress and early response to treatment. The goal of the present study was to simultaneously examine outcome at post-treatment as a function of (a) intake symptom and interpersonal distress as well as early change in well-being and symptoms, (b) the patient's early session experiences, (c) the therapist's early session experiences/interventions, and (d) their interactions. The data of 430 psychotherapy completers treated by 151 therapists were analyzed using hierarchical linear models. Results indicate that early positive intra- and interpersonal session experiences, as reported by patients and therapists after the sessions, explained 58% of the variance of a composite outcome measure, taking intake distress and early response into account. All predictors (other than problem-activating therapist interventions) contributed to later treatment outcomes when entered as single predictors. However, the multi-predictor analyses indicated that interpersonal distress at intake as well as the early interpersonal session experiences of patients and therapists remained robust predictors of outcome. The findings underscore that, early in therapy, therapists (and their supervisors) need to understand and monitor multiple interconnected components simultaneously.
Abstract:
Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers; they are heterogeneous by nature and require adjustment, composition and integration. Current static, predefined cloud integration architectures and models can meet specific application requirements only with difficulty. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives into ensembles of cloud services. Our model is user-centric and focuses on specific application execution requirements by leveraging incubating virtualization techniques. From a cloud provider perspective, the ecosystem provides more insight into how best to customize the offerings of virtualized resources.
Abstract:
Extremes of electrocardiographic QT interval are associated with increased risk for sudden cardiac death (SCD); thus, identification and characterization of genetic variants that modulate QT interval may elucidate the underlying etiology of SCD. Previous studies have revealed an association between a common genetic variant in NOS1AP and QT interval in populations of European ancestry, but this finding has not been extended to other ethnic populations. We sought to characterize the effects of NOS1AP genetic variants on QT interval in the multi-ethnic population-based Dallas Heart Study (DHS, n = 3,072). The SNP most strongly associated with QT interval in previous samples of European ancestry, rs16847548, was the most strongly associated in White (P = 0.005) and Black (P = 3.6 × 10⁻⁵) participants, with the same direction of effect in Hispanics (P = 0.17), and further showed a significant SNP × sex interaction (P = 0.03). A second SNP, rs16856785, uncorrelated with rs16847548, was also associated with QT interval in Blacks (P = 0.01), with qualitatively similar results in Whites and Hispanics. In a previously genotyped cohort of 14,107 White individuals drawn from the combined Atherosclerosis Risk in Communities (ARIC) and Cardiovascular Health Study (CHS) cohorts, we validated both the second locus at rs16856785 (P = 7.63 × 10⁻⁸) and the sex interaction with rs16847548 (P = 8.68 × 10⁻⁶). These data extend the association of genetic variants in NOS1AP with QT interval to a Black population, with similar trends, though not statistically significant at P < 0.05, in Hispanics. In addition, we identify a strong sex interaction and a second independent site within NOS1AP associated with QT interval. These results highlight the consistent and complex role of NOS1AP genetic variants in modulating QT interval.
Abstract:
Decadal-to-century-scale trends for a range of marine environmental variables in the upper mesopelagic layer (UML, 100–600 m) are investigated using results from seven Earth System Models forced by a high greenhouse gas emission scenario. The models as a class reproduce the observation-based distributions of oxygen (O2) and carbon dioxide (CO2), although major mismatches between observation-based and simulated values remain for individual models. By year 2100 all models project an increase in sea surface temperature (SST) of between 2 °C and 3 °C, and a decrease in the pH and in the saturation state of water with respect to calcium carbonate minerals in the UML. A decrease in the total ocean inventory of dissolved oxygen of 2% to 4% is projected by the range of models. Projected O2 changes in the UML show a complex pattern with both increasing and decreasing trends, reflecting the subtle balance of competing factors such as circulation, production, remineralization, and temperature changes. Projected changes in the total volume of hypoxic and suboxic waters remain relatively small in all models. A widespread increase of CO2 in the UML is projected. The median of the CO2 distribution between 100 and 600 m shifts from 0.1–0.2 mol m−3 in year 1990 to 0.2–0.4 mol m−3 in year 2100, primarily as a result of the invasion of anthropogenic carbon from the atmosphere. The co-occurrence of changes in a range of environmental variables indicates the need to further investigate their synergistic impacts on marine ecosystems and Earth System feedbacks.
Abstract:
The responses of carbon dioxide (CO2) and other climate variables to an emission pulse of CO2 into the atmosphere are often used to compute the Global Warming Potential (GWP) and Global Temperature change Potential (GTP), to characterize the response timescales of Earth System models, and to build reduced-form models. In this carbon cycle-climate model intercomparison project, which spans the full model hierarchy, we quantify responses to emission pulses of different magnitudes injected under different conditions. The CO2 response shows the known rapid decline in the first few decades followed by a millennium-scale tail. For a 100 Gt-C emission pulse added to a constant CO2 concentration of 389 ppm, 25 ± 9% is still found in the atmosphere after 1000 yr; the ocean has absorbed 59 ± 12% and the land the remainder (16 ± 14%). The response in global mean surface air temperature is an increase by 0.20 ± 0.12 °C within the first twenty years; thereafter and until year 1000, temperature decreases only slightly, whereas ocean heat content and sea level continue to rise. Our best estimate for the Absolute Global Warming Potential, given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10−15 yr W m−2 per kg-CO2. This value very likely (5 to 95% confidence) lies within the range of (68 to 117) × 10−15 yr W m−2 per kg-CO2. Estimates for time-integrated response in CO2 published in the IPCC First, Second, and Fourth Assessment and our multi-model best estimate all agree within 15% during the first 100 yr. The integrated CO2 response, normalized by the pulse size, is lower for pre-industrial conditions, compared to present day, and lower for smaller pulses than larger pulses. In contrast, the response in temperature, sea level and ocean heat content is less sensitive to these choices. 
Although choices of pulse size, background concentration, and model lead to uncertainties, the most important and subjective choice in determining the AGWP of CO2 and the GWP is the time horizon.
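The AGWP calculation described above, the time-integrated CO2 response at the chosen horizon multiplied by the radiative efficiency, can be sketched with a multi-exponential impulse response function. The IRF coefficients and the per-kilogram radiative efficiency below are illustrative values in the style of the published multi-model fits, not numbers taken from this report:

```python
# Sketch of the AGWP calculation: integrate a multi-exponential CO2 impulse
# response function (IRF) to the horizon and multiply by the radiative
# efficiency per kg CO2. All constants are illustrative assumptions.
import math

# IRF(t) = A0 + sum_i a_i * exp(-t / tau_i), airborne fraction, t in years.
A0 = 0.2173
TERMS = [(0.2240, 394.4), (0.2824, 36.54), (0.2763, 4.304)]
RAD_EFF = 1.77e-15  # W m^-2 per kg CO2 (approximate)

def integrated_irf(horizon_yr):
    """Closed-form time integral of the IRF from 0 to the horizon (years)."""
    tail = sum(a * tau * (1.0 - math.exp(-horizon_yr / tau))
               for a, tau in TERMS)
    return A0 * horizon_yr + tail

def agwp(horizon_yr=100.0):
    """Absolute Global Warming Potential, in yr W m^-2 per kg CO2."""
    return integrated_irf(horizon_yr) * RAD_EFF
```

With these illustrative constants the 100-yr value lands near the abstract's best estimate of 92.5 × 10⁻¹⁵ yr W m⁻² per kg CO2; the constant A0 term is what makes the result grow without bound as the horizon lengthens, which is why the time horizon dominates the uncertainty.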
Abstract:
PURPOSE Computed tomography (CT) accounts for more than half of the total radiation exposure from medical procedures, which makes dose reduction in CT an effective means of reducing radiation exposure. We analysed the dose reduction that can be achieved with a new CT scanner [Somatom Edge (E)] that incorporates new developments in hardware (detector) and software (iterative reconstruction). METHODS We compared weighted volume CT dose index (CTDIvol) and dose-length product (DLP) values of 25 consecutive patients studied with non-enhanced standard brain CT on the new scanner and on each of two previous models: a 64-row multi-detector CT (MDCT) scanner (S64) and a 16-row MDCT scanner (S16). We analysed signal-to-noise and contrast-to-noise ratios in images from the three scanners, and three neuroradiologists performed a quality rating to analyse whether the dose reduction techniques still yield sufficient diagnostic quality. RESULTS The CTDIvol of scanner E was 41.5% and 36.4% less than the values of scanners S16 and S64, respectively; the DLP values were 40% and 38.3% less. All differences were statistically significant (p < 0.0001). Signal-to-noise and contrast-to-noise ratios were best in S64; these differences also reached statistical significance. Image analysis, however, showed "non-inferiority" of scanner E regarding image quality. CONCLUSIONS This first experience with the new scanner shows that new dose reduction techniques allow for up to 40% dose reduction while maintaining image quality at a diagnostically usable level.
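The percent reductions quoted in the results are plain ratios of the dose metrics. The CTDIvol values below are hypothetical numbers chosen only to show the arithmetic, not data from the study:

```python
# Percent dose reduction as reported in the abstract is a simple ratio:
# 100 * (reference - new) / reference. The dose values here are hypothetical.

def percent_reduction(reference, new):
    """Percent reduction of `new` relative to `reference` (same units)."""
    return 100.0 * (reference - new) / reference

ctdi_s16, ctdi_e = 60.0, 35.1  # mGy, hypothetical example values
r = percent_reduction(ctdi_s16, ctdi_e)  # 41.5, matching the quoted S16 figure
```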
Abstract:
Current models of embryological development focus on intracellular processes such as gene expression and protein networks, rather than on the complex relationship between subcellular processes and the collective cellular organization these processes support. We have explored this collective behavior in the context of neocortical development, by modeling the expansion of a small number of progenitor cells into a laminated cortex with layer- and cell-type-specific projections. The developmental process is steered by a formal language analogous to genomic instructions and takes place in a physically realistic three-dimensional environment. A common genome inserted into individual cells controls their individual behaviors and thereby gives rise to collective developmental sequences in a biologically plausible manner. The simulation begins with a single progenitor cell containing the artificial genome. This progenitor then gives rise, through a lineage of offspring, to distinct populations of neuronal precursors that migrate to form the cortical laminae. The precursors differentiate by extending dendrites and axons, which reproduce the experimentally determined branching patterns of a number of different neuronal cell types observed in the cat visual cortex. This result is the first comprehensive demonstration of the principles of self-construction whereby the cortical architecture develops. In addition, our model makes several testable predictions concerning cell migration and branching mechanisms.
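The idea of a shared genome steering lineage expansion can be sketched as a toy rule table that every cell interprets: a progenitor divides asymmetrically, precursors differentiate, and differentiated cells persist. The rules and cell-type names below are invented for illustration; the paper's formal language and 3-D physical environment are far richer:

```python
# Toy sketch of genome-steered lineage expansion: a common rule table (the
# "genome") maps each cell type to its offspring, and every cell interprets
# the same table. Rules and type names are illustrative assumptions.

GENOME = {
    "progenitor": ["progenitor", "precursor"],  # asymmetric division
    "precursor": ["neuron_L5", "neuron_L2_3"],  # differentiation step
}

def develop(cells, steps):
    """Apply the genome's division rules `steps` times. Cell types with no
    rule (differentiated neurons) persist unchanged."""
    for _ in range(steps):
        nxt = []
        for cell in cells:
            nxt.extend(GENOME.get(cell, [cell]))
        cells = nxt
    return cells

# Three rounds of development from a single progenitor yield a mixed
# population of progenitor, precursor, and differentiated neurons:
population = develop(["progenitor"], 3)
```

Even this caricature shows the key property the paper demonstrates at scale: identical instructions in every cell, plus local state (the cell's type), suffice to generate distinct, growing populations.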