935 results for Multiple Organ Failure
Abstract:
This paper presents a novel framework to further advance the recent trend of using query decomposition and high-order term relationships in query language modeling, which takes into account terms implicitly associated with different subsets of query terms. Existing approaches, most notably the language model based on the Information Flow method, are however unable to capture multiple levels of associations and also suffer from high computational overhead. In this paper, we propose to compute association rules from pseudo-feedback documents that are segmented into variable-length chunks via multiple sliding windows of different sizes. Extensive experiments have been conducted on various TREC collections, and our approach significantly outperforms a baseline Query Likelihood language model, the Relevance Model, and the Information Flow model.
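As an illustration only (hypothetical names and thresholds, not the authors' implementation), the core idea of segmenting pseudo-feedback documents with several window sizes and mining association rules between subsets of query terms and other terms might be sketched as:

```python
from collections import defaultdict
from itertools import combinations

def sliding_chunks(tokens, window_sizes=(8, 16, 32)):
    """Segment a token list into overlapping chunks using several window sizes."""
    for w in window_sizes:
        step = max(1, w // 2)  # 50% overlap, an illustrative choice
        for start in range(0, max(1, len(tokens) - w + 1), step):
            yield tokens[start:start + w]

def mine_associations(docs, query_terms, min_conf=0.3):
    """For every subset of query terms, count which other terms co-occur with it
    in a chunk, and keep rules subset -> term with confidence >= min_conf."""
    subset_count = defaultdict(int)
    joint_count = defaultdict(lambda: defaultdict(int))
    for doc in docs:                       # docs: lists of tokens (pseudo feedback)
        for chunk in sliding_chunks(doc):
            chunk_set = set(chunk)
            present = tuple(sorted(q for q in query_terms if q in chunk_set))
            for r in range(1, len(present) + 1):
                for subset in combinations(present, r):
                    subset_count[subset] += 1
                    for term in chunk_set - set(query_terms):
                        joint_count[subset][term] += 1
    rules = {}
    for subset, terms in joint_count.items():
        rules[subset] = {t: c / subset_count[subset]
                         for t, c in terms.items()
                         if c / subset_count[subset] >= min_conf}
    return rules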
Abstract:
Voltage rise is the main issue limiting the capacity of Low Voltage (LV) networks to accommodate more Renewable Energy (RE) sources. In addition, voltage drop at peak load periods is a significant power quality concern. This paper proposes a new robust voltage support strategy based on distributed coordination of multiple distribution static synchronous compensators (DSTATCOMs). The study focuses on LV networks with PV as the RE source for customers. The proposed approach is applied to a typical LV network, and its advantages are shown through comparison with other voltage control strategies.
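A toy sketch of one way such distributed coordination could look (all gains, limits and names are invented for illustration; this is not the paper's controller): each DSTATCOM adjusts its reactive-power injection from its local voltage error plus a consensus term shared with neighbouring units.

```python
import numpy as np

def distributed_voltage_support(v_meas, q, neighbours, v_ref=1.0,
                                k_local=2.0, k_consensus=0.5, q_max=0.5):
    """One control step: unit i updates its reactive-power output q[i] (p.u.)
    from its local voltage error and the average setpoint of its neighbours."""
    q_new = np.copy(q)
    for i, v in enumerate(v_meas):
        local = k_local * (v_ref - v)                 # inject Q when voltage sags, absorb when it rises
        nbrs = neighbours[i] if neighbours[i] else [i]  # fall back to self if isolated
        consensus = k_consensus * (np.mean([q[j] for j in nbrs]) - q[i])
        q_new[i] = np.clip(q[i] + local + consensus, -q_max, q_max)
    return q_new
```

In a closed loop, `v_meas` would be recomputed by a load-flow solver after each update; the clipping at `q_max` stands in for converter ratings.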
Abstract:
This paper addresses the topic of real-time decision making for autonomous city vehicles, i.e., the autonomous vehicles' ability to make appropriate driving decisions in city road traffic situations. The paper explains the overall control system architecture and the decision-making task decomposition, and focuses on how Multiple Criteria Decision Making (MCDM) is used in the process of selecting the most appropriate driving maneuver from the set of feasible ones. Experimental tests show that MCDM is suitable for this new application area.
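For illustration, a minimal weighted-sum MCDM step over a set of feasible maneuvers might look as follows (criteria, weights and maneuver names are hypothetical, not taken from the paper):

```python
def select_maneuver(feasible, weights):
    """Weighted-sum MCDM: score each feasible maneuver on normalised criteria
    (higher is better) and return the best one."""
    def score(m):
        return sum(weights[c] * m["criteria"][c] for c in weights)
    return max(feasible, key=score)

maneuvers = [
    {"name": "keep_lane", "criteria": {"safety": 0.9, "progress": 0.6, "comfort": 0.9}},
    {"name": "overtake",  "criteria": {"safety": 0.6, "progress": 0.9, "comfort": 0.5}},
    {"name": "stop",      "criteria": {"safety": 1.0, "progress": 0.0, "comfort": 0.7}},
]
weights = {"safety": 0.6, "progress": 0.3, "comfort": 0.1}
print(select_maneuver(maneuvers, weights)["name"])
```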
Abstract:
This paper presents a method that enables a mobile robot working in non-stationary environments to plan its path and localize within multiple map hypotheses simultaneously. The maps are generated using a long-term and short-term memory mechanism that ensures only persistent configurations in the environment are selected to create the maps. To evaluate the proposed method, experiments are conducted in an office environment. Compared to navigation systems that use only one map, our system produces superior path planning and navigation in a non-stationary environment where paths can be blocked periodically, a common scenario that poses significant challenges for typical planners.
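A rough sketch, under assumed interfaces, of how multiple map hypotheses could be weighted by observations and used for planning (this is not the authors' long-/short-term memory mechanism):

```python
def update_hypothesis_weights(hypotheses, observation, match_fn):
    """Re-weight map hypotheses by how well the current observation matches each map.
    match_fn(map, observation) -> non-negative score is an assumed interface."""
    scores = [match_fn(h["map"], observation) for h in hypotheses]
    total = sum(scores) or 1.0
    for h, s in zip(hypotheses, scores):
        h["weight"] = s / total
    return hypotheses

def plan_with_hypotheses(hypotheses, planner, start, goal):
    """Try to plan on maps in decreasing order of weight; skip maps where the
    path is blocked. planner(map, start, goal) -> path or None is assumed."""
    for h in sorted(hypotheses, key=lambda h: h["weight"], reverse=True):
        path = planner(h["map"], start, goal)
        if path is not None:
            return path, h
    return None, None
```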
Abstract:
The railhead is perhaps the most highly stressed civil infrastructure, owing to the passage of heavily loaded wheels over a very small contact patch. The stresses at the contact patch cause yielding and wear of the railhead material. Many theories exist for predicting these mechanisms in continuous rails; the corresponding process in discontinuous rails is comparatively under-researched. Discontinuous railhead edges fail through the accumulation of excessive plastic strain. This is a widely reported safety concern, as these edges form part of Insulated Rail Joints (IRJs) in the signalling track circuitry. Since Hertzian contact is not valid at a discontinuous edge, 3D finite element (3DFE) models of wheel contact at a railhead edge have been used in this research. Elastic–plastic material properties of the head-hardened rail steel have been determined experimentally through uniaxial monotonic tension tests and incorporated into a FE model of a cylindrical specimen subject to cyclic tension loading. The parameters required for the Chaboche kinematic hardening model have been determined from the stabilised hysteresis loops of the cyclic load simulation and implemented into the 3DFE model. The 3DFE predictions show that the plastic strain accumulation in the vicinity of the wheel contact at discontinuous railhead edges is governed by the contact due to the passage of wheels rather than by the magnitude of the loads the wheels carry. Therefore, to eliminate this failure mechanism, modification of the contact patch is essential; reducing the wheel load cannot solve this problem.
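For reference, the Chaboche kinematic hardening model named in this abstract evolves each backstress component according to the standard law, whose uniaxial integration over a half-cycle is what gets fitted to the stabilised hysteresis loops:

```latex
\dot{\boldsymbol{\alpha}}_i = \tfrac{2}{3}\,C_i\,\dot{\boldsymbol{\varepsilon}}^{p} - \gamma_i\,\boldsymbol{\alpha}_i\,\dot{p},
\qquad
\alpha_i = \nu\,\frac{C_i}{\gamma_i} + \left(\alpha_{i,0} - \nu\,\frac{C_i}{\gamma_i}\right) e^{-\nu\,\gamma_i\left(\varepsilon^{p}-\varepsilon^{p}_{0}\right)},
```

where ṗ is the accumulated plastic strain rate, ν = ±1 the loading direction, and the pairs (C_i, γ_i) are the material parameters identified from the stabilised loops mentioned above.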
Abstract:
The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs to demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first identifies Data Synchronization Error (DSE) as the most inherent factor amongst the uncertainties of SHM-oriented WSNs. The effects of this factor are then investigated on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), because both have been widely applied in the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are initially used as benchmark data after a certain level of noise is added, to account for the higher presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations have been run to generate multiple DSE-corrupted datasets and facilitate statistical analyses. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, the combination of preferred OMA techniques, together with the use of channel projection for the time-domain OMA technique, is recommended to cope with DSE.
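A bare-bones sketch of the FDD family mentioned above (generic, not the authors' pipeline, and without the DSE simulation): estimate the cross-spectral matrix of all acceleration channels and inspect its singular values frequency by frequency.

```python
import numpy as np
from scipy.signal import csd

def fdd(acc, fs, nperseg=1024):
    """Basic Frequency Domain Decomposition.
    acc: array of shape (n_samples, n_channels); fs: sampling rate in Hz."""
    n_ch = acc.shape[1]
    f, _ = csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)   # cross-spectral matrix G[f, i, j]
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[:, i], acc[:, j], fs=fs, nperseg=nperseg)
    s1 = np.zeros(len(f))
    modes = np.zeros((len(f), n_ch), dtype=complex)
    for k in range(len(f)):
        U, S, _ = np.linalg.svd(G[k])
        s1[k], modes[k] = S[0], U[:, 0]
    # peaks of s1 indicate natural frequencies; U[:, 0] at a peak approximates the mode shape
    return f, s1, modes
```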
Abstract:
This paper introduces a straightforward method to asymptotically solve a variety of initial and boundary value problems for singularly perturbed ordinary differential equations whose solution structure can be anticipated. The approach is simpler than conventional methods, including those based on asymptotic matching or on eliminating secular terms. © 2010 by the Massachusetts Institute of Technology.
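As a standard textbook illustration of the kind of solution structure such methods anticipate (not an example taken from the paper), the boundary-layer problem

```latex
\varepsilon y'' + y' + y = 0, \qquad y(0)=0,\ \ y(1)=1, \qquad 0<\varepsilon\ll 1,
\qquad\Longrightarrow\qquad
y(x) \sim e^{\,1-x} - e^{\,1 - x/\varepsilon},
```

has a leading-order composite solution consisting of an outer solution satisfying the boundary condition at x = 1 plus an O(ε)-thick boundary-layer correction at x = 0.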
Abstract:
This paper evaluates and proposes various compensation methods for three-level Z-source inverters under semiconductor-failure conditions. Unlike the fault-tolerant techniques used in traditional three-level inverters, where either an extra phase-leg or collective switching states are used, the proposed methods for three-level Z-source inverters simply reconfigure the relevant gating signals so as to ride through the failed-semiconductor conditions smoothly without any significant decrease in ac-output quality and amplitude. These features are partly attributable to the inherent boost characteristic of a Z-source inverter, in addition to its usual voltage-buck operation. By focusing on specific types of three-level Z-source inverters, it can also be shown that the dual Z-source inverter has the unique additional ability to force the common-mode voltage to zero even under semiconductor-failure conditions. To verify the described performance features, PLECS simulations and experimental tests were performed, with representative results captured and shown in a later section for visual confirmation.
Abstract:
NLS is a stream cipher that was submitted to the eSTREAM project. A linear distinguishing attack against NLS, called the Crossword Puzzle (CP) attack, was presented by Cho and Pieprzyk. NLSv2 is a tweaked version of NLS that aims mainly at avoiding the CP attack. In this paper, a new distinguishing attack against NLSv2 is presented. The attack exploits the high correlation among neighboring bits of the cipher. The paper first shows that modular addition preserves pairwise correlations, as demonstrated by the existence of linear approximations with large biases. Next, it shows how to combine these results with the high correlation between bits 29 and 30 of the S-box to obtain a distinguisher whose bias is around 2^−37. Consequently, we claim that NLSv2 is distinguishable from a random cipher after observing around 2^74 keystream words.
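For context, the quoted data complexity follows from the standard rule of thumb that a linear distinguisher exploiting an approximation with bias ε needs on the order of ε^−2 samples:

```latex
N \;\approx\; \varepsilon^{-2} \;=\; \left(2^{-37}\right)^{-2} \;=\; 2^{74}\ \text{keystream words}.
```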
Abstract:
We present new evidence for sector collapses of the South Soufrière Hills (SSH) edifice, Montserrat, during the mid-Pleistocene. High-resolution geophysical data provide evidence for sector collapse, producing an approximately 1 km³ submarine collapse deposit to the south of SSH. Sedimentological and geochemical analyses of submarine deposits sampled by sediment cores suggest that they were formed by large multi-stage flank failures of the subaerial SSH edifice into the sea. This work identifies two distinct geochemical suites within the SSH succession on the basis of trace-element and Pb-isotope compositions. Volcaniclastic turbidites in the cores preserve these chemically heterogeneous rock suites. However, the subaerial chemostratigraphy is reversed within the submarine sediment cores. Sedimentological analysis suggests that the edifice failures produced high-concentration turbidites and that the collapses occurred in multiple stages, with an interval of at least 2 ka between the first and second failure. Detailed field and petrographical observations, coupled with SEM image analysis, show that the SSH volcanic products preserve a complex record of magmatic activity. This activity consisted of episodic explosive eruptions of andesitic pumice, probably triggered by mafic magmatic pulses and followed by eruptions of poorly vesiculated basaltic scoria and basaltic lava flows.
Abstract:
An increasing range of services is now offered via online applications and e-commerce websites. However, problems with online services still occur at times, even for the best service providers, due to technical failures, informational failures, or a lack of required website functionality. Moreover, the widespread and increasing implementation of web services means that service failures are both more likely to occur and more likely to have serious consequences. In this paper we first develop a digital service value chain framework based on existing service delivery models adapted for digital services. We then review the current literature on service failure prevention and provide a typology of technologies and approaches that can be used to prevent failures of different types (functional, informational, system) that can occur at different stages of web service delivery. This contributes to theory by relating specific technologies and technological approaches to the point in the value chain framework where they will have the maximum impact. Our typology can also be used to guide the planning, justification and design of robust, reliable web services.
Abstract:
The ubiquitin-proteasome system targets many cellular proteins for degradation and thereby controls most cellular processes. Although it is well established that proteasome inhibition is lethal, the underlying mechanism is unknown. Here, we show that proteasome inhibition results in a lethal amino acid shortage. In yeast, mammalian cells, and flies, the deleterious consequences of proteasome inhibition are rescued by amino acid supplementation. In all three systems, this rescuing effect occurs without noticeable changes in the levels of proteasome substrates. In mammalian cells, the amino acid scarcity resulting from proteasome inhibition is the signal that causes induction of both the integrated stress response and autophagy, in an unsuccessful attempt to replenish the pool of intracellular amino acids. These results reveal that cells can tolerate protein waste, but not the amino acid scarcity resulting from proteasome inhibition.
Abstract:
The value of information technology (IT) is often realized only when it continues to be used after users' initial acceptance. However, previous research on continuing IT usage is limited in that it dismisses the importance of mental goals in directing users' behaviors and inadequately accommodates the group context of users. This research-in-progress paper synthesizes several streams of literature to conceptualize continuing IT usage as multilevel constructs and to view IT usage behavior as directed and energized by a set of mental goals. Drawing on self-regulation theory from social psychology, the paper proposes a process model that positions continuing IT usage as multiple-goal pursuit. An agent-based modeling approach is suggested to further explore the causal and analytical implications of the proposed process model.
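Purely as a hypothetical illustration of the suggested agent-based approach (all names, goal levels and update rules below are invented), agents pursuing several goals via discrepancy reduction could be simulated as:

```python
import random

class Agent:
    def __init__(self, goals):
        # goals: dict name -> [current, desired] levels, all hypothetical
        self.goals = {g: list(v) for g, v in goals.items()}

    def step(self):
        """Self-regulation as discrepancy reduction: work on the goal with the
        largest gap between desired and current state, and return its name."""
        gaps = {g: desired - current for g, (current, desired) in self.goals.items()}
        focus = max(gaps, key=gaps.get)
        self.goals[focus][0] += random.uniform(0.05, 0.2)  # progress on that goal
        return focus

agents = [Agent({"use_IT": [0.0, 1.0], "social": [0.2, 1.0]}) for _ in range(50)]
# continuing IT usage per step = number of agents whose active goal is IT-related
it_usage_per_step = [sum(a.step() == "use_IT" for a in agents) for _ in range(20)]
print(it_usage_per_step)
```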
Abstract:
The ability to build high-fidelity 3D representations of the environment from sensor data is critical for autonomous robots. Multi-sensor data fusion allows for more complete and accurate representations. Furthermore, using distinct sensing modalities (i.e. sensors using a different physical process and/or operating at different electromagnetic frequencies) usually leads to more reliable perception, especially in challenging environments, as modalities may complement each other. However, they may react differently to certain materials or environmental conditions, leading to catastrophic fusion. In this paper, we propose a new method to reliably fuse data from multiple sensing modalities, including in situations where they detect different targets. We first compute distinct continuous surface representations for each sensing modality, with uncertainty, using Gaussian Process Implicit Surfaces (GPIS). Second, we perform a local consistency test between these representations, to separate consistent data (i.e. data corresponding to the detection of the same target by the sensors) from inconsistent data. The consistent data can then be fused together, using another GPIS process, and the rest of the data can be combined as appropriate. The approach is first validated using synthetic data. We then demonstrate its benefit using a mobile robot, equipped with a laser scanner and a radar, which operates in an outdoor environment in the presence of large clouds of airborne dust and smoke.
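A much-simplified 1D analogue of the consistency test (plain GP regression via scikit-learn rather than full GPIS, with synthetic data; names and thresholds are illustrative only):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_gp(x, y):
    """Fit one GP per sensing modality, keeping predictive uncertainty."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                                  normalize_y=True)
    return gp.fit(x.reshape(-1, 1), y)

def consistent_mask(gp_a, gp_b, x_query, k=2.0):
    """Flag query points where the two modality models agree within k combined
    standard deviations; only those regions would be fused together."""
    mu_a, sd_a = gp_a.predict(x_query.reshape(-1, 1), return_std=True)
    mu_b, sd_b = gp_b.predict(x_query.reshape(-1, 1), return_std=True)
    return np.abs(mu_a - mu_b) < k * np.sqrt(sd_a**2 + sd_b**2)

# synthetic range profiles of the same scene from two modalities
x = np.linspace(0, 10, 200)
laser = np.sin(x) + 0.05 * np.random.randn(200)
radar = np.sin(x) + 0.10 * np.random.randn(200)
radar[100:140] += 1.5            # the modalities disagree here (e.g. one corrupted by dust)
gp_l, gp_r = fit_gp(x, laser), fit_gp(x, radar)
mask = consistent_mask(gp_l, gp_r, x)   # fuse only where mask is True
```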
Abstract:
Background: Artemisinin-combination therapy is a highly effective treatment for uncomplicated falciparum malaria, but parasite recrudescence has commonly been reported following artemisinin (ART) monotherapy. The dormancy-recovery hypothesis has been proposed to explain this phenomenon, which is distinct from the slower parasite clearance times reported as the first evidence of the development of ART resistance. Methods: In this study, an existing P. falciparum infection model is modified to incorporate the hypothesis of dormancy. Published in vitro data describing the characteristics of dormant parasites are used to explore whether dormancy alone could be responsible for the high recrudescence rates observed in field studies using monotherapy. Several treatment regimens and dormancy rates were simulated to investigate the rate of clinical and parasitological failure following treatment. Results: The model output indicates that following a single treatment with ART, parasitological and clinical failures occur in up to 77% and 67% of simulations, respectively. These rates decline rapidly with repeated treatment and are sensitive to the assumed dormancy rate. The simulated parasitological and clinical treatment failure rates after 3 and 7 days of treatment are comparable to those reported from several field trials. Conclusions: Although further studies are required to confirm dormancy in vivo, this theoretical study adds support for the hypothesis, highlighting the potential role of this parasite sub-population in treatment failure following monotherapy and reinforcing the importance of using ART in combination with other anti-malarials.
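A deliberately simplified sketch of the dormancy idea (not the authors' within-host model; every parameter value below is a placeholder):

```python
def simulate_patient(days_treated, dormancy_frac=0.5, wake_rate=0.1,
                     kill_frac=0.999, detect=1e8, start=1e10, days=63):
    """Toy within-host model: on each treatment day most active parasites are
    killed and a fraction of survivors becomes dormant; dormant parasites wake
    at a fixed daily rate and, once the drug is gone, the active population
    grows roughly 10-fold per 48-hour replication cycle."""
    active, dormant = start, 0.0
    for day in range(days):
        if day < days_treated:                       # ART acting today
            dormant += active * dormancy_frac * (1 - kill_frac)
            active *= (1 - dormancy_frac) * (1 - kill_frac)
        woken = dormant * wake_rate                  # dormant parasites resuming growth
        dormant -= woken
        active += woken
        if day % 2 == 1 and day >= days_treated:     # 48-hour cycle, growth off-drug only
            active *= 10.0
        if active + dormant < 1.0:                   # extinction -> cure
            return "cure"
        if day > days_treated and active > detect:   # back above detection limit
            return "recrudescence"
    return "cure"

print(simulate_patient(days_treated=3))   # single short ART course
```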