907 results for Strictly positive real systems
Abstract:
Correctly modelling and reasoning with uncertain information from heterogeneous sources in large-scale systems is critical when source reliability is unknown and adequate conclusions must still be drawn. To this end, context-dependent merging strategies have been proposed in the literature. In this paper we investigate how one such context-dependent merging strategy, largely partially maximal consistent subsets (LPMCS), originally defined for possibility theory, can be adapted to Dempster-Shafer (DS) theory. We identify the measures of the degree of uncertainty and internal conflict that are available in DS theory and show how they can be used to guide LPMCS merging. A simplified real-world power distribution scenario illustrates our framework. We also briefly discuss how our approach can be incorporated into a multi-agent programming language, leading to better plan selection and decision making.
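As background to the merging operation discussed above, the sketch below shows Dempster's rule of combination, the basic DS-theoretic merging operator, applied to two hypothetical mass functions; the normalisation constant it computes is one natural indicator of internal conflict. The LPMCS strategy itself and the paper's specific uncertainty measures are not reproduced here.

```python
# A minimal sketch of Dempster's rule of combination in DS theory.
# The mass functions below are illustrative, not from the paper.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions (frozenset -> mass) via Dempster's rule."""
    raw = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # mass falling on the empty set
    k = 1.0 - conflict                 # normalisation constant
    if k == 0.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / k for s, v in raw.items()}

# Two sources reporting on a fault location over the frame {A, B, C}:
m1 = {frozenset("A"): 0.6, frozenset("AB"): 0.4}
m2 = {frozenset("B"): 0.5, frozenset("ABC"): 0.5}
print(combine(m1, m2))   # conflict of 0.3 is renormalised away
```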
Abstract:
In the present paper, the influence of the alkyl chain length in N-alkyl-triethylammonium bis(trifluoromethylsulfonyl)imide ionic liquids, [NR,222][Tf2N] (R = 6, 8 or 12), on the excess molar enthalpy at 303.15 K and the excess molar volume over the temperature interval (283.15–338.15 K) of ionic liquid + methanol mixtures is studied. Small excess molar volumes with highly asymmetric (S-shaped) curves as a function of mole fraction were obtained, with negative values in the methanol-rich region. The excess molar volumes increase with the alkyl chain length of the ammonium cation of the ionic liquid and decrease with temperature. The excess enthalpies of the selected binary mixtures are positive over the whole composition range and increase slightly with the length of the alkyl side chain of the cation of the ionic liquid. Both excess properties were subsequently correlated using a Redlich–Kister-type equation as well as with the ERAS model; from this semipredictive model, the studied excess quantities could be resolved into their chemical and physical contributions. Finally, the COSMOThermX software was used to evaluate its prediction capability for the excess enthalpy of the investigated mixtures at 303.15 K and 0.1 MPa. It appears that COSMOThermX predicts this property to within approximately 10%, while providing the correct order of magnitude for the partial molar excess enthalpies at infinite dilution for the studied ILs,
$\bar{H}_1^{E,\infty}$, and methanol, $\bar{H}_2^{E,\infty}$.
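As an illustration of the correlation step described above, the following minimal sketch fits a Redlich-Kister expansion, $V^E = x_1 x_2 \sum_k A_k (x_1 - x_2)^k$, to synthetic excess-molar-volume data by linear least squares; the data and coefficient values are placeholders, not the published measurements.

```python
# A minimal sketch of Redlich-Kister correlation of an excess property.
import numpy as np

def rk_design(x1: np.ndarray, order: int) -> np.ndarray:
    """Design matrix with columns x1*x2*(x1 - x2)^k for k = 0..order."""
    x2 = 1.0 - x1
    return np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order + 1)])

def fit_rk(x1, ve, order=3):
    """Least-squares Redlich-Kister coefficients A_k."""
    coef, *_ = np.linalg.lstsq(rk_design(np.asarray(x1), order),
                               np.asarray(ve), rcond=None)
    return coef

# Hypothetical mole fractions and excess molar volumes (cm^3/mol):
x1 = np.linspace(0.05, 0.95, 10)
ve = 0.4 * x1 * (1 - x1) * (1 - 2 * x1)   # synthetic S-shaped curve
A = fit_rk(x1, ve, order=2)
print(np.round(A, 3))   # recovers ~[0, -0.4, 0] for the synthetic data
```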
Abstract:
This paper addresses the problem of effective in situ measurement of real-time strain for bridge weigh-in-motion in reinforced concrete bridge structures through the use of optical fiber sensor systems. By undertaking a series of tests coupled with dynamic loading, the performance of fiber Bragg grating-based sensor systems with various amplification techniques was investigated. In recent years, structural health monitoring (SHM) systems have been developed to monitor bridge deterioration and assess load levels, and hence extend bridge life and safety. Conventional SHM systems, based on measuring strain, can be used to improve knowledge of a bridge's capacity to resist loads but generally give no information on the causes of any increase in stresses. Therefore, it is necessary to find accurate sensors capable of capturing peak strains under dynamic load, together with suitable methods for attaching these strain sensors to existing and new bridge structures. Additionally, it is important to ensure accurate strain transfer between the concrete or steel substrate, the adhesive layer, and the strain sensor. The results show the benefits of optical fiber networks under these circumstances and their ability to deliver data when conventional sensors cannot capture accurate strains and/or peak strains.
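For readers unfamiliar with the sensing principle, the sketch below converts a fiber Bragg grating wavelength shift into strain using the standard relation Δλ/λ_B = (1 − p_e)ε; the photo-elastic coefficient p_e ≈ 0.22 is a typical value for silica fibre (an assumption, not a figure from the paper), and temperature compensation is omitted.

```python
# A minimal sketch of FBG strain demodulation under stated assumptions.
def fbg_strain(lambda_b_nm: float, delta_lambda_nm: float,
               p_e: float = 0.22) -> float:
    """Strain (dimensionless) from the relative Bragg wavelength shift:
    delta_lambda / lambda_B = (1 - p_e) * strain (temperature ignored)."""
    return (delta_lambda_nm / lambda_b_nm) / (1.0 - p_e)

# A 1550 nm grating shifting by 0.12 nm corresponds to ~99 microstrain:
eps = fbg_strain(1550.0, 0.12)
print(f"{eps * 1e6:.1f} microstrain")
```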
Abstract:
Phenotypic identification of Gram-negative bacteria from respiratory specimens of patients with cystic fibrosis carries a high risk of misidentification. Molecular identification techniques that use single-gene targets are also susceptible to error, including cross-reaction issues with other Gram-negative organisms. In this study, we have designed a Pseudomonas aeruginosa duplex real-time polymerase chain reaction (PCR) (PAduplex) assay targeting the ecfX and the gyrB genes. The PAduplex was evaluated against a panel of 91 clinical and environmental isolates that were presumptively identified as P. aeruginosa. The results were compared with those obtained using a commercial biochemical identification kit and several other P. aeruginosa PCR assays. The results showed that the PAduplex assay is highly suitable for routine identification of P. aeruginosa isolates from clinical or environmental samples. The 2-target format provides simultaneous confirmation of P. aeruginosa identity where both the ecfX and gyrB PCR reactions are positive and may also reduce the potential for false negatives caused by sequence variation in primer or probe targets.
Abstract:
The ability to rapidly detect circulating small RNAs, in particular microRNAs (miRNAs), would further increase their already established potential as biomarkers in a range of conditions. One rate-limiting factor is the time taken to perform quantitative real-time PCR amplification. We therefore evaluated the ability of a novel thermal cycler to perform this step in less than 10 minutes. Quantitative PCR was performed on an xxpress® thermal cycler (BJS Biotechnologies, Perivale, UK), which employs a resistive heating system and forced-air cooling to achieve thermal ramp rates of 10 °C/s, and on a conventional Peltier-controlled LightCycler 480 system (Roche, Basel, Switzerland) ramping at 4.8 °C/s. The threshold cycle (Ct) for detection of 18S rDNA from a standard genomic DNA sample was significantly more variable across the block (F-test, p = 2.4 × 10⁻²⁵) for the xxpress (20.01 ± 0.47 SD) than for the LightCycler (19.87 ± 0.04 SD). RNA was extracted from human plasma, reverse transcribed, and a panel of miRNAs amplified and detected using SYBR green (Kapa Biosystems, Wilmington, MA, USA). The sensitivity of the two systems was broadly comparable; both detected a panel of miRNAs reliably and indicated similar relative abundances. The xxpress thermal cycler facilitates rapid qPCR detection of small RNAs and brings point-of-care diagnostics based upon circulating miRNAs a step closer to reality.
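The variance comparison reported above can be reproduced in outline as follows; the Ct arrays are simulated from the published means and standard deviations (a 96-well block is assumed, and scipy is assumed available), so the result only illustrates the procedure, not the paper's exact figure.

```python
# A minimal sketch of an F-test on Ct variability between two cyclers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ct_xxpress = rng.normal(20.01, 0.47, size=96)      # assumed 96-well block
ct_lightcycler = rng.normal(19.87, 0.04, size=96)

f = np.var(ct_xxpress, ddof=1) / np.var(ct_lightcycler, ddof=1)
df = len(ct_xxpress) - 1
# Two-sided p-value for the variance-ratio F statistic. At this extreme
# ratio the tail probability underflows toward zero (the paper, using
# the raw data, reports p = 2.4e-25).
p = 2 * min(stats.f.sf(f, df, df), stats.f.cdf(f, df, df))
print(f"F = {f:.1f}, p = {p:.2e}")
```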
Abstract:
This paper presents a novel real-time power-device temperature estimation method that monitors the power MOSFET's junction temperature shift arising from thermal aging effects and incorporates the updated electrothermal models of power modules into digital controllers. The real-time estimator is emerging as an important tool for active control of device junction temperature as well as for online health monitoring of power electronic systems, but its thermal model fails to address the device's ongoing degradation. Because of the mismatch in coefficients of thermal expansion between the layers of a power device, repetitive thermal cycling will cause cracks, voids, and even delamination within the device components, particularly in the solder and thermal grease layers. Consequently, the thermal resistance of power devices will increase, making it possible to use thermal resistance (and junction temperature) as a key indicator for condition monitoring and control purposes. In this paper, the device temperature predicted from threshold-voltage measurements is compared with the real-time model estimate, and the difference is attributed to device aging. The thermal models in the digital controllers are updated frequently to correct the shift caused by thermal aging effects. Experimental results on three power MOSFETs confirm that the proposed methodology incorporates thermal aging effects into the power-device temperature estimator with good accuracy. The developed adaptive techniques can be applied to other power devices such as IGBTs and SiC MOSFETs, and have significant economic implications.
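A minimal sketch of the adaptive principle described above is given below: a model-based junction-temperature estimate is compared with a temperature inferred from a temperature-sensitive electrical parameter (here, threshold voltage), and the model's thermal resistance is re-identified when a persistent mismatch indicates aging. All parameter values and the linear V_th(T) relation are illustrative assumptions, not the paper's calibration.

```python
# A minimal sketch of aging-aware junction-temperature estimation.
def tj_from_vth(vth: float, vth_25: float = 4.0, k: float = -0.008) -> float:
    """Junction temperature (C) from threshold voltage, assuming a
    typical near-linear negative temperature coefficient k (V/C)."""
    return 25.0 + (vth - vth_25) / k

def tj_from_model(t_case: float, p_loss: float, r_th: float) -> float:
    """Steady-state electrothermal model estimate: Tj = Tcase + P * Rth."""
    return t_case + p_loss * r_th

r_th = 0.50                          # K/W, nominal junction-to-case
t_meas = tj_from_vth(vth=3.55)       # ~81.3 C from the measurement path
t_est = tj_from_model(t_case=50.0, p_loss=55.0, r_th=r_th)   # 77.5 C

if t_meas - t_est > 2.0:             # persistent gap attributed to aging
    r_th = (t_meas - 50.0) / 55.0    # re-identify Rth so the model tracks
print(f"updated Rth = {r_th:.3f} K/W")
```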
Abstract:
Background: There is growing interest in the potential utility of real-time polymerase chain reaction (PCR) in diagnosing bloodstream infection by detecting pathogen deoxyribonucleic acid (DNA) in blood samples within a few hours. SeptiFast (Roche Diagnostics GmBH, Mannheim, Germany) is a multipathogen probe-based system targeting ribosomal DNA sequences of bacteria and fungi. It detects and identifies the commonest pathogens causing bloodstream infection. As background to this study, we report a systematic review of Phase III diagnostic accuracy studies of SeptiFast, which reveals uncertainty about its likely clinical utility based on widespread evidence of deficiencies in study design and reporting with a high risk of bias.
Objective: Determine the accuracy of SeptiFast real-time PCR for the detection of health-care-associated bloodstream infection, against standard microbiological culture.
Design: Prospective multicentre Phase III clinical diagnostic accuracy study using the standards for the reporting of diagnostic accuracy studies criteria.
Setting: Critical care departments within NHS hospitals in the north-west of England.
Participants: Adult patients requiring blood culture (BC) when developing new signs of systemic inflammation.
Main outcome measures: SeptiFast real-time PCR results at species/genus level compared with microbiological culture in association with independent adjudication of infection. Metrics of diagnostic accuracy were derived including sensitivity, specificity, likelihood ratios and predictive values, with their 95% confidence intervals (CIs). Latent class analysis was used to explore the diagnostic performance of culture as a reference standard.
Results: Of 1006 new patient episodes of systemic inflammation in 853 patients, 922 (92%) met the inclusion criteria and provided sufficient information for analysis. Index test assay failure occurred on 69 (7%) occasions. Adult patients had been exposed to a median of 8 days (interquartile range 4–16 days) of hospital care, had high levels of organ support activities and recent antibiotic exposure. SeptiFast real-time PCR, when compared with culture-proven bloodstream infection at species/genus level, had better specificity (85.8%, 95% CI 83.3% to 88.1%) than sensitivity (50%, 95% CI 39.1% to 60.8%). When compared with pooled diagnostic metrics derived from our systematic review, our clinical study revealed lower test accuracy of SeptiFast real-time PCR, mainly as a result of low diagnostic sensitivity. There was a low prevalence of BC-proven pathogens in these patients (9.2%, 95% CI 7.4% to 11.2%) such that the post-test probabilities of both a positive (26.3%, 95% CI 19.8% to 33.7%) and a negative SeptiFast test (5.6%, 95% CI 4.1% to 7.4%) indicate the potential limitations of this technology in the diagnosis of bloodstream infection. However, latent class analysis indicates that BC has a low sensitivity, questioning its relevance as a reference test in this setting. Using this analysis approach, the sensitivity of the SeptiFast test was low but also appeared significantly better than BC. Blood samples identified as positive by either culture or SeptiFast real-time PCR were associated with a high probability (> 95%) of infection, indicating higher diagnostic rule-in utility than was apparent using conventional analyses of diagnostic accuracy.
Conclusion: SeptiFast real-time PCR on blood samples may have rapid rule-in utility for the diagnosis of health-care-associated bloodstream infection, but the lack of sensitivity is a significant limiting factor. Innovations aimed at improving the diagnostic sensitivity of real-time PCR in this setting are urgently required. Future work recommendations include technology developments to improve the efficiency of pathogen DNA extraction, the capacity to detect a much broader range of pathogens and drug-resistance genes, and the application of new statistical approaches able to more reliably assess test performance in situations where the reference standard (e.g. blood culture in the setting of high antimicrobial use) is prone to error.
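The abstract's headline figures are internally consistent, as the following worked check shows: applying Bayes' rule to the reported sensitivity, specificity and prevalence reproduces the stated post-test probabilities for positive and negative SeptiFast results (26.3% and 5.6%).

```python
# A worked check of the diagnostic metrics quoted in the abstract.
sens, spec, prev = 0.50, 0.858, 0.092

lr_pos = sens / (1 - spec)            # positive likelihood ratio
lr_neg = (1 - sens) / spec            # negative likelihood ratio

# Post-test probabilities of bloodstream infection via Bayes' rule:
p_pos = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
p_neg = (1 - sens) * prev / ((1 - sens) * prev + spec * (1 - prev))

print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
print(f"post-test prob (positive test) = {p_pos:.1%}")   # ~26.3%
print(f"post-test prob (negative test) = {p_neg:.1%}")   # ~5.6%
```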
Abstract:
This paper presents a novel hand-held instrument capable of real-time in situ detection and identification of heavy metals. The proposed system provides the facilities found in a traditional lab-based instrument in a hand-held design. In contrast to existing commercial systems, it can stand alone without the need for an associated computer. The electrochemical instrument uses anodic stripping voltammetry, a precise and sensitive analytical method with excellent limits of detection. The sensors comprise disposable screen-printed (solid working) electrodes rather than the more common hanging mercury drop electrodes. The system is reliable, easy to use, and safe; it avoids expensive and time-consuming procedures and may be used in a variety of situations in the fields of environmental assessment and control.
Abstract:
Game-theoretic security resource allocation problems have generated significant interest in the area of designing and developing security systems. These approaches traditionally utilize the Stackelberg game model for security resource scheduling in order to improve the protection of critical assets. The basic assumption in Stackelberg games is that a defender will act first, then an attacker will choose their best response after observing the defender’s strategy commitment (e.g., protecting a specific asset). Thus, it requires an attacker’s full or partial observation of a defender’s strategy. This assumption is unrealistic in real-time threat recognition and prevention. In this paper, we propose a new solution concept (i.e., a method to predict how a game will be played) for deriving the defender’s optimal strategy based on the principle of acceptable costs of minimax regret. Moreover, we demonstrate the advantages of this solution concept by analyzing its properties.
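To make the underlying principle concrete, the sketch below computes a plain minimax-regret choice over pure defender strategies for a hypothetical payoff matrix; the paper's "acceptable costs" refinement and mixed strategies are not reproduced here.

```python
# A minimal sketch of minimax-regret strategy selection for a defender.
import numpy as np

# Hypothetical defender utilities: rows are defender strategies,
# columns are attacker choices.
U = np.array([[ 5, -4,  1],    # defend asset A
              [-2,  6,  0],    # defend asset B
              [ 1,  0,  3]])   # patrol both

# Regret of an entry = best utility achievable in its column minus it.
regret = U.max(axis=0) - U
worst_regret = regret.max(axis=1)     # worst case over attacker moves
best = int(worst_regret.argmin())     # minimise the maximum regret
print(f"minimax-regret strategy: row {best}, "
      f"worst-case regret = {worst_regret[best]}")
```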
Abstract:
Models of complex systems with n components typically have order n² parameters because each component can potentially interact with every other. When it is impractical to measure these parameters, one may choose random parameter values and study the emergent statistical properties at the system level. Many influential results in theoretical ecology have been derived from two key assumptions: that species interact with random partners at random intensities and that intraspecific competition is comparable between species. Under these assumptions, community dynamics can be described by a community matrix that is often amenable to mathematical analysis. We combine empirical data with mathematical theory to show that both of these assumptions lead to results that must be interpreted with caution. We examine 21 empirically derived community matrices constructed using three established, independent methods. The empirically derived systems are more stable by orders of magnitude than results from random matrices. This consistent disparity is not explained by existing results on predator-prey interactions. We investigate the key properties of empirical community matrices that distinguish them from random matrices. We show that network topology is less important than the relationship between a species’ trophic position within the food web and its interaction strengths. We identify key features of empirical networks that must be preserved if random matrix models are to capture the features of real ecosystems.
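The random-matrix baseline the paper argues against can be sketched as follows: draw random pairwise interaction strengths, impose comparable intraspecific competition on the diagonal, and test local stability via the leading eigenvalue (stable if and only if all eigenvalues have negative real part). All parameter choices are illustrative.

```python
# A minimal sketch of random community-matrix stability analysis.
import numpy as np

rng = np.random.default_rng(1)

def random_community_matrix(n=50, sigma=0.5, connectance=0.2, d=1.0):
    """Random interactions with uniform self-regulation -d."""
    A = rng.normal(0.0, sigma, (n, n))
    A *= rng.random((n, n)) < connectance   # sparse random interactions
    np.fill_diagonal(A, -d)                 # comparable intraspecific competition
    return A

A = random_community_matrix()
leading = np.linalg.eigvals(A).real.max()   # leading real part decides stability
print(f"leading real part = {leading:.3f} -> "
      f"{'stable' if leading < 0 else 'unstable'}")
```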
Abstract:
This essay is premised on the following: a conspiracy to fix or otherwise manipulate the outcome of a sporting event for profit. That conspiracy is in turn predicated on the conspirators’ capacity to: (a) ensure that the fix takes place as pre-determined; (b) manipulate the betting markets that surround the sporting event in question; and (c) collect their winnings undetected by either the betting industry’s security systems or the attention of any national regulatory body or law enforcement agency.
Unlike many essays on this topic, this contribution does not focus on the “fix” – part (a) of the above equation. It does not seek to explain how or why a participant or sports official might facilitate a betting scam, whether through on-field behaviour that manipulates the outcome of a game or by presenting others with privileged inside information in advance of a game. Neither does this contribution seek to give any real insight into the second part of the above equation: how such conspirators manipulate a sports betting market by playing or laying the handicap or in-play or other offered betting odds. In fact, this contribution is not really about the mechanics of sports betting or match fixing at all; rather, it is about the sometimes under-explained reason why match fixing has reportedly become increasingly attractive to international crime syndicates. That reason relates to the fact that, given the traditional liquidity of gambling markets, sports betting can be, and has long been, an attractively accessible conduit for criminal syndicates to launder the proceeds of crime. Accordingly, the term “winnings”, noted in part (c) of the above equation, takes on an altogether more nefarious meaning.
This essay’s attempt to review the possible links between match fixing in sport, gambling-related “winnings” and money laundering is presented in four parts.
First, some context will be given to what is meant by money laundering, how it is currently policed internationally and, most importantly, how the growth of online gambling presents a unique set of vulnerabilities and opportunities to launder the proceeds of crime. The globalisation of organised crime, sports betting and transnational financial services now means that money laundering opportunities have moved well beyond a flutter on the horses at your local racetrack or at the roulette table of your nearest casino. The growth of online gambling platforms means that, at a click, the proceeds of crime in one jurisdiction can be placed on a betting market in another jurisdiction, with the winnings drawn down and laundered in a third. The internationalisation of gambling-related money laundering thus threatens the integrity of sport globally.
Second, referring back to the infamous hearings of the US Senate Special Committee to Investigate Organised Crime in Interstate Commerce of the early 1950s (“the Kefauver Committee”), this essay illustrates the long-standing interest of organised crime gangs – in this instance, various Mafia families in the United States – in money laundering via sports gambling-related means.
Third, and using the seminal 2009 report “Money Laundering through the Football Sector” by the Financial Action Task Force (FATF, an inter-governmental body established in 1989 to promote effective implementation of legal, regulatory and operational measures for combating money laundering, terrorist financing and other related threats to the integrity of the international financial system), this essay seeks to assess the vulnerabilities of international sport to match fixing, as motivated in part by the associated secondary criminality of tax evasion and transnational economic crime.
The fourth and concluding parts of the essay turn from problems to possible solutions. The underlying premise here is that heretofore there has been an insularity to the way sports organisations have both conceptualised and sought to address the match-fixing threat, e.g., if we (in sport) initiate player education programmes, establish integrity units, and strictly enforce codes of conduct and sanctions, then our integrity or brand should be protected. This essay argues that, although these initiatives are important, the source and process of match fixing are beyond sport’s current capacity to address, as are the possible solutions.
Abstract:
In order to use virtual reality as a sport-analysis tool, we need to be sure that an immersed athlete reacts realistically in a virtual environment. This has been validated for a real handball goalkeeper facing a virtual thrower. However, it is not yet known which visual variables induce realistic motor behavior in the immersed handball goalkeeper. In this study, we used virtual reality to dissociate the visual information related to the movements of the player from the visual information related to the trajectory of the ball. The aim is thus to evaluate the relative influence of these different sources of visual information on the goalkeeper's motor behavior. We tested 10 handball goalkeepers who had to predict the final position of the virtual ball in the goal when facing the following: only the throwing action of the attacking player (TA condition), only the resulting ball trajectory (BA condition), and both the throwing action of the attacking player and the resulting ball trajectory (TB condition). We show that performance was better in the BA and TB conditions and, contrary to expectations, substantially worse in the TA condition. A significant effect of ball landing zone does, however, suggest that the relative importance of visual information from the player and from the ball depends on the targeted zone in the goal. In some cases, body-based cues embedded in the throwing action may have only a minor influence relative to the ball trajectory, and vice versa. Kinematic analysis was then combined with these results to determine why such differences occur depending on the ball landing zone and, consequently, to clarify the role of different sources of visual information in the motor behavior of an athlete immersed in a virtual environment.
Abstract:
In this research, an agent-based model (ABM) was developed to generate human movement routes between homes and water resources in a rural setting, given commonly available geospatial datasets on population distribution, land cover and landscape resources. ABMs are an object-oriented computational approach to modelling a system that focuses on the interactions of autonomous agents and aims to assess the impact of these agents and their interactions on the system as a whole. An A* pathfinding algorithm was implemented to produce walking routes, given data on the terrain in the area. A* is an extension of Dijkstra's algorithm with enhanced time performance through the use of heuristics. In this example, it was possible to impute daily activity movement patterns to the water resource for all villages in a 75 km long study transect across the Luangwa Valley, Zambia, and the simulated human movements were statistically similar to empirical observations of travel times to the water resource (chi-squared test, 95% confidence). This indicates that it is possible to produce realistic data on human movements without the costly measurement commonly required, for example, through GPS tracking or retrospective or real-time diaries. The approach is transferable between geographical locations, and the product can provide insight into human movement patterns; it therefore has use in many human exposure-related applications, specifically epidemiological research in rural areas, where spatial heterogeneity in the disease landscape and the space-time proximity of individuals can play a crucial role in disease spread.
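A minimal sketch of the A* step is shown below, run on a toy terrain-cost grid standing in for the paper's geospatial data; the Euclidean-distance heuristic is admissible here because each grid step costs at least 1.

```python
# A minimal sketch of A* pathfinding over a per-cell terrain-cost grid.
import heapq, math

def astar(cost, start, goal):
    """Least-cost path on a 2D grid of traversal costs (each >= 1)."""
    rows, cols = len(cost), len(cost[0])
    h = lambda p: math.dist(p, goal)   # Euclidean heuristic (admissible)
    g = {start: 0.0}
    parent = {start: None}
    open_q = [(h(start), start)]
    closed = set()
    while open_q:
        _, cur = heapq.heappop(open_q)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                # reconstruct the path back to start
            path = [cur]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and nb not in closed:
                ng = g[cur] + cost[nr][nc]
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    parent[nb] = cur
                    heapq.heappush(open_q, (ng + h(nb), nb))
    return None

# Toy terrain: high values stand in for steep or impassable cells.
terrain = [[1, 1, 5, 1],
           [1, 9, 5, 1],
           [1, 1, 1, 1]]
print(astar(terrain, (0, 0), (2, 3)))  # routes around the costly cells
```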
Abstract:
Perfect information is seldom available to man or machine due to the uncertainties inherent in real-world problems. Uncertainties in geographic information systems (GIS) stem from vague/ambiguous or imprecise/inaccurate/incomplete information, and GIS must provide tools and techniques to manage these uncertainties. There is widespread agreement in the GIS community that although GIS has the potential to support a wide range of spatial data analysis problems, this potential is often hindered by a lack of consistency and uniformity. Uncertainties come in many shapes and forms, and processing uncertain spatial data requires a practical taxonomy to aid decision makers in choosing the most suitable data modeling and analysis method. In this paper, we: (1) review important developments in handling uncertainties when working with spatial data and GIS applications; (2) propose a taxonomy of models for dealing with uncertainties in GIS; and (3) identify current challenges and future research directions in spatial data analysis and GIS for managing uncertainties.
Abstract:
Pre-processing (PP) of the received symbol vector and channel matrices is an essential prerequisite operation for Sphere Decoder (SD)-based detection in Multiple-Input Multiple-Output (MIMO) wireless systems. PP is a highly complex operation, yet it accounts for only a small fraction of the overall computational cost of detecting an OFDM MIMO frame in standards such as 802.11n. Despite this, real-time PP architectures are highly inefficient, dominating the resource cost of real-time SD architectures. This paper resolves this issue. By reorganising the ordering and QR decomposition sub-operations of PP, we describe a Field Programmable Gate Array (FPGA)-based PP architecture for the Fixed Complexity Sphere Decoder (FSD) applied to 4 × 4 802.11n MIMO which reduces resource cost by 50% compared with state-of-the-art solutions whilst maintaining real-time performance.
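The two PP sub-operations named above can be illustrated with a simple norm-based sorted QR decomposition in floating point; this is a sketch of the mathematics being reorganised, not the paper's fixed-point FPGA architecture, and the FSD ordering rule differs in detail from the plain norm sort used here.

```python
# A minimal sketch of MIMO pre-processing: column ordering + QR of the
# channel, then detection-side back-substitution (noise-free, for clarity).
import numpy as np

rng = np.random.default_rng(7)
# 4x4 complex Rayleigh channel, as in a 4x4 802.11n configuration:
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)

perm = np.argsort(np.linalg.norm(H, axis=0))     # simple norm-based ordering
Q, R = np.linalg.qr(H[:, perm])                  # QR of the permuted channel

x = rng.normal(size=4)                           # illustrative transmit vector
y = H @ x                                        # received vector (no noise)
z = Q.conj().T @ y                               # rotate: z = R @ x[perm]
x_hat = np.linalg.solve(R, z)[np.argsort(perm)]  # solve, undo the permutation
print(np.round(x_hat.real, 3), "vs", np.round(x, 3))
```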