81 results for Meyer–König and Zeller Operators
Abstract:
Aims
Our aim was to test the prediction and clinical applicability of high-sensitivity assayed troponin I for incident cardiovascular events in a general middle-aged European population.
Methods and results
High-sensitivity assayed troponin I was measured in the Scottish Heart Health Extended Cohort (n = 15 340), with 2171 cardiovascular events (including acute coronary heart disease and probable ischaemic strokes), 714 coronary deaths (25% of all deaths), 1980 myocardial infarctions, and 797 strokes of all kinds during an average of 20 years of follow-up. The detection rate above the limit of detection (LoD) was 74.8% in the overall population (82.6% in men, 67.0% in women). Troponin I assayed by the high-sensitivity method was associated with future cardiovascular risk after full adjustment, such that individuals in the fourth category had 2.5 times the risk of those without detectable troponin I (P < 0.0001). These associations remained significant even for individuals in whom contemporary-sensitivity troponin I measures were not detectable. Adding troponin I levels to clinical variables significantly improved risk prediction, with significant gains in both the c-statistic (P < 0.0001) and net reclassification (P < 0.0001). Thresholds of 4.7 pg/mL in women and 7.0 pg/mL in men are suggested for detecting individuals at high risk of future cardiovascular events.
Conclusion
Troponin I, measured with a high-sensitivity assay, is an independent predictor of cardiovascular events and might support selection of at-risk individuals.
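As an illustration of the incremental-discrimination analysis summarised above (comparing the c-statistic of a clinical risk model with and without the biomarker), here is a minimal sketch; the data are randomly generated placeholders and the logistic model is an assumption for illustration, not the study's actual analysis:

```python
# Sketch: compare discrimination (c-statistic / AUC) of a risk model with
# and without a biomarker. Data are random placeholders, not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
clinical = rng.normal(size=(n, 4))   # placeholder clinical covariates
troponin = rng.lognormal(size=n)     # hypothetical hs-troponin I values
logit = 0.8 * clinical[:, 0] + 0.6 * np.log(troponin) - 2.0
events = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_base = clinical
X_full = np.column_stack([clinical, np.log(troponin)])
auc_base = roc_auc_score(events, LogisticRegression().fit(X_base, events).predict_proba(X_base)[:, 1])
auc_full = roc_auc_score(events, LogisticRegression().fit(X_full, events).predict_proba(X_full)[:, 1])
print(f"c-statistic without troponin: {auc_base:.3f}, with: {auc_full:.3f}")
```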
Abstract:
A lateral flow immunoassay (LFIA) has been developed and fully validated to detect the primary amnesic shellfish poisoning (ASP) toxin, domoic acid (DA). The performance characteristics of two versions of the test were investigated using spiked and naturally contaminated shellfish (mussels, scallops, oysters, clams, and cockles). The tests provide a qualitative result, indicating the absence or presence of DA in extracts of shellfish tissues at concentrations relevant to regulatory limits. The new rapid assay (LFIA version 2) was designed to overcome the performance limitations identified in the first version. The improved test uses an electronic reader to remove the subjectivity of result interpretation, and the positive cut-off for screening of DA in shellfish was increased from 10 ppm (version 1) to 17.5 ppm (version 2). A simple extraction and test procedure was employed, requiring minimal equipment and materials; results were available 15 min after sample preparation. Stability of the aqueous extracts at room temperature (22 °C) at four time points (up to 245 min after extraction) and across a range of DA concentrations was 100.3±1.3% and 98.8±2.4% for pre- and post-buffered extracts, respectively. The assay can be used both within laboratory settings and in remote locations. The accuracy of the new assay in indicating negative results at or below 10 ppm DA and positive results at or above 17.5 ppm was 99.5% (n=216 tests). Validation data were obtained from a 2-day, randomised, blind study consisting of multiple LFIA lots (n=3), readers (n=3), and operators (n=3), carrying out multiple extractions of mussel tissue (n=3) at each concentration (0, 10, 17.5, and 20 ppm). No matrix effects on assay performance were observed with different species (mussels, scallops, oysters, clams, and cockles). Accuracy was unaffected by other phycotoxins, glutamic acid, or glutamine, or by varying strip incubation times (8, 10, and 12 min). The accuracy of the assay with naturally contaminated samples, indicating negative results at or below 12.5 ppm and positive results at or above 17.5 ppm, was 100%. Variability between three LFIA lots across a range of DA concentrations, expressed as coefficient of variation (% CV), was 1.1±0.4% (n=2 days) based on quantitative readings from the electronic reader. During an 8-week stability study, the accuracy of the method with test strips stored at various temperatures (6, 22, 37, and 50 °C) was 100%. Validation of both versions included comparisons with results obtained using reference LC-UV methods.
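A minimal sketch of the two headline statistics in the validation above, qualitative accuracy against the cut-offs and between-lot variability as %CV of the electronic reader's quantitative output; the reader values and data layout are illustrative assumptions, not the study's records:

```python
# Sketch: qualitative accuracy and between-lot %CV for an LFIA validation.
# All reader values below are placeholders.
import numpy as np

POS_CUTOFF_PPM = 17.5   # version 2 positive cut-off
NEG_LIMIT_PPM = 10.0    # negative at or below this level

def qualitative_accuracy(true_ppm, called_positive):
    """Fraction of correct qualitative calls, counting only unambiguous
    samples: negative at/below 10 ppm, positive at/above 17.5 ppm."""
    correct = total = 0
    for conc, pos in zip(true_ppm, called_positive):
        if conc <= NEG_LIMIT_PPM:
            correct += (not pos); total += 1
        elif conc >= POS_CUTOFF_PPM:
            correct += pos; total += 1
    return correct / total

def between_lot_cv(readings_by_lot):
    """%CV of the mean electronic-reader signal across LFIA lots."""
    means = np.array([np.mean(lot) for lot in readings_by_lot])
    return 100.0 * means.std(ddof=1) / means.mean()

lots = [[412, 405, 399], [408, 417, 411], [401, 404, 409]]  # placeholder units
print(f"between-lot CV: {between_lot_cv(lots):.1f}%")
```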
Abstract:
We present an ab initio real-time-based computational approach to study nonlinear optical properties in condensed matter systems that is especially suitable for crystalline solids and periodic nanostructures. The equations of motion and the coupling of the electrons with the external electric field are derived from the Berry-phase formulation of the dynamical polarization [Souza et al., Phys. Rev. B 69, 085106 (2004)]. Many-body effects are introduced by adding single-particle operators to the independent-particle Hamiltonian. We add a Hartree operator to account for crystal local effects and a scissor operator to correct the independent particle band structure for quasiparticle effects. We also discuss the possibility of accurately treating excitonic effects by adding a screened Hartree-Fock self-energy operator. The approach is validated by calculating the second-harmonic generation of SiC and AlAs bulk semiconductors: an excellent agreement is obtained with existing ab initio calculations from response theory in frequency domain [Luppi et al., Phys. Rev. B 82, 235201 (2010)]. We finally show applications to the second-harmonic generation of CdTe and the third-harmonic generation of Si.
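Schematically, and only as a sketch following the Berry-phase formulation cited above rather than a verbatim reproduction of the paper's equations, the time-dependent valence states are propagated under the field-coupled Hamiltonian, with the many-body corrections entering as the single-particle operators listed:

```latex
% Schematic equation of motion (sketch, after the Berry-phase formulation
% of Souza et al. cited above): the external field couples through a
% gauge-covariant k-derivative, and crystal local-field, quasiparticle,
% and (optionally) excitonic effects enter as single-particle operators
% added to the independent-particle Hamiltonian.
i\hbar\,\frac{d}{dt}\,|v_{m\mathbf{k}}\rangle
  = \Big( H^{\mathrm{IP}}_{\mathbf{k}} + V_{\mathrm{Hartree}}
        + \Delta_{\mathrm{scissor}} + \Sigma_{\mathrm{SHF}}
        + i e\,\boldsymbol{\mathcal{E}}\cdot\tilde{\partial}_{\mathbf{k}} \Big)\,
    |v_{m\mathbf{k}}\rangle
```

The macroscopic polarization can then be evaluated as a Berry phase of the occupied states at each time step, with the second- and third-harmonic susceptibilities read off from its Fourier components.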
Abstract:
While waste is increasingly viewed as a resource to be globally traded, increased regulatory control on waste across Europe has created the conditions in which waste crime now operates alongside a legitimate waste sector. Waste crime is an environmental crime and a form of white-collar crime, one which exploits the physical characteristics of waste, the complexity of the collection and downstream infrastructure, and the market opportunities for profit. This paper highlights some of the factors that make the waste sector vulnerable to waste crime: new legislation and its weak regulatory enforcement; the economics of waste treatment, where legal and safe treatment of waste can be more expensive than illegal operations; the complexity of the waste sector and the different actors who can be involved, directly or indirectly, in the movement of illegal wastes; and, finally, the fact that waste can be hidden or disguised, creating an opportunity for illegal businesses to operate alongside legitimate waste operators. The study also considers waste crime from the perspective of particular waste streams that are often associated with illegal shipment or with illegal treatment and disposal. For each stream the nature of the crime differs, but in each, vulnerabilities to waste crime are evident. The paper also describes approaches that regulators and those involved in developing new legislation can adopt to identify where opportunities for waste crime occur and how to prevent them.
Abstract:
We undertake a detailed study of the sets of multiplicity in a second countable locally compact group G and their operator versions. We establish a symbolic calculus for normal completely bounded maps from the space B(L²(G)) of bounded linear operators on L²(G) into the von Neumann algebra VN(G) of G, and use it to show that a closed subset E ⊆ G is a set of multiplicity if and only if the set E* = {(s, t) ∈ G × G : ts⁻¹ ∈ E} is a set of operator multiplicity. Analogous results are established for M₁-sets and M₀-sets. We show that the property of being a set of multiplicity is preserved under various operations, including taking direct products, and establish an Inverse Image Theorem for such sets. We characterise the sets of finite width that are also sets of operator multiplicity, and show that every compact operator supported on a set of finite width can be approximated by sums of rank-one operators supported on the same set. We show that, if G satisfies a mild approximation condition, pointwise multiplication by a given measurable function ψ : G → ℂ defines a closable multiplier on the reduced C*-algebra Cᵣ*(G) of G if and only if Schur multiplication by the function N(ψ) : G × G → ℂ, given by N(ψ)(s, t) = ψ(ts⁻¹), is a closable operator when viewed as a densely defined linear map on the space of compact operators on L²(G). Similar results are obtained for multipliers on VN(G).
Abstract:
We establish an unbounded version of Stinespring's Theorem and a lifting result for Stinespring representations of completely positive modular maps defined on the space of all compact operators. We apply these results to study positivity for Schur multipliers. We characterise positive local Schur multipliers and provide a description of positive local Schur multipliers of Toeplitz type. We introduce local operator multipliers as a non-commutative analogue of local Schur multipliers, and characterise them, extending both the characterisation of operator multipliers from [16] and that of local Schur multipliers from [27]. We provide a description of the positive local operator multipliers in terms of approximation by elements of canonical positive cones.
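For orientation, the classical bounded form of the theorem that the unbounded version above generalises (a standard statement, not the paper's formulation):

```latex
% Classical Stinespring dilation (bounded case): for a completely positive
% map \varphi : A \to B(H) on a unital C*-algebra A, there exist a Hilbert
% space K, a unital *-representation \pi : A \to B(K), and a bounded
% operator V : H \to K such that
\varphi(a) = V^{*}\,\pi(a)\,V \qquad \text{for all } a \in A .
```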
Energy-Aware Rate and Description Allocation Optimized Video Streaming for Mobile D2D Communications
Abstract:
The proliferation of video streaming applications and mobile devices has prompted wireless network operators to put more effort into improving quality of experience (QoE) while conserving the resources demanded by high-rate, large-volume video streaming. To deal with this problem, we propose an energy-aware rate and description allocation optimization method for video streaming in cellular-network-assisted device-to-device (D2D) communications. In particular, we allocate the optimal bit rate to each layer of video segments and packetize the segments into multiple descriptions with embedded forward error correction (FEC) for real-time streaming without retransmission. Simultaneously, the optimal number of descriptions is allocated to each D2D helper for transmission. The two allocation processes are driven by the access rate of segments, the channel state information (CSI) of the D2D requester, and the remaining energy of the helpers, so as to achieve the highest optimization performance. Simulation results demonstrate that our proposed method (named OPT) significantly enhances the performance of video streaming in terms of high QoE and energy saving.
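A minimal sketch of the second allocation step described above (distributing a fixed number of FEC-protected descriptions across D2D helpers according to channel state and remaining energy); the scoring rule and all names are illustrative assumptions, not the paper's OPT formulation:

```python
# Sketch: split N descriptions across helpers in proportion to a score that
# rewards good channel state and penalises low remaining battery energy.
def allocate_descriptions(helpers, n_descriptions):
    """helpers: list of dicts with 'csi' (normalised channel quality, 0..1)
    and 'energy' (remaining battery fraction, 0..1)."""
    scores = [h["csi"] * h["energy"] for h in helpers]
    total = sum(scores) or 1.0
    # proportional base allocation, floored to integers
    alloc = [int(n_descriptions * s / total) for s in scores]
    # hand out the descriptions lost to rounding, best-scored helpers first
    order = sorted(range(len(helpers)), key=lambda j: -scores[j])
    i = 0
    while sum(alloc) < n_descriptions:
        alloc[order[i % len(order)]] += 1
        i += 1
    return alloc

print(allocate_descriptions(
    [{"csi": 0.9, "energy": 0.4}, {"csi": 0.6, "energy": 0.9}, {"csi": 0.3, "energy": 0.8}],
    n_descriptions=8))  # -> [3, 4, 1]
```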
Abstract:
Ready-to-eat (RTE) foods can be consumed with minimal or no further preparation; because they are composed of mixed ingredients, their processing is complex and involves thorough decontamination steps. Compared with conventional preservation technologies, novel processing technologies can enhance the safety and quality of these complex products by reducing the risk of pathogens and/or by preserving related health-promoting compounds. These novel technologies can be divided into two categories: thermal and non-thermal. As a non-thermal treatment, High Pressure Processing is a very promising method that can be applied even to already-packaged RTE foods. A new “volumetric” microwave heating technology is an interesting cooking and decontamination method applied directly to foods. Cold Plasma technology is a potential substitute for chlorine washing in fresh vegetable decontamination. Ohmic heating is applicable to viscous products as well as meat products. Producers of RTE foods face challenging decisions along the whole chain, from ingredient suppliers to distribution. They have to take into account not only cost but also the benefits and the safety and quality of their food products. Novel processing technologies can be a valuable yet large investment for many SME food manufacturers, who need supporting data to make sound decisions. Within the European Commission's FP7 Cooperation programme, the STARTEC project aims to develop an IT decision-support tool to help food business operators in their risk assessment and future decision-making when producing RTE foods with or without novel preservation technologies.
Abstract:
Belief merging operators combine multiple belief bases (a profile) into a collective one. When the conjunction of the belief bases is consistent, all operators agree on the result; when it is inconsistent, the results vary between operators, and there is no formal way to measure them and decide which operator to select. In this paper, we therefore propose to evaluate the results of merging operators using three ordering relations (fairness, satisfaction, and strength) over operators for a given profile. Moreover, a relation of conformity over operators is introduced in order to classify how well an operator conforms to the definition of a merging operator. Using the four proposed relations, we compare some classical merging operators and evaluate the results for some specific profiles.
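As a toy illustration of how different merging operators can disagree on an inconsistent profile, here is a sketch of two classical distance-based operators (Σ and leximax/GMax aggregation of Hamming distances); the encoding of bases as sets of models over a fixed set of atoms is a simplification, not the paper's framework:

```python
# Toy illustration: distance-based belief merging over a profile of bases,
# each base given by its set of models (bit tuples over n_atoms atoms).
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def dist_to_base(w, base):
    return min(hamming(w, m) for m in base)

def merge(profile, n_atoms, aggregate):
    """Return the interpretations minimising the aggregated distance profile."""
    worlds = list(product([0, 1], repeat=n_atoms))
    score = {w: aggregate([dist_to_base(w, b) for b in profile]) for w in worlds}
    best = min(score.values())
    return [w for w in worlds if score[w] == best]

profile = [{(1, 1, 0)}, {(0, 0, 0)}, {(1, 1, 1)}]           # three agents' bases
print(merge(profile, 3, sum))                                # Sigma operator
print(merge(profile, 3, lambda d: sorted(d, reverse=True)))  # GMax (leximax)
```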
Abstract:
We show that if E is an atomic Banach lattice with an order-continuous norm, A, B ∈ L^r(E), and M_{A,B} is the operator on L^r(E) defined by M_{A,B}(T) = ATB, then the regular norm satisfies ‖M_{A,B}‖_r = ‖A‖_r‖B‖_r, but that there is no real α > 0 for which the operator norm satisfies ‖M_{A,B}‖ ≥ α‖A‖_r‖B‖_r.
Abstract:
We describe all two-dimensional unital Riesz algebras and study their representations in Riesz algebras of regular operators. Although our results are not complete, we demonstrate that very varied behaviour can occur, even though all of these algebras can be given a Banach lattice algebra norm.
Abstract:
We present a rigorous methodology and new metrics for the fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales well enough to meet market throughput demand, that is, 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
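As a sketch of the style of metric involved (workload-specific energy efficiency and an iso-QoS energy comparison); the function names and numbers are illustrative assumptions, not the paper's definitions or measurements:

```python
# Sketch: platform-independent efficiency metrics of the kind used above.
# All numbers below are placeholders, not measurements from the paper.
def options_per_joule(options_priced, energy_joules):
    """Workload-specific energy efficiency: options priced per joule."""
    return options_priced / energy_joules

def iso_qos_energy_ratio(energy_a, energy_b):
    """Energy of platform A relative to B at the same delivered QoS
    (e.g. 100% of options priced within their market deadline)."""
    return energy_a / energy_b

# e.g. a scaled-out microserver cluster vs. a two-socket server at equal QoS:
print(iso_qos_energy_ratio(energy_a=550.0, energy_b=1000.0))  # -> 0.55
```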
Abstract:
In this paper, our previous work on Principal Component Analysis (PCA)-based fault detection is extended to the dynamic monitoring and detection of loss-of-main in power systems using wide-area synchrophasor measurements. In the previous work, a static PCA model was built and verified to be capable of detecting and extracting system faulty events; however, the false alarm rate was high. To address this problem, this paper uses the well-known 'time lag shift' method to include the dynamic behavior of the PCA model based on synchronized measurements from Phasor Measurement Units (PMUs), an approach termed Dynamic Principal Component Analysis (DPCA). Compared with the static PCA approach, as well as the traditional passive mechanisms of loss-of-main detection, the proposed DPCA procedure describes how the synchrophasors are linearly auto- and cross-correlated, based on conducting a singular value decomposition of the augmented time-lagged synchrophasor matrix. As in the static PCA method, two statistics, namely T² and Q, with confidence limits, are calculated to form intuitive charts for engineers or operators to monitor the loss-of-main situation in real time. The effectiveness of the proposed methodology is evaluated on the loss-of-main monitoring of a real system, where the historic data are recorded from PMUs installed at several locations in the UK/Ireland power system.
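A minimal sketch of the monitoring pipeline described above (time-lag augmentation of the synchrophasor matrix, SVD-based PCA, and the T² and Q statistics); the lag order, component count, and normalisation are placeholder settings, not the paper's tuned configuration, and confidence limits are omitted:

```python
# Sketch of a DPCA monitoring scheme: augment each synchrophasor sample with
# lagged copies, fit a PCA model by SVD on normal-operation data, then score
# new samples with Hotelling's T^2 and the Q (squared prediction error) statistic.
import numpy as np

def time_lag_augment(X, lags):
    """Stack [x_t, x_{t-1}, ..., x_{t-lags}] row-wise; X is (samples, channels)."""
    return np.hstack([X[lags - k : X.shape[0] - k] for k in range(lags + 1)])

def fit_dpca(X_normal, lags=2, n_pc=5):
    Xa = time_lag_augment(X_normal, lags)
    mu, sd = Xa.mean(0), Xa.std(0)
    Z = (Xa - mu) / sd                     # standardise the augmented matrix
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_pc].T                        # retained principal loadings
    var = (S[:n_pc] ** 2) / (Z.shape[0] - 1)  # variances of retained components
    return {"mu": mu, "sd": sd, "P": P, "var": var, "lags": lags}

def monitor(model, X_new):
    Z = (time_lag_augment(X_new, model["lags"]) - model["mu"]) / model["sd"]
    T = Z @ model["P"]                     # scores in the principal subspace
    t2 = ((T ** 2) / model["var"]).sum(1)  # Hotelling T^2 per sample
    resid = Z - T @ model["P"].T
    q = (resid ** 2).sum(1)                # Q / squared prediction error
    return t2, q
```

In use, t2 and q would be plotted against confidence limits fitted from the normal-operation data, with excursions beyond the limits flagging a potential loss-of-main event.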