165 results for Server consolidation
Abstract:
Background: A reliable, standardized diagnosis of pneumonia in children has long been difficult to achieve. Clinical and radiological criteria have been developed by the World Health Organization (WHO); however, their generalizability to different populations is uncertain. We evaluated WHO-defined, chest radiograph (CXR)-confirmed alveolar pneumonia in the clinical context in Central Australian Aboriginal children, a high-risk population, hospitalized with acute lower respiratory illness (ALRI). Methods: CXRs from children (aged 1-60 months) hospitalized and treated with intravenous antibiotics for ALRI and enrolled in a randomized controlled trial (RCT) of vitamin A/zinc supplementation were matched with data collected during a population-based study of WHO-defined primary endpoint pneumonia (WHO-EPC). These CXRs were reread by a pediatric pulmonologist (PP) and classified as pneumonia-PP when alveolar changes were present. Sensitivities, specificities, and positive and negative predictive values (PPV, NPV) of clinical presentations were compared between WHO-EPC and pneumonia-PP. Results: Of the 147 episodes of hospitalized ALRI, WHO-EPC was diagnosed in 40 (27.2%), significantly less often than pneumonia-PP (difference 20.4%, 95% CI 9.6-31.2, P < 0.001). Clinical signs on admission were poor predictors of both pneumonia-PP and WHO-EPC; sensitivities ranged from a high of 45% for tachypnea to 5% for fever + tachypnea + chest indrawing, with corresponding PPVs of 40% and 20%. PPVs were higher against the pediatric pulmonologist's diagnosis than against WHO-EPC. Conclusions: WHO-EPC underestimates alveolar consolidation in a clinical context. Its use in clinical practice, or in research designed to inform clinical management in this population, should be avoided. Pediatr Pulmonol. 2012; 47:386-392. (C) 2011 Wiley Periodicals, Inc.
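The sensitivity, specificity, PPV and NPV figures reported above follow the standard 2×2 diagnostic-accuracy definitions. As a quick reference, a minimal sketch (the counts below are illustrative only, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy metrics."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Illustrative counts only (not taken from the study):
sens, spec, ppv, npv = diagnostic_metrics(tp=18, fp=27, fn=22, tn=80)
print(round(sens, 2), round(ppv, 2))  # 0.45 0.4
```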
Abstract:
Low circulating folate concentrations lead to elevations of plasma homocysteine. Even mild elevations of plasma homocysteine are associated with significantly increased risk of cardiovascular disease (CVD). Available evidence suggests that poor nutrition contributes to excessive premature CVD mortality in Australian Aboriginal people. The aim of the present study was to examine the effect of a nutrition intervention program conducted in an Aboriginal community on plasma homocysteine concentrations in a community-based cohort. From 1989, a health and nutrition project was developed, implemented and evaluated with the people of a remote Aboriginal community. Plasma homocysteine concentrations were measured in a community-based cohort of 14 men and 21 women screened at baseline, 6 months and 12 months. From baseline to 6 months there was a fall in mean plasma homocysteine of over 2 μmol/L (P = 0.006) but no further change thereafter (P = 0.433). These changes were associated with a significant increase in red cell folate concentration from baseline to 6 months (P < 0.001) and a further increase from 6 to 12 months (P < 0.001). In multiple regression analysis, change in homocysteine concentration from baseline to 6 months was predicted by change in red cell folate (P = 0.002) and baseline homocysteine (P < 0.001) concentrations, but not by age, gender or baseline red cell folate concentration. We conclude that modest improvements in dietary quality among populations with poor nutrition (and limited disposable income) can lead to reductions in CVD risk.
Abstract:
Floods are among the most devastating events affecting tropical, archipelagic countries such as the Philippines. With current climate change predictions including rising sea levels, intensification of typhoon strength and a general increase in mean annual precipitation throughout the Philippines, it has become paramount to prepare for the future so that the increased risk of floods to the country does not translate into more economic and human loss. Field work and data gathering were carried out within the framework of an internship at the former German Technical Cooperation (GTZ), in cooperation with the Local Government Unit of Ormoc City, Leyte, the Philippines, in order to develop a dynamic computer-based flood model for the basin of the Pagsangaan River. To this end, different geo-spatial analysis tools such as PCRaster and ArcGIS, hydrological analysis packages and basic engineering techniques were assessed and implemented. The aim was to develop a dynamic flood model and to use the development process to determine the required data, their availability and their impact on the results, as a case study for flood early warning systems in the Philippines. The hope is that such projects can help to reduce flood risk by including the results of worst-case scenario analyses and current climate change predictions in city planning for municipal development, monitoring strategies and early warning systems. The project was developed using a 1D-2D coupled model in SOBEK (the Deltares hydrological modelling software package) and was also used as a case study to analyze and understand the influence of factors such as land use, schematization, time step size and tidal variation on the flood characteristics. Several sources of relevant satellite data were compared, such as Digital Elevation Models (DEMs) from ASTER and SRTM data, as well as satellite rainfall data from the GIOVANNI server (NASA) and field gauge data.
Different methods were used in the attempt to partially calibrate and validate the model, and finally to simulate and study two climate change scenarios based on scenario A1B predictions. It was observed that large areas currently considered not prone to floods will become low flood risk (0.1-1 m water depth). Furthermore, larger sections of the floodplains upstream of the Liloan's Bridge will become moderate flood risk areas (1-2 m water depth). The flood hazard maps created during the development of the present project will be presented to the LGU, and the model will be used by GTZ's Local Disaster Risk Management Department to create a larger set of possible flood-prone areas related to rainfall intensity, and to study possible improvements to the current early warning system and monitoring of the basin section belonging to Ormoc City. Recommendations on further enhancement of the geo-hydro-meteorological data, to improve the model's accuracy mainly in areas of interest, will also be presented to the LGU.
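The hazard bands above map simulated water depth to a risk class. A minimal sketch of that mapping; the 0.1-1 m and 1-2 m thresholds come from the abstract, while the "negligible" and "high" labels outside those bands are assumptions:

```python
def flood_risk_class(depth_m: float) -> str:
    """Classify a simulated water depth into a hazard band.
    The 0.1-1 m (low) and 1-2 m (moderate) bands follow the study;
    labels outside those bands are illustrative assumptions."""
    if depth_m < 0.1:
        return "negligible"
    if depth_m <= 1.0:
        return "low"       # 0.1-1 m water depth
    if depth_m <= 2.0:
        return "moderate"  # 1-2 m water depth
    return "high"

print(flood_risk_class(0.5))  # low
```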
Abstract:
The geographic location of cloud data storage centres is an important issue for many organisations and individuals due to various regulations that require data and operations to reside in specific geographic locations. Thus, cloud users may want to be sure that their stored data have not been relocated into unknown geographic regions that may compromise the security of their stored data. Albeshri et al. (2012) combined proof of storage (POS) protocols with distance-bounding protocols to address this problem. However, their scheme involves unnecessary delay when utilising typical POS schemes due to computational overhead at the server side. The aim of this paper is to improve the basic GeoProof protocol by reducing the computation overhead at the server side. We show how this can maintain the same level of security while achieving more accurate geographic assurance.
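The distance-bounding half of GeoProof rests on a simple physical fact: a round-trip time upper-bounds the prover's distance, since the signal travels at most at light speed. Any server-side computation time that is not accounted for inflates that bound, which is exactly why reducing the POS overhead tightens the geographic assurance. A minimal sketch with hypothetical timings:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def max_distance_m(rtt_s: float, processing_s: float = 0.0) -> float:
    """Upper bound on the prover's distance from a measured round-trip
    time. Subtracting known server-side processing time tightens the
    bound; unaccounted POS computation loosens it."""
    return C * max(rtt_s - processing_s, 0.0) / 2.0

# Hypothetical numbers: a 2 ms RTT, of which 1.5 ms is POS computation.
naive = max_distance_m(0.002)            # bound if computation is ignored
tight = max_distance_m(0.002, 0.0015)    # bound after subtracting it
print(round(naive / 1000), round(tight / 1000))  # 300 75  (km)
```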
Abstract:
Currently, GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data, either in the RINEX file format or as real-time data streams in the RTCM format; very little computation is carried out by the reference station itself. The existing network-based processing modes, regardless of whether they are executed in real time or post-processed, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters, ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for estimated parameters may also be optionally provided. In such a mode the nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models, and the distinction is how the user receiver software deals with corrections from the reference station solutions and the ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations.
With station-based solutions from three reference stations within distances of 22–103 km the user receiver positioning results, with various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to the standard float PPP solutions without station augmentation and ambiguity resolutions. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
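The station-based corrections listed above (receiver clock, zenith tropospheric delay, code biases, ionospheric delay) enter the standard code-pseudorange observation model, P = ρ + c(δt_rx − δt_sat) + T + I + b. A minimal sketch of applying such corrections to recover the geometric range; all numeric values are hypothetical:

```python
C = 299_792_458.0  # speed of light, m/s

def corrected_range(pseudorange_m, rx_clock_s, sat_clock_s,
                    trop_m, iono_m, code_bias_m):
    """Recover the geometric range rho from a code pseudorange using
    the standard observation model P = rho + c*(dt_rx - dt_sat) + T + I + b.
    The correction terms are the station-based products described above."""
    return (pseudorange_m - C * (rx_clock_s - sat_clock_s)
            - trop_m - iono_m - code_bias_m)

# Hypothetical values for illustration:
rho = corrected_range(22_345_678.9, rx_clock_s=1e-6, sat_clock_s=2e-7,
                      trop_m=2.4, iono_m=3.1, code_bias_m=0.7)
print(round(rho, 1))
```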
Abstract:
When I began production on my autobiographical film Orchids: My Intersex Adventure in 2004, I must admit the ethics of what I was about to do were not something I had consciously considered. Yet they encircled the work I was about to undertake in a myriad of ways...
Abstract:
Body composition of 292 males aged between 18 and 65 years was measured using the deuterium oxide dilution technique. Participants were divided into development (n=146) and cross-validation (n=146) groups. Stature, body weight, skinfold thickness at eight sites, girth at five sites, and bone breadth at four sites were measured, and body mass index (BMI), waist-to-hip ratio (WHR), and waist-to-stature ratio (WSR) were calculated. Equations were developed using multiple regression analyses with skinfolds, breadth and girth measures, BMI, and other indices as independent variables and percentage body fat (%BF) determined from the deuterium dilution technique as the reference. All equations were then tested in the cross-validation group. Results from the reference method were also compared with existing prediction equations by Durnin and Womersley (1974), Davidson et al. (2011), and Gurrici et al. (1998). The proposed prediction equations were valid in our cross-validation samples, with r = 0.77-0.86, bias 0.2-0.5%, and pure error 2.8-3.6%. The strongest equation was generated from skinfolds, with r = 0.83, SEE 3.7%, and AIC 377.2. The Durnin and Womersley (1974) and Davidson et al. (2011) equations significantly (p<0.001) underestimated %BF by 1.0 and 6.9% respectively, whereas the Gurrici et al. (1998) equation significantly (p<0.001) overestimated %BF by 3.3% in our cross-validation samples compared to the reference. Results suggest that the proposed prediction equations are useful in the estimation of %BF in Indonesian men.
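Skinfold equations of the Durnin-Womersley family typically regress body density on the logarithm of summed skinfolds and then convert density to %BF with the Siri equation. A minimal sketch of that pipeline; the regression coefficients below are illustrative placeholders, not the equations fitted in this study:

```python
import math

def percent_body_fat(sum_skinfolds_mm: float,
                     a: float = 1.1631, b: float = 0.0632) -> float:
    """Skinfold sum -> body density -> %BF via the Siri equation.
    a and b are illustrative Durnin-Womersley-style coefficients,
    NOT the values fitted in the study above."""
    density = a - b * math.log10(sum_skinfolds_mm)  # g/mL
    return 495.0 / density - 450.0                  # Siri conversion

# Example: a 50 mm sum of skinfolds under the placeholder coefficients.
print(round(percent_body_fat(50.0), 1))
```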
Abstract:
Learning and memory depend on signaling molecules that affect synaptic efficacy. The cytoskeleton has been implicated in regulating synaptic transmission but its role in learning and memory is poorly understood. Fear learning depends on plasticity in the lateral nucleus of the amygdala. We therefore examined whether the cytoskeletal-regulatory protein, myosin light chain kinase, might contribute to fear learning in the rat lateral amygdala. Microinjection of ML-7, a specific inhibitor of myosin light chain kinase, into the lateral nucleus of the amygdala before fear conditioning, but not immediately afterward, enhanced both short-term memory and long-term memory, suggesting that myosin light chain kinase is involved specifically in memory acquisition rather than in posttraining consolidation of memory. Myosin light chain kinase inhibitor had no effect on memory retrieval. Furthermore, ML-7 had no effect on behavior when the training stimuli were presented in a non-associative manner. Anatomical studies showed that myosin light chain kinase is present in cells throughout the lateral nucleus of the amygdala and is localized to dendritic shafts and spines that are postsynaptic to the projections from the auditory thalamus to the lateral nucleus of the amygdala, a pathway specifically implicated in fear learning. Inhibition of myosin light chain kinase enhanced long-term potentiation, a physiological model of learning, in the auditory thalamic pathway to the lateral nucleus of the amygdala. When ML-7 was applied without associative tetanic stimulation it had no effect on synaptic responses in the lateral nucleus of the amygdala. Thus, myosin light chain kinase activity in the lateral nucleus of the amygdala appears to normally suppress synaptic plasticity in the circuits underlying fear learning, suggesting that myosin light chain kinase may help prevent the acquisition of irrelevant fears. Impairment of this mechanism could contribute to pathological fear learning.
Abstract:
Cloud computing is a developing revolution in information technology that is disrupting the way individuals and corporate entities operate, while enabling new distributed services that did not exist before. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services. Security is often said to be a major concern of users considering migration to cloud computing. This article examines some of these security concerns and surveys recent research efforts in cryptography to provide new technical mechanisms suitable for the new scenarios of cloud computing. We consider techniques such as homomorphic encryption, searchable encryption, proofs of storage, and proofs of location. These techniques allow cloud computing users to benefit from cloud server processing capabilities while keeping their data encrypted, and to check independently the integrity and location of their data. Overall we are interested in how users may be able to maintain and verify their own security without having to rely on the trust of the cloud provider.
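The core idea behind homomorphic encryption, operating on data the server cannot read, can be illustrated with a classic toy example: textbook RSA (without padding) is multiplicatively homomorphic. A minimal sketch with tiny, insecure parameters, purely to show the property, not any scheme from the surveyed literature:

```python
# Textbook RSA (no padding) is multiplicatively homomorphic:
# Enc(a) * Enc(b) mod n decrypts to a * b. Toy parameters, NOT secure.
p, q = 61, 53
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (= 2753)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 12
# The server can multiply ciphertexts without ever seeing a or b:
c_prod = (enc(a) * enc(b)) % n
print(dec(c_prod))  # 84 == a * b
```

Fully homomorphic schemes extend this idea to both addition and multiplication, at far greater cost; the surveyed techniques trade off generality against efficiency.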
Abstract:
We propose a new kind of asymmetric mutual authentication from passwords with stronger privacy against malicious servers, lest they be tempted to engage in "cross-site user impersonation" against each other. It enables a person to authenticate to arbitrarily many independent servers, over adversarial channels, using a single short password that is both memorable and reusable. Besides the usual PAKE security guarantees, our framework goes to lengths to secure the password against brute-force cracking from privileged server information.
Abstract:
We revisit the venerable question of access credentials management, which concerns the techniques that we, humans with limited memory, must employ to safeguard our various access keys and tokens in a connected world. Although many existing solutions can be employed to protect a long secret using a short password, those solutions typically require certain assumptions on the distribution of the secret and/or the password, and are helpful against only a subset of the possible attackers. After briefly reviewing a variety of approaches, we propose a user-centric comprehensive model to capture the possible threats posed by online and offline attackers, from the outside and the inside, against the security of both the plaintext and the password. We then propose a few very simple protocols, adapted from the Ford-Kaliski server-assisted password generator and the Boldyreva unique blind signature in particular, that provide the best protection against all kinds of threats, for all distributions of secrets. We also quantify the concrete security of our approach in terms of online and offline password guesses made by outsiders and insiders, in the random-oracle model. The main contribution of this paper lies not in the technical novelty of the proposed solution, but in the identification of the problem and its model. Our results have an immediate and practical application for the real world: they show how to implement single-sign-on stateless roaming authentication for the internet, in an ad hoc, user-driven fashion that requires no change to protocols or infrastructure.
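The Ford-Kaliski construction is built on blind exponentiation: the client learns H(pw)^k, hardened by the server's key k, while the server never sees the password. A minimal multiplicative-group sketch, assuming a toy safe prime and a crude hash-to-group mapping (illustration only, not the paper's protocol or secure parameters):

```python
import hashlib
import secrets

# Toy Ford-Kaliski-style blind password hardening.
# p = 2q + 1 is a (tiny, insecure) safe prime; g = 4 generates the
# order-q subgroup of squares mod p.
p, q, g = 1019, 509, 4

def hash_to_group(pw: bytes) -> int:
    """Crude hash-to-subgroup mapping for illustration only."""
    e = int.from_bytes(hashlib.sha256(pw).digest(), "big") % q
    return pow(g, e or 1, p)

h = hash_to_group(b"correct horse battery staple")

# Client blinds H(pw) so the server learns nothing about the password:
r = secrets.randbelow(q - 1) + 1
blinded = pow(h, r, p)                      # -> sent to the server

k = 123                                     # server's long-term secret key
response = pow(blinded, k, p)               # <- server's reply

# Client unblinds with r^-1 mod q, recovering H(pw)^k:
hardened = pow(response, pow(r, -1, q), p)
assert hardened == pow(h, k, p)
```

The hardened value H(pw)^k can then be used to derive the long secret; an offline attacker must interact with the server (or steal k) for every password guess, which is the point of the server-assisted design.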
Abstract:
The sum of k mins protocol was proposed by Hopper and Blum as a protocol for secure human identification. The goal of the protocol is to let an unaided human securely authenticate to a remote server. The main ingredient of the protocol is the sum of k mins problem; the difficulty of solving this problem determines the security of the protocol. In this paper, we show that the sum of k mins problem is NP-complete and W[1]-hard. The latter notion relates to fixed-parameter intractability. We also discuss the use of the sum of k mins protocol in resource-constrained devices.
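The human-computable response at the heart of the scheme can be sketched as follows: the secret is k memorized pairs of positions, and for each random digit challenge the user returns the sum of the per-pair minima modulo 10 (a simplified rendering of the Hopper-Blum construction; the exact parameterization in the paper may differ):

```python
def response(secret_pairs, challenge_digits):
    """Sum-of-k-mins response: for each memorized position pair,
    take the smaller of the two challenge digits, then sum mod 10."""
    return sum(min(challenge_digits[i], challenge_digits[j])
               for i, j in secret_pairs) % 10

secret = [(0, 3), (1, 5), (2, 4)]   # k = 3 memorized position pairs
challenge = [7, 2, 9, 4, 1, 8]      # server's random digit challenge

# min(7,4) + min(2,8) + min(9,1) = 4 + 2 + 1 = 7
print(response(secret, challenge))  # 7
```

Recovering the secret pairs from observed challenge/response transcripts is an instance of the sum of k mins problem whose hardness the paper analyzes.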
Abstract:
E-mail spam has remained a scourge and a menacing nuisance for users and for internet and network service operators and providers, in spite of the anti-spam techniques available; spammers relentlessly circumvent the anti-spam software embedded or installed on both the client and server sides of fixed and mobile devices. This continuous evasion degrades the capabilities of these anti-spam techniques, as none of them provides a comprehensive, reliable solution to the problem posed by spam and spammers. A major problem arises, for instance, when an anti-spam technique misjudges or misclassifies legitimate email as spam (a false positive), or fails to block spam on the SMTP server (a false negative). In the latter case the spam passes on to the receiver, yet the originating server does not notice, and has no auto-alert service to indicate, that the spam it was designed to prevent has slipped through to the receiver's SMTP server; the receiver's SMTP server may in turn fail to stop the spam from reaching the user's device, again with no auto-alert mechanism to report this inability. The result is a staggering cost in lost time, effort and money. This paper takes a comparative literature overview of some of these anti-spam techniques, especially the filtering technologies designed to prevent spam, and of their merits and demerits, in order to entrench their capability enhancements, together with evaluative analytical recommendations that will be subject to further research.
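The false-positive and false-negative failure modes described above are usually quantified as rates over classified mail. A minimal sketch with illustrative counts (not measurements from any surveyed system):

```python
def filter_error_rates(spam_blocked, spam_passed, ham_blocked, ham_passed):
    """False-positive rate (legitimate mail wrongly blocked) and
    false-negative rate (spam wrongly delivered) for a spam filter."""
    fpr = ham_blocked / (ham_blocked + ham_passed)
    fnr = spam_passed / (spam_passed + spam_blocked)
    return fpr, fnr

# Illustrative counts: 1000 spam and 1000 legitimate ("ham") messages.
fpr, fnr = filter_error_rates(spam_blocked=940, spam_passed=60,
                              ham_blocked=5, ham_passed=995)
print(fpr, fnr)  # 0.005 0.06
```

The asymmetry matters in practice: a false positive silently loses a legitimate message, which is usually far more costly than letting one spam through.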
Abstract:
Recently a new human authentication scheme called PAS (predicate-based authentication service) was proposed, which does not require the assistance of any supplementary device. The main security claim of PAS is to resist passive adversaries who can observe the whole authentication session between the human user and the remote server. In this paper we show that PAS is insecure against both brute force attack and a probabilistic attack. In particular, we show that its security against brute force attack was strongly overestimated. Furthermore, we introduce a probabilistic attack, which can break part of the password even with a very small number of observed authentication sessions. Although the proposed attack cannot completely break the password, it can downgrade the PAS system to a much weaker system similar to common OTP (one-time password) systems.
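The mechanism behind such probabilistic attacks is generic: every observed (challenge, response) session eliminates each candidate secret that would have answered differently, shrinking the search space far below the nominal brute-force bound. A toy illustration of that filtering, using an invented additive scheme rather than the actual PAS design:

```python
from itertools import product

def respond(secret, challenge):
    """Toy response function (NOT the PAS predicate): sum of the
    secret digits at the challenged positions, mod 10."""
    return sum(secret[i] for i in challenge) % 10

SECRET = (3, 1, 4, 1)  # the user's 4-digit secret (attacker's target)

# Sessions the passive attacker has observed:
observed = [((0, 2), respond(SECRET, (0, 2))),
            ((1, 3), respond(SECRET, (1, 3))),
            ((0, 1, 2, 3), respond(SECRET, (0, 1, 2, 3)))]

# Keep only the secrets consistent with every observed session:
candidates = [s for s in product(range(10), repeat=4)
              if all(respond(s, c) == r for c, r in observed)]
print(len(candidates))  # 100, down from the 10**4 a priori candidates
```

Even when the surviving set is not a singleton, as here, the attacker has downgraded the system's effective secret space, which is exactly the kind of degradation the paper demonstrates against PAS.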
Abstract:
We consider the following problem: members of a dynamic group retrieve their encrypted data from an untrusted server based on keywords, without any loss of data confidentiality or of members' privacy. In this paper, we investigate common secure indices for conjunctive keyword-based retrieval over encrypted data, and construct an efficient scheme from the dynamic accumulator of Wang et al., the Nyberg combinatorial accumulator and the public-key encryption system of Kiayias et al. The proposed scheme is trapdoorless and keyword-field free. Its security is proved in the random oracle model under the decisional composite residuosity and extended strong RSA assumptions.
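The underlying retrieval pattern, a server matching keyword tags against per-document indices without learning the keywords, can be illustrated with a deliberately simple HMAC-based index. This is a toy far weaker than the accumulator-based construction above (it leaks tag equality across documents), shown only to make "conjunctive keyword-based retrieval over encrypted data" concrete:

```python
import hashlib
import hmac

KEY = b"group shared key (toy illustration)"

def tag(word: str) -> bytes:
    """Deterministic keyword tag under the group key; the server sees
    only these tags, never the keywords themselves."""
    return hmac.new(KEY, word.encode(), hashlib.sha256).digest()

# Server-side index: document id -> set of keyword tags.
index = {
    "doc1": {tag("cloud"), tag("security")},
    "doc2": {tag("cloud"), tag("gnss")},
}

def conjunctive_search(words):
    """Return ids of documents whose index contains ALL queried tags."""
    query = {tag(w) for w in words}
    return [doc for doc, tags in index.items() if query <= tags]

print(conjunctive_search(["cloud", "security"]))  # ['doc1']
```

A trapdoorless, keyword-field-free scheme such as the one proposed avoids exactly the leakage this toy exhibits, at the price of the heavier accumulator machinery.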