Abstract:
Ozone-induced dissociation (OzID) is an alternative ion activation method that relies on the gas-phase ion-molecule reaction between a mass-selected target ion and ozone in an ion trap mass spectrometer. Herein, we evaluated the performance of OzID for both the structural elucidation and selective detection of conjugated carbon-carbon double bond motifs within lipids. The relative reactivity trends for [M + X]+ ions (where X = Li, Na, K) formed via electrospray ionization (ESI) of conjugated versus nonconjugated fatty acid methyl esters (FAMEs) were examined using two different OzID-enabled linear ion-trap mass spectrometers. Compared with nonconjugated analogues, FAMEs derived from conjugated linoleic acids were found to react up to 200 times faster and to yield characteristic radical cations. The significantly enhanced reactivity of conjugated isomers means that OzID product ions can be observed without invoking a reaction delay in the experimental sequence (i.e., trapping of ions in the presence of ozone is not required). This possibility has been exploited to undertake neutral-loss scans on a triple quadrupole mass spectrometer targeting characteristic OzID transitions. Such analyses reveal the presence of conjugated double bonds in lipids extracted from selected foodstuffs. Finally, by benchmarking the absolute ozone concentration inside the ion trap, second-order rate constants for the gas-phase reactions between unsaturated organic ions and ozone were obtained. These results demonstrate a significant influence of the adducting metal on reaction rate constants, in the order Li > Na > K.
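A minimal sketch of how such a second-order rate constant can be extracted, assuming pseudo-first-order conditions (ozone in large excess): precursor-ion depletion decays exponentially with trapping time, and dividing the observed rate by the benchmarked ozone concentration gives the second-order constant. All numbers below are illustrative, not values from the study.

```python
import numpy as np

# Illustrative OzID-style kinetics: precursor intensity vs. reaction delay.
trap_times = np.array([0.0, 0.1, 0.2, 0.5, 1.0])      # s, trapping delays
ion_signal = np.array([1.0, 0.82, 0.67, 0.37, 0.14])   # normalized intensity

# ln(I/I0) = -k_obs * t  ->  k_obs from the slope of a linear fit.
k_obs = -np.polyfit(trap_times, np.log(ion_signal), 1)[0]   # s^-1

ozone_conc = 2.0e11  # molecules cm^-3, hypothetical benchmarked [O3]
k2 = k_obs / ozone_conc  # cm^3 molecule^-1 s^-1, second-order rate constant
print(f"k_obs = {k_obs:.3f} s^-1, k2 = {k2:.2e} cm^3 molecule^-1 s^-1")
```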
Abstract:
An encryption scheme is non-malleable if giving an encryption of a message to an adversary does not increase its chances of producing an encryption of a related message (under a given public key). Fischlin introduced a stronger notion, known as complete non-malleability, which requires attackers to have negligible advantage even if they are allowed to transform the public key under which the related message is encrypted. Ventre and Visconti later proposed a comparison-based definition of this security notion, which is more in line with the well-studied definitions proposed by Bellare et al. They also provided additional feasibility results via two constructions of completely non-malleable schemes: one in the common reference string model using non-interactive zero-knowledge (NIZK) proofs, and another using interactive encryption schemes. However, the only previously known completely non-malleable (and non-interactive) scheme in the standard model is quite inefficient, as it relies on a generic NIZK approach, and the existence of efficient schemes in the common reference string model was left as an open problem. Recently, two efficient public-key encryption schemes have been proposed by Libert and Yung, and by Barbosa and Farshim, both based on pairing-based identity-based encryption. At ACISP 2011, Sepahi et al. proposed a method to achieve completely non-malleable public-key encryption using lattices, but no security proof was given for the proposed scheme. In this paper we review that scheme and provide its security proof in the standard model. Our study shows that Sepahi's scheme remains secure even in a post-quantum world, since there are currently no known quantum algorithms for solving lattice problems that perform significantly better than the best known classical (i.e., non-quantum) algorithms.
Abstract:
Recently, a convex hull-based human identification protocol was proposed by Sobrado and Birget, whose steps can be performed by humans without additional aid. The main part of the protocol involves the user mentally forming the convex hull of a set of secret icons among a larger set of graphical icons and then clicking randomly within this convex hull. While some rudimentary security issues of this protocol have been discussed, a comprehensive security analysis has been lacking. In this paper, we analyze the security of this convex hull-based protocol. In particular, we show two probabilistic attacks that reveal the user's secret after the observation of only a handful of authentication sessions. These attacks can be efficiently implemented, as their time and space complexities are considerably lower than those of a brute-force attack. We show that while the first attack can be mitigated through appropriately chosen values of the system parameters, the second attack succeeds with non-negligible probability even with large system parameter values that cross the threshold of usability.
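The geometric test at the heart of the protocol is whether a click falls inside the convex hull of the secret icons' positions. A minimal sketch of that test using scipy; the icon coordinates and function name are illustrative, not from the protocol specification.

```python
import numpy as np
from scipy.spatial import Delaunay

def inside_convex_hull(points, click):
    """True if `click` (x, y) lies inside the convex hull of `points`."""
    # find_simplex returns -1 for points outside the triangulated hull.
    return Delaunay(points).find_simplex(np.asarray(click)) >= 0

# Hypothetical screen positions of the user's secret icons.
secret_icons = np.array([[1.0, 1.0], [5.0, 2.0], [3.0, 6.0], [0.5, 4.0]])
print(inside_convex_hull(secret_icons, [2.5, 3.0]))  # True: inside the hull
print(inside_convex_hull(secret_icons, [6.0, 6.0]))  # False: outside
```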
Abstract:
WG-7 is a stream cipher based on the WG stream cipher, designed by Luo et al. (2010) for low-cost and lightweight applications (RFID tags and mobile phones, for instance). This paper addresses cryptographic weaknesses of the WG-7 stream cipher. We show that the keystream generated by WG-7 can be distinguished from a random sequence after observing 2^13.5 keystream bits, with negligible error probability. We also investigate the security of WG-7 against algebraic attacks and propose an algebraic key-recovery attack on the cipher. The attack recovers both the internal state and the secret key with a time complexity of about 2^27.
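As a generic illustration of how such a distinguishing attack is quantified (this is not the paper's actual WG-7 distinguisher, which exploits structural properties of the cipher), a minimal sketch of a bias-based distinguishing test:

```python
import math

def looks_nonrandom(bits, threshold_sigmas=3.0):
    """Flag a bit sequence whose ones-count deviates from the fair-coin
    expectation n/2 by more than `threshold_sigmas` standard deviations."""
    n = len(bits)
    deviation = abs(sum(bits) - n / 2)
    return deviation > threshold_sigmas * 0.5 * math.sqrt(n)

# Reliably detecting a bias of eps needs on the order of 1/eps^2 samples,
# which is how data complexities such as 2^13.5 keystream bits arise.
```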
Abstract:
Visual information is central to several scientific disciplines. This paper studies how scientists working in a multidisciplinary field produce scientific evidence by building and manipulating scientific visualizations. Using ethnographic methods, we studied the visualization practices of eight scientists working in the domain of tissue engineering research. Tissue engineering is an emerging field of research that deals with replacing or regenerating human cells, tissues, or organs to restore or establish normal function. We spent 3 months in the field, where we recorded laboratory sessions of these scientists and used semi-structured interviews to gain insight into their visualization practices. From our results, we identify two themes characterizing their visualization practices: multiplicity and physicality. In this article, we provide several examples of the scientists' visualization practices to describe these two themes and show that the multimodality of such practices plays an important role in scientific visualization.
Abstract:
For the purpose of developing collaborative support in design studio environments, we have carried out ethnographic fieldwork in professional and academic product design studios. Our intention was to understand design practices beyond the productivity point of view and to take into account the experiential, inspirational and aesthetic aspects of design practice. Using examples from our fieldwork, we develop our results around three broad themes by which design professionals support communication and collaboration: (1) use of artefacts, (2) use of space and (3) designerly practices. We use the results of our fieldwork to draw implications for designing technologies for the design studio culture.
Abstract:
Landscape change is an ongoing process, even within established urban landscapes. Yet analyses of fragmentation and deforestation have focused primarily on the conversion of non-urban to urban land in rural landscapes and have ignored established urban landscapes. To determine the ecological effects of continued urbanization in urban landscapes, tree-covered patches were mapped in the Gwynns Falls watershed (17,158.6 ha) in Maryland for 1994 and 1999 to document fragmentation, deforestation, and reforestation. The watershed was divided into lower (urban core), middle (older suburbs), and upper (recent suburbs) subsections. Over the entire watershed, a net 264.5 ha of the 4855.5 ha of tree-covered patches was converted to urban land use: 125 new tree-covered patches were added through fragmentation, 4 were added through reforestation, 43 were lost through deforestation, and 7 were combined with an adjacent patch. In addition, 180 patches were reduced in size. In the urban core, deforestation continued with conversion to commercial land use. Because of their lack of vegetation, commercial land uses are problematic for both species conservation and derived ecosystem benefits. In the lower subsection, shape complexity increased for tree-covered patches smaller than 10 ha. Changes in shape resulted from canopy expansion, planted materials, and reforestation of vacant sites. In the middle and upper subsections, the shape index value for tree-covered patches decreased, indicating simplification. Density analyses of the subsections showed no change in patch densities but pointed out the importance of small patches (≤5 ha) as "stepping stones" linking large patches (e.g., ≥100 ha). Using an urban forest effects model, we estimated total carbon loss and lost pollution removal for the entire watershed, from 1994 to 1999, to be 14,235,889.2 kg and 13,011.4 kg, respectively, due to urban land-use conversions.
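The abstract does not specify which shape index was used, so as an illustrative assumption, here is one common formulation: patch perimeter normalized by the perimeter of a circle of equal area, where 1 indicates a maximally compact patch and larger values indicate more complex shapes.

```python
import math

def shape_index(perimeter_m, area_m2):
    """Perimeter relative to that of an equal-area circle.
    1.0 = maximally compact; larger = more irregular patch shape."""
    return perimeter_m / (2.0 * math.sqrt(math.pi * area_m2))

# A 4-ha square patch (200 m x 200 m) is only slightly less compact
# than a circle; highly convoluted patches score much higher.
print(round(shape_index(800.0, 40_000.0), 3))  # ~1.128
```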
Abstract:
Objectives: Given that obesity is increasingly noted from early in life and that active lifestyles track across time, it is important that children be active from a very young age to prevent a foundation of unhealthy behaviours from forming. This study investigated, within a theory of planned behaviour (TPB) framework, the factors that influence mothers' decisions to ensure their child (1) engages in adequate physical activity (PA) and (2) has limited screen time. Methods: Mothers (N = 162) completed a main questionnaire, administered online or on paper, comprising standard TPB items in addition to measures of planning and background demographic variables. One week later, consenting mothers completed a follow-up telephone questionnaire that assessed the decisions they had made regarding their child's PA and screen time during the previous week. Results: Hierarchical multiple regression analyses supported the predictive model, explaining 73% and 78% of the variance in mothers' intentions, and 38% and 53% of the variance in mothers' decisions, to ensure their child engages in adequate PA and has limited screen time, respectively. Attitude and subjective norms predicted intention for both target behaviours, and intention in turn predicted both behaviours. Contrary to predictions, perceived behavioural control (PBC) for PA and planning for screen time were not significant predictors of intention, nor was PBC a predictor of either behaviour. Conclusions: The findings illustrate the various roles that psycho-social factors play in mothers' decisions to ensure their child engages in active lifestyle behaviours, which can help to inform future intervention programs aimed at combating very young children's inactivity.
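A sketch of the hierarchical (blockwise) regression analysis of the kind reported here, using statsmodels; the file and column names are hypothetical, and the point is the blockwise entry of TPB predictors and the change in explained variance.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tpb_mothers.csv")  # hypothetical dataset

# Block 1: core TPB predictors; Block 2: adds PBC and planning.
block1 = smf.ols("intention ~ attitude + subjective_norm", data=df).fit()
block2 = smf.ols("intention ~ attitude + subjective_norm + pbc + planning",
                 data=df).fit()

print(f"Block 1 R^2 = {block1.rsquared:.2f}")
print(f"Block 2 R^2 = {block2.rsquared:.2f} "
      f"(delta R^2 = {block2.rsquared - block1.rsquared:.2f})")
```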
Abstract:
The importance of repair, maintenance, minor alteration, and addition (RMAA) works is increasing in many developed societies. As the volume of RMAA works increases, so does the occurrence of RMAA accidents. The safety of RMAA works therefore deserves more attention; however, research on this important topic remains limited. Safety climate is considered a key factor influencing safety performance. The present study aims to determine the relationships between the safety climate and safety performance of RMAA works, thereby offering recommendations for improving RMAA safety. Questionnaires were dispatched to private property management companies, maintenance sections of quasi-government developers and their subcontractors, RMAA sections of general contractors, small RMAA contractors, building services contractors and trade unions in Hong Kong. In total, data from 396 questionnaires were collected from RMAA workers. The sample was divided into two equal-sized sub-samples: structural equation modelling (SEM) was used to test the model on the first sub-sample, and the model was then validated on the second. The model revealed a significant negative relationship between RMAA safety climate and the incidence of self-reported near misses and injuries, and significant positive relationships between RMAA safety climate and both safety participation and safety compliance. That is, a higher RMAA safety climate was associated with fewer self-reported near misses and injuries and with higher levels of safety participation and safety compliance.
Abstract:
In the field of information retrieval (IR), researchers and practitioners are often faced with a demand for valid approaches to evaluate the performance of retrieval systems. The Cranfield experiment paradigm has been dominant for the in-vitro evaluation of IR systems. As an alternative to this paradigm, laboratory-based user studies have been widely used to evaluate interactive information retrieval (IIR) systems and, at the same time, to investigate users' information searching behaviours. Major drawbacks of laboratory-based user studies for evaluating IIR systems include the high monetary and temporal costs of setting up and running the experiments, the lack of heterogeneity in the user population, and the limited scale of the experiments, which usually involve a relatively small set of users. In this paper, we propose an alternative experimental methodology to laboratory-based user studies. Our novel experimental methodology uses a crowdsourcing platform as a means of engaging study participants. Through crowdsourcing, our experimental methodology can capture user interactions and searching behaviours at a lower cost, with more data, and within a shorter period than traditional laboratory-based user studies, and can therefore be used to assess the performance of IIR systems. We show the characteristic differences of our approach with respect to traditional IIR experimental and evaluation procedures, and we perform a use case study comparing crowdsourcing-based evaluation with laboratory-based evaluation of IIR systems, which can serve as a tutorial for setting up crowdsourcing-based IIR evaluations.
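One common way such collected judgments feed a system comparison is a rank-based effectiveness metric. A minimal sketch of scoring a ranked result list against crowdsourced graded relevance judgments with nDCG; the judgments below are illustrative, and how they are aggregated (e.g., majority vote over workers) is an assumption.

```python
import math

def dcg(gains):
    """Discounted cumulative gain: gain / log2(rank + 1), ranks from 1."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(ranked_gains):
    """DCG normalized by the ideal (descending-gain) ordering."""
    ideal = dcg(sorted(ranked_gains, reverse=True))
    return dcg(ranked_gains) / ideal if ideal > 0 else 0.0

# Graded relevance (0-3) of a system's top-5 results, aggregated
# from crowd workers' judgments for one search task.
print(round(ndcg([3, 1, 0, 2, 0]), 3))
```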
Abstract:
The occurrence of extreme water levels along low-lying, highly populated and/or developed coastlines can lead to considerable loss of life and billions of dollars of damage to coastal infrastructure. It is therefore vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood management, engineering and future land-use planning. This ensures that the risks of catastrophic structural failure due to under-design, and of expensive waste due to over-design, are minimised. This paper estimates, for the first time, present-day extreme water level exceedance probabilities around the whole coastline of Australia. A high-resolution depth-averaged hydrodynamic model has been configured for the Australian continental shelf region and forced with tidal levels from a global tidal model and meteorological fields from a global reanalysis to generate a 61-year hindcast of water levels. Output from this model has been successfully validated against measurements from 30 tide gauge sites. At each coastal grid point of the model, extreme value distributions have been fitted to the derived time series of annual maxima, and to the several largest water levels of each year, to estimate exceedance probabilities. This provides a reliable estimate of water level probabilities around southern Australia, a region impacted mainly by extra-tropical cyclones. However, because the meteorological forcing used only weakly includes the effects of tropical cyclones, extreme water level probabilities are underestimated around the western, northern and north-eastern Australian coastline. In a companion paper we build on the work presented here to more accurately include tropical cyclone-induced surges in the estimation of extreme water levels. The multi-decadal hindcast generated here has been used primarily to estimate extreme water level exceedance probabilities, but it could be used more widely in the future for a variety of other research and practical applications.
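A minimal sketch of the annual-maxima step under standard extreme-value assumptions, using scipy: fit a GEV distribution to yearly maximum water levels at one grid point and read off a return level from the fitted exceedance probability. The water levels below are synthetic stand-ins, not hindcast output.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic 61-year series of annual maximum water levels (metres).
rng = np.random.default_rng(0)
annual_maxima = genextreme.rvs(c=-0.1, loc=1.2, scale=0.15,
                               size=61, random_state=rng)

# Fit the GEV distribution to the annual maxima.
c, loc, scale = genextreme.fit(annual_maxima)

# Water level with a 1% annual exceedance probability ("100-year" level).
level_100yr = genextreme.ppf(1 - 0.01, c, loc=loc, scale=scale)
print(f"100-year return level ~ {level_100yr:.2f} m")
```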
Abstract:
The incidence of major storm surges in the last decade has dramatically emphasized the immense destructive capability of extreme water level events, particularly when driven by severe tropical cyclones. Given this risk, it is vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood and erosion management, engineering and future land-use planning, and to ensure that the risks of catastrophic structural failure due to under-design, and of expensive waste due to over-design, are minimised. Australia has a long history of coastal flooding from tropical cyclones. Using a novel integration of two modeling techniques, this paper provides the first estimates of present-day extreme water level exceedance probabilities around the whole coastline of Australia, and the first estimates that combine the influence of astronomical tides, storm surges generated by both extra-tropical and tropical cyclones, and seasonal and inter-annual variations in mean sea level. Initially, an analysis of tide gauge records was used to assess the characteristics of tropical cyclone-induced surges around Australia. However, given the temporal and spatial dearth of information around much of the coastline, and the consequent inability of these gauge records to adequately describe the regional climatology, an observationally based stochastic tropical cyclone model has been developed to synthetically extend the tropical cyclone record to 10,000 years. Wind and pressure fields derived for these synthetically generated events have then been used to drive a hydrodynamic model of the Australian continental shelf region, with annual maximum water levels extracted to estimate exceedance probabilities around the coastline. To validate this methodology, selected historic storm surge events have been simulated and the resultant storm surges compared with gauge records. Tropical cyclone-induced exceedance probabilities have been combined with estimates derived from a 61-year water level hindcast, described in a companion paper, to give a single estimate of present-day extreme water level probabilities around the whole coastline of Australia. The results of this work are freely available to coastal engineers, managers and researchers via a web-based tool (www.sealevelrise.info). The described methodology could be applied to other regions of the world, such as the US east coast, that are subject to both extra-tropical and tropical cyclones.
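One advantage of a long synthetic catalogue is that exceedance probabilities can be estimated empirically, without fitting a distribution to a short record. A toy sketch of that step, assuming 10,000 years of simulated annual maximum surge levels at a single site; the levels are made-up stand-ins.

```python
import numpy as np

# Toy stand-in for a 10,000-year synthetic catalogue of annual maxima (m).
rng = np.random.default_rng(1)
synthetic_annual_maxima = rng.gumbel(loc=0.8, scale=0.3, size=10_000)

def annual_exceedance_prob(levels, threshold):
    """Empirical fraction of simulated years exceeding `threshold`."""
    return np.mean(levels > threshold)

p = annual_exceedance_prob(synthetic_annual_maxima, 2.0)
print(f"P(annual max > 2.0 m) ~ {p:.4f}  (return period ~ {1/p:.0f} years)")
```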
Abstract:
Effective machine fault prognostic technologies can eliminate unscheduled downtime, increase machine useful life, and consequently reduce maintenance costs and prevent human casualties in real-world engineering asset management. This paper presents a technique for accurately assessing the remnant life of machines, based on a health-state probability estimation technique and historical failure knowledge embedded in a closed-loop diagnostic and prognostic system. To estimate discrete machine degradation states that can effectively represent the complex nature of machine degradation, the proposed prognostic model employs a classification algorithm that, unlike conventional time-series analysis techniques, can exploit a number of damage-sensitive features for accurate long-term prediction. To validate the feasibility of the proposed model, data at five severity levels for four typical faults in High Pressure Liquefied Natural Gas (HP-LNG) pumps were used to compare intelligent diagnostic performance across five different classification algorithms. In addition, two sets of impeller-rub data were analysed and used to predict the remnant life of a pump based on health-state probability estimation with the Support Vector Machine (SVM) classifier. The results obtained were very encouraging and showed that the proposed prognostic system has the potential to serve as an estimation tool for machine remnant life prediction in real-life industrial applications.
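A minimal sketch of the health-state probability step: a classifier maps damage-sensitive features to discrete degradation states, and the class-membership probabilities then feed the remnant-life estimate. The features, labels and data below are hypothetical, not the HP-LNG pump dataset.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 4))     # e.g., RMS, kurtosis, band energies
y_train = rng.integers(0, 5, size=200)  # degradation states 0 (healthy)..4

# SVM with probability calibration enabled so it outputs P(state | features).
clf = SVC(probability=True).fit(X_train, y_train)

x_now = rng.normal(size=(1, 4))              # current feature vector
state_probs = clf.predict_proba(x_now)[0]    # health-state probabilities
print("health-state probabilities:", np.round(state_probs, 2))
```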
Abstract:
This paper presents two novel nonlinear models of U-shaped anti-roll tanks for ships, together with their linearizations. In addition, a third, simplified nonlinear model is presented. The models are derived using Lagrangian mechanics. This formulation not only simplifies the modeling process but also yields models that satisfy energy-related physical properties. The proposed nonlinear models and their linearizations are validated using model-scale experimental data. Unlike other models in the literature, the nonlinear models in this paper are valid for large roll amplitudes. Even at moderate roll angles, the nonlinear models have a mean square error, relative to the experimental data, that is three orders of magnitude lower than that of the linear models.
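To illustrate the kind of system being modeled (not the paper's models), here is a heavily simplified linear roll/tank coupling integrated with scipy. The structure, a damped roll oscillator stiffness-coupled to the tank fluid oscillation, and all coefficients are made-up, order-of-magnitude assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

I44, B44, C44 = 1.0, 0.05, 1.2   # roll inertia, damping, restoring
Itt, Btt, Ctt = 0.1, 0.02, 0.15  # tank fluid inertia, damping, restoring
K = 0.08                          # hypothetical roll-tank coupling stiffness

def rhs(t, y):
    phi, dphi, tau, dtau = y        # roll angle/rate, tank fluid angle/rate
    wave = 0.1 * np.sin(0.9 * t)    # harmonic wave excitation moment
    ddphi = (wave - B44 * dphi - C44 * phi - K * tau) / I44
    ddtau = (-Btt * dtau - Ctt * tau - K * phi) / Itt
    return [dphi, ddphi, dtau, ddtau]

sol = solve_ivp(rhs, (0.0, 100.0), [0.2, 0.0, 0.0, 0.0], max_step=0.05)
print(f"peak roll with tank active: {np.max(np.abs(sol.y[0])):.3f} rad")
```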