954 results for Context-aware applications
Abstract:
Access control (AC) is a necessary defense against a large variety of security attacks on the resources of distributed enterprise applications. However, to be effective, AC in some application domains has to be fine-grained, support the use of application-specific factors in authorization decisions, and consistently and reliably enforce organization-wide authorization policies across enterprise applications. Because existing middleware technologies do not provide a complete solution, application developers resort to embedding AC functionality in application systems. This coupling of AC functionality with application logic causes significant problems, including tremendously difficult, costly, and error-prone development, integration, and overall ownership of application software. The way AC for application systems is engineered needs to change. In this dissertation, we propose an architectural approach for engineering AC mechanisms to address the above problems. First, we develop a framework for implementing the role-based access control (RBAC) model using the AC mechanisms provided by CORBA Security. For those application domains where the granularity of CORBA controls and the expressiveness of the RBAC model suffice, our framework addresses the stated problem. In the second and main part of our approach, we propose an architecture for an authorization service, RAD, to address the problem of controlling access to distributed application resources when the granularity and support for complex policies offered by middleware AC mechanisms are inadequate. Applying this architecture, we developed a CORBA-based application authorization service (CAAS). Using CAAS, we studied the main properties of the architecture and showed how they can be realized by employing CORBA and Java technologies. Our approach enables a wide-ranging solution for controlling access to the resources of distributed enterprise applications.
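As a hedged illustration of the kind of role-based authorization decision such a service centralizes, the following minimal Python sketch shows a permission check decoupled from application logic; the class and method names (Role, AccessDecisionService, access_allowed) are hypothetical and are not taken from RAD or CAAS.

```python
# Illustrative sketch only: a minimal role-based access control (RBAC) check of the
# kind an application-level authorization service might expose. All names here are
# hypothetical and not drawn from the CAAS/RAD implementation.

class Role:
    def __init__(self, name, permissions):
        self.name = name
        self.permissions = set(permissions)   # (resource, operation) pairs

class AccessDecisionService:
    """Centralized decision point, decoupled from the application's business logic."""
    def __init__(self):
        self.user_roles = {}                  # user id -> set of Role

    def assign_role(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def access_allowed(self, user, resource, operation):
        # Grant access if any of the user's roles carries the requested permission.
        return any((resource, operation) in r.permissions
                   for r in self.user_roles.get(user, ()))

# Usage: the application delegates the decision instead of embedding AC logic.
ads = AccessDecisionService()
clerk = Role("clerk", [("account", "read")])
ads.assign_role("alice", clerk)
print(ads.access_allowed("alice", "account", "read"))    # True
print(ads.access_allowed("alice", "account", "update"))  # False
```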
Abstract:
The enzyme S-adenosyl-L-homocysteine (AdoHcy) hydrolase effects hydrolytic cleavage of AdoHcy to adenosine (Ado) and L-homocysteine (Hcy). The cellular levels of AdoHcy and Hcy are critical because AdoHcy is a potent feedback inhibitor of crucial transmethylation enzymes. Also, elevated plasma levels of Hcy in humans have been shown to be a risk factor in coronary artery disease. On the basis of the previous finding that AdoHcy hydrolase is able to add the enzyme-sequestered water molecule across the 5',6'-double bond of (halo or dihalohomovinyl)adenosines, causing covalent binding inhibition, we designed and synthesized AdoHcy analogues with the 5',6'-olefin motif incorporated in place of the carbon-5' and sulfur atoms. From the available synthetic methods we chose two independent approaches: the first was based on the construction of a new C5'-C6' double bond via metathesis reactions, and the second was based on the formation of a new C6'-C7' single bond via Pd-catalyzed cross-couplings. Cross-metathesis of the suitably protected 5'-deoxy-5'-methyleneadenosine with racemic 2-amino-5-hexenoate in the presence of the Hoveyda-Grubbs catalyst, followed by standard deprotection, afforded the desired analogue as the 5'E isomer of an inseparable mixture of 9'R/S diastereomers. Metathesis of chiral homoallylglycine [(2S)-amino-5-hexenoate] produced the AdoHcy analogue with established stereochemistry: E at the C5' atom and S at the C9' atom. The 5'-bromovinyl analogue was synthesized using a bromination-dehydrobromination strategy with pyridinium tribromide and DBU. Since literature reports on the Pd-catalyzed monoalkylation of dihaloalkenes (Csp2-Csp3 coupling) were scarce, we were prompted to undertake model studies on Pd-catalyzed coupling between vinyl dihalides and alkyl organometallics. The 1-fluoro-1-haloalkenes were found to undergo Negishi couplings with alkylzinc bromides to give multisubstituted fluoroalkenes. The alkylation was trans-selective, affording pure Z-fluoroalkenes. The highest yields were obtained with the PdCl2(dppb) catalyst, but the best stereochemical outcome was obtained with the less reactive Pd(PPh3)4. Couplings of 1,1-dichloro- and 1,1-dibromoalkenes with organozinc reagents resulted in the formation of monocoupled 1-halovinyl products.
Abstract:
The Highway Safety Manual (HSM) estimates roadway safety performance based on predictive models that were calibrated using national data. Calibration factors are then used to adjust these predictive models to local conditions for local applications. The HSM recommends that local calibration factors be estimated using 30 to 50 randomly selected sites that experienced at least a total of 100 crashes per year. It also recommends that the factors be updated every two to three years, preferably on an annual basis. However, these recommendations are primarily based on expert opinions rather than data-driven research findings. Furthermore, most agencies do not have data for many of the input variables recommended in the HSM. This dissertation is aimed at determining the best way to meet three major data needs affecting the estimation of calibration factors: (1) the required minimum sample sizes for different roadway facilities, (2) the required frequency of calibration factor updates, and (3) the influential variables affecting calibration factors. In this dissertation, statewide segment and intersection data were first collected for most of the HSM-recommended calibration variables using a Google Maps application. In addition, eight years (2005-2012) of traffic and crash data were retrieved from existing databases of the Florida Department of Transportation. With these data, the effect of the sample size criterion on calibration factor estimates was first studied using a sensitivity analysis. The results showed that the minimum sample sizes not only vary across different roadway facilities, but are also significantly higher than those recommended in the HSM. In addition, results from paired sample t-tests showed that calibration factors in Florida need to be updated annually. To identify influential variables affecting the calibration factors for roadway segments, the variables were prioritized by combining the results from three different methods: negative binomial regression, random forests, and boosted regression trees. Only a few variables were found to explain most of the variation in the crash data. Traffic volume was consistently found to be the most influential. In addition, roadside object density, major and minor commercial driveway densities, and minor residential driveway density were also identified as influential variables.
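As a brief illustration, the HSM local calibration factor is the ratio of total observed to total HSM-predicted crashes over the selected sites; the sketch below uses hypothetical per-site counts and is not code or data from the dissertation.

```python
# Sketch, not the dissertation's code: the HSM local calibration factor is the ratio
# of total observed to total HSM-predicted crashes over the selected sites.
def calibration_factor(observed, predicted):
    """observed, predicted: per-site crash counts from the same set of sites."""
    return sum(observed) / sum(predicted)

# Hypothetical example: 5 segments, observed vs. HSM-predicted annual crashes.
obs  = [12, 7, 20, 15, 9]
pred = [10.4, 8.1, 16.7, 13.2, 8.8]
print(round(calibration_factor(obs, pred), 3))  # ~1.10 -> local crashes exceed the base model
```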
Abstract:
The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans is still relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance of considerably more data archived on calcification and primary production than on other processes has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.
Abstract:
Highlights of Data Expedition:
• Students explored daily observations of local climate data spanning the past 35 years.
• Topological Data Analysis, or TDA for short, provides cutting-edge tools for studying the geometry of data in arbitrarily high dimensions.
• Using TDA tools, students discovered intrinsic dynamical features of the data and learned how to quantify periodic phenomena in a time series.
• Since nature invariably produces noisy data which rarely has exact periodicity, students also considered the theoretical basis of almost-periodicity and even invented and tested new mathematical definitions of almost-periodic functions.

Summary
The dataset we used for this data expedition comes from the Global Historical Climatology Network. “GHCN (Global Historical Climatology Network)-Daily is an integrated database of daily climate summaries from land surface stations across the globe.” Source: https://www.ncdc.noaa.gov/oa/climate/ghcn-daily/ We focused on the daily maximum and minimum temperatures from January 1, 1980 to April 1, 2015 collected at RDU International Airport. Through a guided series of exercises designed to be performed in Matlab, students explore these time series, initially by direct visualization and basic statistical techniques. Then students are guided through a special sliding-window construction which transforms a time series into a high-dimensional geometric curve. These high-dimensional curves can be visualized by projecting down to lower dimensions, as in Figure 1; however, our focus here was to use persistent homology to directly study the high-dimensional embedding. The shape of these curves carries meaningful information, but how one describes the “shape” of data depends on the scale at which the data is considered, and choosing the appropriate scale is rarely obvious. Persistent homology overcomes this obstacle by allowing us to quantitatively study geometric features of the data across multiple scales. Through this data expedition, students are introduced to numerically computing persistent homology using the Rips collapse algorithm and interpreting the results. In the specific context of sliding-window constructions, 1-dimensional persistent homology can reveal the nature of periodic structure in the original data. I created a special technique to study how these high-dimensional sliding-window curves form loops in order to quantify the periodicity. Students are guided through this construction and learn how to visualize and interpret this information. Climate data is extremely complex (as anyone who has suffered from a bad weather prediction can attest) and numerous variables play a role in determining our daily weather and temperatures. This complexity, coupled with imperfections of measuring devices, results in very noisy data, which causes the annual seasonal periodicity to be far from exact. To this end, I have students explore existing theoretical notions of almost-periodicity and test them on the data. They find that some existing definitions are inadequate in this context. Hence I challenged them to invent new mathematics by proposing and testing their own definitions. These students rose to the challenge and suggested a number of creative definitions. While autocorrelation and spectral methods based on Fourier analysis are often used to explore periodicity, the construction here provides an alternative paradigm to quantify periodic structure in almost-periodic signals using tools from topological data analysis.
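As a hedged illustration of the sliding-window construction described above, the following Python/NumPy sketch (rather than the expedition's Matlab exercises; the window size, lag, and toy signal are assumptions) produces the kind of high-dimensional point cloud a persistent homology tool would then consume.

```python
import numpy as np

# Minimal sketch of a sliding-window embedding: a 1-D time series is mapped to a
# curve in R^d by taking d consecutive (lagged) samples per point.
def sliding_window(x, d=30, tau=1):
    x = np.asarray(x, dtype=float)
    n = len(x) - (d - 1) * tau
    return np.array([x[i:i + (d - 1) * tau + 1:tau] for i in range(n)])

# Toy almost-periodic signal standing in for daily temperatures (not the GHCN data).
t = np.arange(2000)
temps = 10 * np.sin(2 * np.pi * t / 365.25) + np.random.normal(0, 2, t.size)
cloud = sliding_window(temps, d=40, tau=5)
print(cloud.shape)  # (number of windows, 40); this point cloud is the input to persistent homology
```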
Abstract:
A substantial amount of information on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing data production, however, pushes data analytic platforms to their limits. This thesis proposes techniques for more efficient textual big data analysis suitable for the Hadoop analytic platform. This research explores the direct processing of compressed textual data. The focus is on developing novel compression methods with a number of desirable properties to support text-based big data analysis in distributed environments. The novel contributions of this work include the following. Firstly, a Content-aware Partial Compression (CaPC) scheme is developed. CaPC makes a distinction between informational and functional content, in which only the informational content is compressed. Thus, the compressed data is made transparent to existing software libraries, which often rely on functional content to work. Secondly, a context-free bit-oriented compression scheme (Approximated Huffman Compression) based on the Huffman algorithm is developed. This uses a hybrid data structure that allows pattern searching in compressed data in linear time. Thirdly, several modern compression schemes have been extended so that the compressed data can be safely split with respect to logical data records in distributed file systems. Furthermore, an innovative two-layer compression architecture is used, in which each compression layer is appropriate for the corresponding stage of data processing. Peripheral libraries are developed that seamlessly link the proposed compression schemes to existing analytic platforms and computational frameworks, and also make the use of the compressed data transparent to developers. The compression schemes have been evaluated for a number of standard MapReduce analysis tasks using a collection of real-world datasets. In comparison with existing solutions, they have shown substantial improvement in performance and significant reduction in system resource requirements.
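As a hedged illustration of the classical Huffman coding that the Approximated Huffman Compression scheme builds on, the sketch below constructs a prefix code from symbol frequencies; it does not reproduce the thesis's hybrid data structure or its splittable variants.

```python
import heapq
from collections import Counter

# Minimal sketch of classical Huffman code construction.
def huffman_codes(text):
    freq = Counter(text)
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {sym: "0" for sym in heap[0][2]}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "hadoop handles big textual data"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print(len(encoded), "bits vs", 8 * len(text), "bits uncompressed")
```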
Abstract:
The analysis of white latex paint is a problem for forensic laboratories because of the difficulty in differentiating between samples. Current methods provide limited information that is not suitable for discrimination. Elemental analysis of white latex paints has resulted in 99% discriminating power when using LA-ICP-MS; however, mass spectrometers can be prohibitively expensive and require a skilled operator. A quick, inexpensive, effective method is needed for the differentiation of white latex paints. In this study, LIBS is used to analyze 24 white latex paint samples. LIBS is fast, easy to operate, and has a low cost. Results show that 98.1% of the variation can be accounted for via principal component analysis, while Tukey pairwise comparisons differentiated 95.6% of samples using potassium as the elemental ratio, showing that the discrimination capabilities of LIBS are comparable to those of LA-ICP-MS. Given the many advantages of LIBS, this instrument should be considered a necessity for forensic laboratories.
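As a hedged illustration of the principal component analysis step, the sketch below computes explained-variance ratios on synthetic data standing in for element-ratio measurements; it is not the study's data or code.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch only: how PCA summarizes the variance of multi-element LIBS intensity ratios.
rng = np.random.default_rng(0)
samples = rng.normal(size=(24, 6))            # 24 samples x 6 element ratios (synthetic)
pca = PCA(n_components=3)
scores = pca.fit_transform(samples)
print(pca.explained_variance_ratio_)          # fraction of variance captured per component
print(pca.explained_variance_ratio_.sum())    # cumulative variance for the first 3 PCs
```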
Abstract:
Capillary electrophoresis (CE) is a modern analytical technique in which electrokinetic separation is generated by high voltage and takes place inside small capillaries. In this dissertation, several advanced capillary electrophoresis methods are presented using different CE approaches, with UV and mass spectrometry utilized as the detection methods. Capillary electrochromatography (CEC), one of the CE modes, is a recently developed technique that is a hybrid of capillary electrophoresis and high performance liquid chromatography (HPLC). Capillary electrochromatography exhibits advantages of both techniques. In Chapter 2, a monolithic capillary column is fabricated using an in situ photoinitiated polymerization method. The column was then applied to the separation of six antidepressant compounds. Meanwhile, a simple chiral separation method is developed and presented in Chapter 3. Beta-cyclodextrin was utilized to achieve chiral separation. Not only were twelve cathinone analytes separated, but isomers of several analytes were also enantiomerically separated. To obtain better molecular information on the analytes, a TOF-MS system was coupled with the CE. A sheath liquid and a partial filling technique (PFT) were employed to reduce the contamination of the MS ionization source. Accurate molecular information was obtained. It is necessary to propose, develop, and optimize new techniques that are suitable for trace-level analysis of samples in forensic, pharmaceutical, and environmental applications. Capillary electrophoresis (CE) was selected for this task, as it requires smaller amounts of sample, simplifies sample preparation, and has the flexibility to perform separations of neutral and charged molecules as well as enantiomers. Overall, the study demonstrates the versatility of capillary electrophoresis methods in forensic, pharmaceutical, and environmental applications.
Abstract:
Cloud computing offers massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make these available for end-users via simple graphical user interfaces. The integration of such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure-aware workflows is suggested and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution based on the MapReduce paradigm in the cloud. The provided analysis demonstrates that the methods described to integrate Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
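As a hedged illustration of the MapReduce paradigm referenced above, the following Python sketch runs a word count locally; in the paper's setup the equivalent map and reduce steps would be submitted to Hadoop from the WS-PGRADE/gUSE workflow rather than executed in-process.

```python
from collections import defaultdict

# Illustrative local simulation of the MapReduce word-count pattern.
def map_phase(records):
    for line in records:
        for word in line.split():
            yield word.lower(), 1             # emit (key, value) pairs

def reduce_phase(pairs):
    totals = defaultdict(int)
    for key, value in pairs:                  # shuffle/group step folded into the dict
        totals[key] += value
    return dict(totals)

documents = ["Big Data processing in the cloud",
             "science gateways make big data processing accessible"]
print(reduce_phase(map_phase(documents)))
```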
Abstract:
The map representation of an environment should be selected based on its intended application. For example, a geometrically accurate map describing the Euclidean space of an environment is not necessarily the best choice if only a small subset of its features is required. One possible subset is the orientations of the flat surfaces in the environment, represented by a special parameterization of normal vectors called axes. Devoid of positional information, the entries of an axis map form a non-injective relationship with the flat surfaces in the environment, which results in physically distinct flat surfaces being represented by a single axis. This drastically reduces the complexity of the map, but retains important information about the environment that can be used in meaningful applications in both two and three dimensions. This thesis presents axis mapping, which is an algorithm that accurately and automatically estimates an axis map of an environment based on sensor measurements collected by a mobile platform. Furthermore, two major applications of axis maps are developed and implemented. First, the LiDAR compass is a heading estimation algorithm that compares measurements of axes with an axis map of the environment. Pairing the LiDAR compass with simple translation measurements forms the basis for an accurate two-dimensional localization algorithm. It is shown that this algorithm eliminates the growth of heading error in both indoor and outdoor environments, resulting in accurate localization over long distances. Second, in the context of geotechnical engineering, a three-dimensional axis map is called a stereonet, which is used as a tool to examine the strength and stability of a rock face. Axis mapping provides a novel approach to create accurate stereonets safely, rapidly, and inexpensively compared to established methods. The non-injective property of axis maps is leveraged to probabilistically describe the relationships between non-sequential measurements of the rock face. The automatic estimation of stereonets was tested in three separate outdoor environments. It is shown that axis mapping can accurately estimate stereonets while improving safety, requiring significantly less time and effort, and lowering costs compared to traditional and current state-of-the-art approaches.
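As a hedged illustration of the axis idea in two dimensions (not the thesis's algorithm), the sketch below folds surface-normal angles into unsigned axes and brute-forces the heading correction that best aligns measured axes with a stored axis map; all parameters and the toy data are assumptions.

```python
import numpy as np

# In 2-D, a surface normal and its negation describe the same flat surface, so an
# "axis" can be represented by an angle folded into [0, pi).
def to_axis(angle):
    return np.mod(angle, np.pi)

def heading_offset(measured_normals, map_axes, candidates=np.linspace(0, np.pi, 1800)):
    measured_axes = to_axis(np.asarray(measured_normals))
    best, best_err = 0.0, np.inf
    for dh in candidates:                     # brute-force search over heading corrections
        rotated = to_axis(measured_axes + dh)
        # error = distance from each rotated axis to its nearest map axis (wrapped at pi)
        diff = np.abs(rotated[:, None] - np.asarray(map_axes)[None, :])
        err = np.minimum(diff, np.pi - diff).min(axis=1).sum()
        if err < best_err:
            best, best_err = dh, err
    return best

map_axes = [0.0, np.pi / 2]                               # two dominant wall orientations
noisy = np.array([0.3, 0.3 + np.pi / 2, 0.3 + np.pi]) + 0.01  # measurements rotated by ~0.3 rad
print(round(heading_offset(noisy, map_axes), 2))          # ~ pi - 0.3 (i.e. -0.3 modulo pi)
```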
Abstract:
Internal combustion (IC) engines exploit only about 30% of the chemical energy released through combustion, whereas the remaining part is rejected through the cooling system and the exhaust gas. Nowadays, a major global concern is finding sustainable solutions for better fuel economy, which in turn results in a decrease in carbon dioxide (CO2) emissions. Waste Heat Recovery (WHR) is one of the most promising techniques to increase the overall efficiency of a vehicle system, allowing the recovery of the heat rejected by the exhaust and cooling systems. In this context, Organic Rankine Cycles (ORCs) are widely recognized as a potential technology to exploit the heat rejected by engines to produce electricity. The aim of the present paper is to investigate a WHR system, designed to collect heat from both the coolant and the exhaust gas, coupled with an ORC cycle for vehicle applications. In particular, a coolant heat exchanger (CLT) allows the heat exchange between the water coolant and the ORC working fluid, whereas the exhaust gas heat is recovered by using a secondary circuit with diathermic oil. Using an in-house numerical model, a wide range of working conditions and ORC design parameters are investigated. In particular, the analyses focus on the regenerator location inside the ORC circuit. Five organic fluids, working in both subcritical and supercritical conditions, have been selected in order to identify the most suitable configuration in terms of energy and exergy efficiencies.
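As a hedged illustration of the energy and exergy efficiency figures used to compare ORC configurations, the sketch below evaluates both for placeholder operating numbers; it is not the paper's in-house model.

```python
# Sketch of first-law (energy) and second-law (exergy) efficiency figures for an ORC.
def energy_efficiency(w_net, q_in):
    return w_net / q_in

def exergy_efficiency(w_net, q_in, t_source, t_ambient):
    # Exergy of the heat input, treating the source as an isothermal reservoir (Carnot factor).
    ex_in = q_in * (1.0 - t_ambient / t_source)
    return w_net / ex_in

q_in, w_net = 50.0, 5.5                        # kW recovered heat, kW net ORC output (illustrative)
t_src, t_amb = 95 + 273.15, 25 + 273.15        # coolant-level heat source and ambient, in K
print(round(energy_efficiency(w_net, q_in), 3))                 # ~0.11
print(round(exergy_efficiency(w_net, q_in, t_src, t_amb), 3))   # ~0.58
```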
Abstract:
Human societies are reliant on the functioning of the hydrologic cycle. The atmospheric branch of this cycle, often referred to as moisture recycling in the context of land-to-land exchange, refers to water evaporating, traveling through the atmosphere, and falling out as precipitation. Similar to the surface water cycle that uses the watershed as the unit of analysis, it is also possible to consider a ‘watershed of the sky’ for the atmospheric water cycle. Thus, I explore the precipitationshed - defined as the upwind surface of the Earth that provides evaporation that later falls as precipitation in a specific place. The primary contributions of this dissertation are to (a) introduce the precipitationshed concept, (b) provide a quantitative basis for the study of the precipitationshed, and (c) demonstrate its use in the fields of hydrometeorology, land-use change, social-ecological systems, ecosystem services, and environmental governance. In Paper I, the concept of the precipitationshed is introduced and explored for the first time. The quantification of precipitationshed variability is described in Paper II, and the key finding is that the precipitationsheds for multiple regions are persistent in time and space. Moisture recycling is further described as an ecosystem service in Paper III, to integrate the concept into the existing language of environmental sustainability and management. That is, I identify regions where vegetation more strongly regulates the provision of atmospheric water, as well as the regions that more strongly benefit from this regulation. In Paper IV, the precipitationshed is further explored through the lens of urban reliance on moisture recycling. Using a novel method, I quantify the vulnerability of urban areas to social-ecological changes within their precipitationsheds. In Paper V, I argue that successful moisture recycling governance will require flexible, transboundary institutions that are capable of operating within complex social-ecological systems. I conclude that, in the future, the precipitationshed can be a key tool in addressing the complexity of social-ecological systems.
Abstract:
Green energy and green technology are among the most quoted terms in the context of modern science and technology. Technology that is close to nature is a necessity for the modern world, which is haunted by global warming and climatic alterations. Proper utilization of solar energy is one of the goals of the Green Energy Movement. The present thesis deals with work carried out in the field of nanotechnology and its possible use in various applications (employing natural dyes) such as solar cells. Unlike artificial dyes, natural dyes are readily available, easy to prepare, low in cost, non-toxic, environmentally friendly and fully biodegradable. Looking to the 21st century, the nano/micro sciences will be a chief contributor to scientific and technological developments. As nanotechnology progresses and complex nanosystems are fabricated, a growing impetus is being given to the development of multi-functional and size-dependent materials. The control of morphology, from the nano to the micrometer scale, associated with the incorporation of several functionalities, can yield entirely new smart hybrid materials. These are a special class of materials which provide a new method for improving the environmental stability of the material, offer interesting optical properties, and open up a wealth of opportunities for applications in the field of photonics. Zinc oxide (ZnO) is one such multipurpose material that has been explored for applications in sensing, environmental monitoring, bio-medical systems and communications technology. Understanding the growth mechanism and tailoring the morphology is essential for the use of ZnO crystals as nano/micro electromechanical systems and also as building blocks of other nanosystems.