977 results for binary data
Abstract:
The research presented in this dissertation comprises several parts that jointly attain the goal of Semantic Distributed Database Management with Applications to Internet Dissemination of Environmental Data. Part of the research into more effective and efficient data management has been pursued through enhancements to the Semantic Binary Object-Oriented database (Sem-ODB), such as more effective load-balancing techniques for the database engine and the use of Sem-ODB as a tool for integrating structured and unstructured heterogeneous data sources. Another part of the research in data management has pursued methods for optimizing queries in distributed databases through the intelligent use of network bandwidth; this has applications in networks that provide varying levels of Quality of Service or throughput. The application of the Semantic Binary database model as a tool for relational database modeling has also been pursued. This has resulted in database applications that are used by researchers at the Everglades National Park to store environmental data and remotely sensed imagery. The areas of research described above have contributed to the creation of TerraFly, which provides for the dissemination of geospatial data via the Internet. TerraFly research presented herein ranges from the development of TerraFly's back-end database and interfaces, through the features that are presented to the public (such as the ability to provide autopilot scripts and on-demand data about a point), to applications of TerraFly in the areas of hazard mitigation, recreation, and aviation.
Abstract:
Over the past five years, XML has been embraced by both the research and industrial communities due to its promising prospects as a new data representation and exchange format on the Internet. The widespread popularity of XML creates an increasing need to store XML data in persistent storage systems and to enable sophisticated XML queries over the data. The currently available approaches to addressing the XML storage and retrieval issue have the limitations of either not being mature enough (e.g. native approaches) or causing inflexibility, considerable fragmentation, and excessive join operations (e.g. non-native approaches such as the relational database approach). In this dissertation, I studied the issue of storing and retrieving XML data using the Semantic Binary Object-Oriented Database System (Sem-ODB) to leverage the advanced Sem-ODB technology with the emerging XML data model. First, a meta-schema based approach was implemented to address the data model mismatch issue that is inherent in the non-native approaches. The meta-schema based approach captures the meta-data of both Document Type Definitions (DTDs) and Sem-ODB Semantic Schemas, and thus enables a dynamic and flexible mapping scheme. Second, a formal framework was presented to ensure precise and concise mappings. In this framework, both the schemas and the conversions between them are formally defined and described. Third, after the major features of an XML query language, XQuery, were analyzed, a high-level XQuery to Semantic SQL (Sem-SQL) query translation scheme was described. This translation scheme takes advantage of the navigation-oriented query paradigm of Sem-SQL and thus avoids the excessive join problem of relational approaches. Finally, the modeling capability of the Semantic Binary Object-Oriented Data Model (Sem-ODM) was explored from the perspective of conceptually modeling an XML Schema using a Semantic Schema. It was revealed that the advanced features of the Sem-ODB, such as multi-valued attributes, surrogates, and the navigation-oriented query paradigm, among others, are indeed beneficial in coping with the XML storage and retrieval issue using a non-XML approach. Furthermore, extensions to the Sem-ODB to make it work more effectively with XML data were also proposed.
Abstract:
Experiments were conducted to show the effects of thermal and geometric boundary conditions on the liquid pool of a binary alloy system undergoing phase change (solidification). Transparent analogue solutions were selected for study, and experimental apparatus were designed and built. Thermal distribution and concentration data were collected and analysed for the melt pool of various selected geometries and boundary conditions of the systems under study. The data indicate that characteristic flows develop for both hypereutectic and hypoeutectic concentration levels and that the development of macrosegregation and microsegregation defects in continuous-casting materials can be minimised by the adjustment of the process variables.
Abstract:
Communication has become an essential function in our civilization. With the increasing demand for communication channels, it is now necessary to find ways to optimize the use of their bandwidth. One way to achieve this is by transforming the information before it is transmitted. This transformation can be performed by several techniques, one of the newest of which is the use of wavelets. Wavelet transformation refers to the act of breaking down a signal into components called details and trends using small waveforms that have a zero average in the time domain. After this transformation, the data can be compressed by discarding the details and transmitting only the trends. At the receiving end, the trends are used to reconstruct the image. In this work, the wavelet used for the transformation of an image is selected from a library of available bases. The accuracy of the reconstruction, after the details are discarded, depends on the wavelets chosen from the wavelet basis library. The system developed in this thesis takes a 2-D image and decomposes it using a wavelet bank. A digital signal processor is used to achieve near real-time performance in this transformation task. A contribution of this thesis project is the development of a DSP-based test bed for the future development of new real-time wavelet transformation algorithms.
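To make the decompose-discard-reconstruct idea concrete, here is a minimal Python sketch of a one-level 2-D Haar transform (an illustrative stand-in for the thesis's DSP wavelet bank; the toy image and the averaging normalization are assumptions, not details from the thesis):

```python
import numpy as np

def haar2d_forward(img):
    """One-level 2-D Haar transform: split an image into a trend
    (coarse approximation) and three detail subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row trends
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row details
    ll = (a[0::2, :] + a[1::2, :]) / 2.0      # trend subband
    lh = (a[0::2, :] - a[1::2, :]) / 2.0      # vertical details
    hl = (d[0::2, :] + d[1::2, :]) / 2.0      # horizontal details
    hh = (d[0::2, :] - d[1::2, :]) / 2.0      # diagonal details
    return ll, (lh, hl, hh)

def reconstruct_from_trend(ll):
    """Inverse transform with the details discarded (set to zero):
    each trend value is simply replicated over a 2x2 block."""
    return np.kron(ll, np.ones((2, 2)))

img = np.arange(64, dtype=float).reshape(8, 8)   # toy "image"
ll, _details = haar2d_forward(img)
approx = reconstruct_from_trend(ll)              # compressed approximation
```

With the details zeroed, the inverse transform degenerates to replicating each trend value over a 2x2 block, which is the coarse approximation that would be rebuilt at the receiving end.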
Abstract:
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order or simultaneity of their presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or jointly for all three tasks (for the common cases in which two or even all three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for the estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and the source code of the MATLAB and R routines are available as Supplementary Files.
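As an illustration of the generative model behind these routines (not the routines themselves, which are in MATLAB and R), the following Python sketch simulates the independent-channels model: exponential arrival latencies per channel and a trichotomous decision based on a resolution threshold. All parameter values are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_judgments(soa, tau1=30.0, tau2=30.0, delta=50.0, n=10_000):
    """Independent-channels model: each stimulus's arrival latency is
    exponential (means tau1, tau2 in ms); the judgment is trichotomous,
    based on whether the arrival-time difference exceeds the resolution
    threshold delta. All parameter values are illustrative placeholders."""
    t1 = rng.exponential(tau1, n)            # arrival of stimulus 1
    t2 = soa + rng.exponential(tau2, n)      # stimulus 2, delayed by the SOA
    d = t2 - t1
    return {"first": np.mean(d > delta),     # stimulus 1 judged first
            "simultaneous": np.mean(np.abs(d) <= delta),
            "second": np.mean(d < -delta)}   # stimulus 2 judged first

for soa in (-100, 0, 100):                   # SOA in ms
    print(soa, simulate_judgments(soa))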
Abstract:
Der Müller und die fünf Räuber, Überfall
Abstract:
Tests for dependence of continuous, discrete and mixed continuous-discrete variables are ubiquitous in science. The goal of this paper is to derive Bayesian alternatives to frequentist null-hypothesis significance tests for dependence. In particular, we present three Bayesian tests for dependence of binary, continuous and mixed variables. These tests are nonparametric and based on the Dirichlet Process, which allows us to use the same prior model for all of them. Therefore, the tests are "consistent" with one another, in the sense that the probabilities that variables are dependent computed with these tests are commensurable across the different types of variables being tested. By means of simulations with artificial data, we show the effectiveness of the new tests.
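For intuition on the binary-variable case, here is a simplified Python sketch using a finite Dirichlet posterior over the 2x2 cell probabilities (a crude finite analogue of the paper's Dirichlet Process construction, not its actual method); the counts, prior, and dependence threshold are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented counts for two binary variables: rows X in {0,1}, cols Y in {0,1}
counts = np.array([[30, 10],
                   [12, 28]])

# Posterior over the 4 cell probabilities is Dirichlet(counts + prior)
alpha_prior = 1.0                      # symmetric prior strength per cell
samples = rng.dirichlet(counts.ravel() + alpha_prior, size=20_000)
p = samples.reshape(-1, 2, 2)

# Dependence measure: joint P(X=1, Y=1) minus product of the marginals
delta = p[:, 1, 1] - p[:, 1, :].sum(axis=1) * p[:, :, 1].sum(axis=1)

# Posterior probability of a practically relevant dependence
print("P(|delta| > 0.05 | data) =", np.mean(np.abs(delta) > 0.05))
```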
Abstract:
The objective of this study was to determine if a high-Tg polymer (Eudragit® S100) could be used to stabilize amorphous domains of polyethylene oxide (PEO) and hence improve the stability of binary polymer systems containing celecoxib (CX). We propose a novel method of stabilizing the amorphous PEO solid dispersion through the inclusion of a miscible, high-Tg polymer, namely Eudragit S100, that can form strong inter-polymer interactions. The effects of inter-polymer interactions and miscibility between PEO and Eudragit S100 are considered. Polymer blends were first manufactured via hot-melt extrusion at different PEO/S100 ratios (70/30, 50/50, and 30/70 wt/wt). Differential scanning calorimetry and dynamic mechanical thermal analysis data suggested good miscibility between PEO and S100, particularly at the 50/50 ratio. To further evaluate the system, CX/PEO/S100 ternary mixtures were extruded. Immediately after hot-melt extrusion, a single Tg that increased with increasing S100 content (anti-plasticization) was observed in all ternary systems. The absence of crystalline Bragg peaks in powder X-ray diffractometry also suggested amorphization of CX. Upon storage (40°C/75% relative humidity), the formulation containing PEO/S100 at a ratio of 50:50 was shown to be the most stable. Fourier transform infrared studies confirmed the presence of hydrogen bonding between Eudragit S100 and PEO, suggesting this was the principal reason for the stabilization of the amorphous CX/PEO solid dispersion system.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Increasing the size of the training data in many computer vision tasks has been shown to be very effective. Using large-scale image datasets (e.g. ImageNet) with simple learning techniques (e.g. linear classifiers), one can achieve state-of-the-art performance in object recognition compared to sophisticated learning techniques on smaller image sets. Semantic search on visual data has become very popular. There are billions of images on the internet and the number is increasing every day. Dealing with large-scale image sets is demanding in itself: they take so much memory that it is impossible to process the images with complex algorithms on single-CPU machines. Finding an efficient image representation can be a key to attacking this problem. But a representation being efficient is not enough for image understanding; it should also be comprehensive and rich in carrying semantic information. In this proposal we develop an approach to computing binary codes that provide a rich and efficient image representation. We demonstrate several tasks in which binary features can be very effective. We show how binary features can speed up large-scale image classification. We present learning techniques to learn the binary features from supervised image sets (with different types of semantic supervision: class labels, textual descriptions). We propose several problems that are important in finding and using efficient image representations.
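To illustrate why binary codes make large-scale matching cheap, here is a small Python sketch of Hamming-distance search over packed binary codes (the code length and database size are arbitrary, and real systems would learn the bits rather than draw them at random):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: descriptors already thresholded into 256-bit codes
n_db, n_bits = 100_000, 256
db_bits = rng.integers(0, 2, size=(n_db, n_bits), dtype=np.uint8)
query_bits = rng.integers(0, 2, size=n_bits, dtype=np.uint8)

# Pack each code into 32 bytes instead of 256 floats: a large memory saving
db = np.packbits(db_bits, axis=1)      # shape (n_db, 32)
q = np.packbits(query_bits)            # shape (32,)

# Hamming distance = popcount(a XOR b), done for the whole database at once
xor = np.bitwise_xor(db, q)
dists = np.unpackbits(xor, axis=1).sum(axis=1)

nearest = np.argsort(dists)[:5]        # indices of the 5 closest codes
```

Packing 256 bits into 32 bytes, and replacing floating-point distance computations with XOR plus popcount, is what gives binary features their speed and memory advantage.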
Abstract:
Background Many acute stroke trials have given neutral results. Sub-optimal statistical analyses may be failing to detect efficacy. Methods which take account of the ordinal nature of functional outcome data are more efficient. We compare sample size calculations for dichotomous and ordinal outcomes for use in stroke trials. Methods Data from stroke trials studying the effects of interventions known to positively or negatively alter functional outcome – Rankin Scale and Barthel Index – were assessed. Sample size was calculated using comparisons of proportions, means, medians (according to Payne), and ordinal data (according to Whitehead). The sample sizes gained from each method were compared using Friedman two-way ANOVA. Results Fifty-five comparisons (54 173 patients) of active vs. control treatment were assessed. Estimated sample sizes differed significantly depending on the method of calculation (P < 0.00001). The ordering of the methods showed that the ordinal method of Whitehead and the comparison of means produced significantly lower sample sizes than the other methods. The ordinal data method on average reduced sample size by 28% (inter-quartile range 14–53%) compared with the comparison of proportions; however, a 22% increase in sample size was seen with the ordinal method for trials assessing thrombolysis. The comparison of medians method of Payne gave the largest sample sizes. Conclusions Choosing an ordinal rather than binary method of analysis allows most trials to be, on average, smaller by approximately 28% for a given statistical power. Smaller trial sample sizes may help by reducing time to completion, complexity, and financial expense. However, ordinal methods may not be optimal for interventions which both improve functional outcome and increase mortality.
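As a sketch of the calculations being compared, the Python functions below compute per-group sample sizes for a binary comparison of proportions and for Whitehead's ordinal (proportional-odds) method, using the commonly quoted forms of the formulas; the constant in the ordinal formula and all example inputs should be checked against Whitehead (1993) rather than read as values used in this study:

```python
from math import log
from scipy.stats import norm

def n_binary(p1, p2, alpha=0.05, power=0.9):
    """Per-group n for comparing two proportions (normal approximation)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return (za + zb)**2 * (p1*(1 - p1) + p2*(1 - p2)) / (p1 - p2)**2

def n_ordinal_whitehead(pbar, odds_ratio, alpha=0.05, power=0.9):
    """Per-group n under proportional odds (after Whitehead 1993,
    equal allocation). pbar holds the mean category proportions
    across both arms and must sum to 1."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return (6 * (za + zb)**2
            / (log(odds_ratio)**2 * (1 - sum(p**3 for p in pbar))))

# Invented example: 3-level outcome (dead / dependent / independent)
print(n_binary(0.50, 0.40))                          # dichotomized analysis
print(n_ordinal_whitehead([0.2, 0.3, 0.5], 1.5))     # ordinal analysis
```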
Abstract:
Background and Purpose—Vascular prevention trials mostly count “yes/no” (binary) outcome events, eg, stroke/no stroke. Analysis of ordered categorical vascular events (eg, fatal stroke/nonfatal stroke/no stroke) is clinically relevant and could be more powerful statistically. Although this is not a novel idea in the statistical community, ordinal outcomes have not been applied to stroke prevention trials in the past. Methods—Summary data on stroke, myocardial infarction, combined vascular events, and bleeding were obtained by treatment group from published vascular prevention trials. Data were analyzed using 10 statistical approaches which allow comparison of 2 ordinal or binary treatment groups. The results for each statistical test for each trial were then compared using Friedman 2-way analysis of variance with multiple comparison procedures. Results—Across 85 trials (335 305 subjects) the test results differed substantially so that approaches which used the ordinal nature of stroke events (fatal/nonfatal/no stroke) were more efficient than those which combined the data to form 2 groups (P < 0.0001). The most efficient tests were bootstrapping the difference in mean rank, Mann–Whitney U test, and ordinal logistic regression; 4- and 5-level data were more efficient still. Similar findings were obtained for myocardial infarction, combined vascular outcomes, and bleeding. The findings were consistent across different types, designs and sizes of trial, and for the different types of intervention. Conclusions—When analyzing vascular events from prevention trials, statistical tests which use ordered categorical data are more efficient and are more likely to yield reliable results than binary tests. This approach gives additional information on treatment effects by severity of event and will allow trials to be smaller. (Stroke. 2008;39:000-000.)
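A small simulation along these lines can reproduce the qualitative finding: the same hypothetical trial is analysed with an ordinal test (Mann–Whitney U) and with a dichotomized chi-square test. The event rates, trial size, and replication counts below are invented for illustration, not taken from the 85 trials:

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(1)

# Hypothetical 3-level ordinal outcome: 0 = no stroke, 1 = nonfatal, 2 = fatal
p_ctrl, p_treat = [0.85, 0.10, 0.05], [0.90, 0.07, 0.03]
n_per_arm, n_sims = 1000, 500

def one_trial():
    c = rng.choice(3, size=n_per_arm, p=p_ctrl)
    t = rng.choice(3, size=n_per_arm, p=p_treat)
    p_ord = mannwhitneyu(t, c).pvalue          # uses the ordinal information
    table = [[(c == 0).sum(), (c > 0).sum()],  # collapse to none / any stroke
             [(t == 0).sum(), (t > 0).sum()]]
    p_bin = chi2_contingency(table)[1]
    return p_ord < 0.05, p_bin < 0.05

hits = np.array([one_trial() for _ in range(n_sims)])
print("power, ordinal test:", hits[:, 0].mean())
print("power, binary test: ", hits[:, 1].mean())
```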
Abstract:
Isobaric vapor-liquid equilibria of binary mixtures of isopropyl acetate plus an alkanol (1-propanol, 2-propanol, 1-butanol, or 2-butanol) were measured at 101.32 kPa using a dynamic recirculating still. Azeotropic behavior was observed only in the mixtures of isopropyl acetate + 2-propanol and isopropyl acetate + 1-propanol. The application of four thermodynamic consistency tests (the Herington test, the Van Ness test, the infinite dilution test, and the pure component test) showed the high quality of the experimental data. Finally, both the NRTL and UNIQUAC activity coefficient models were successfully applied in the correlation of the measured data, with average absolute deviations in vapor-phase composition and temperature of 0.01 (mole fraction) and 0.16 K, respectively.
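For reference, the binary form of the NRTL activity-coefficient model used in such correlations can be written in a few lines; the interaction parameters in this Python sketch are invented placeholders, not the values fitted in this work:

```python
from math import exp

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) for a binary mixture
    from the NRTL model with non-randomness parameter alpha."""
    x2 = 1.0 - x1
    G12, G21 = exp(-alpha * tau12), exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return exp(ln_g1), exp(ln_g2)

# Invented parameters for illustration only (not the fitted values)
g1, g2 = nrtl_binary(x1=0.4, tau12=0.5, tau21=0.8)
print(g1, g2)
```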
Abstract:
Background and Aim: Maternal morbidity and mortality statistics remain unacceptably high in Malawi. Prominent among the risk factors in the country is anaemia in pregnancy, which generally results from nutritional inadequacy (particularly iron deficiency) and malaria, among other factors. This warrants concerted efforts to increase iron intake among reproductive-age women. This study, among women in Malawi, examined factors determining intake of supplemental iron for at least 90 days during pregnancy. Methods: A weighted sample of 10,750 women (46.7%), drawn from the 23,020 respondents of the 2010 Malawi Demographic and Health Survey (MDHS), was utilized for the study. Univariate, bivariate, and regression techniques were employed. While univariate analysis revealed the percent distributions of all variables, bivariate analysis was used to examine the relationships between individual independent variables and adherence to iron supplementation. Chi-square tests of independence were conducted for categorical variables, with the significance level set at P < 0.05. Two binary logistic regression models were used to evaluate the net effect of the independent variables on iron supplementation adherence. Results: Thirty-seven percent of the women adhered to the iron supplementation recommendations during pregnancy. Multivariate analysis indicated that younger age, urban residence, higher education, higher wealth status, and attending antenatal care during the first trimester were significantly associated with increased odds of taking iron supplementation for 90 days or more during pregnancy (P < 0.01). Conclusions: The results indicate low adherence to the World Health Organization’s iron supplementation recommendations among pregnant women in Malawi, which contributes to negative health outcomes for both mothers and children. Education interventions that target populations with low rates of iron supplement intake, including campaigns to increase the number of women who attend antenatal care clinics in the first trimester, are recommended to increase adherence to iron supplementation.
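For readers unfamiliar with the modelling step, a minimal Python sketch of a binary logistic regression of adherence on a few covariates follows; the data frame and variable names are fabricated placeholders, not MDHS variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Fabricated placeholder data; these are not MDHS variables
df = pd.DataFrame({
    "adhered":   rng.integers(0, 2, n),   # took iron >= 90 days (0/1)
    "urban":     rng.integers(0, 2, n),   # urban residence (0/1)
    "education": rng.integers(0, 4, n),   # education level (ordinal code)
    "anc_tri1":  rng.integers(0, 2, n),   # ANC visit in first trimester (0/1)
})

# Binary logistic regression of adherence on the covariates
X = sm.add_constant(df[["urban", "education", "anc_tri1"]])
fit = sm.Logit(df["adhered"], X).fit(disp=False)
print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
```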
Abstract:
The most recent submarine eruption observed offshore the Azores archipelago occurred between 1998 and 2001 along the submarine Serreta ridge (SSR), ~4-5 nautical miles WNW of Terceira Island. This submarine eruption delivered abundant basaltic lava balloons floating at the sea surface and significantly changed the bathymetry around the eruption area. Our work combines bathymetry, volcanic facies cartography, petrography, rock magnetism and geochemistry in order to (1) track the possible vent source at the seabed, (2) better constrain the Azores magma source(s) sampled through the Serreta submarine volcanic event, and (3) interpret the data within the small-scale mantle-source heterogeneity framework that has been demonstrated for the Azores archipelago. Lava balloons sampled at the sea surface display a radiogenic signature, which is also correlated with relatively primitive (low) 4He/3He isotopic ratios. Conversely, SSR lavas are characterized by significantly less radiogenic 87Sr/86Sr, 206Pb/204Pb and 208Pb/204Pb ratios than the lava balloons and the onshore lavas from Terceira Island. SSR lavas are primitive but enriched in incompatible trace elements. The apparent decoupling between the enriched incompatible trace element abundances and the depleted radiogenic isotope ratios is best explained by binary mixing of a depleted MORB source and a HIMU-type component into magma batches that evolved by similar shallower processes during their ascent to the surface. The collected data suggest that the freshest samples collected on the SSR may correspond to volcanic products of an unnoticed eruption more recent than the 1998-2001 episode.
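The binary-mixing interpretation can be made quantitative with the standard two-component mixing equation, in which isotope ratios mix weighted by element concentration. The Python sketch below uses rough illustrative end-member values for a depleted MORB-type and a HIMU-type source, not measurements from this study:

```python
import numpy as np

# Rough illustrative end-members (Sr in ppm, 87Sr/86Sr); not measured values
dmm = {"c": 10.0, "r": 0.7026}    # depleted MORB-type source
himu = {"c": 30.0, "r": 0.7029}   # HIMU-type component

def mix_ratio(f_himu):
    """Isotope ratio of a binary mix with mass fraction f_himu of the
    HIMU component; ratios mix weighted by element concentration."""
    c_mix = f_himu * himu["c"] + (1 - f_himu) * dmm["c"]
    return (f_himu * himu["c"] * himu["r"]
            + (1 - f_himu) * dmm["c"] * dmm["r"]) / c_mix

for f in np.linspace(0.0, 1.0, 5):
    print(f"f_HIMU = {f:.2f} -> 87Sr/86Sr = {mix_ratio(f):.5f}")
```

Because the end-member concentrations differ, the mixing trajectory in ratio-ratio space is a hyperbola rather than a straight line, which is why concentration-weighted mixing can decouple trace-element enrichment from isotopic signatures.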